multilevel_bayesquad_from_data

probnum.quad.multilevel_bayesquad_from_data(nodes, fun_diff_evals, kernels=None, measure=None, domain=None, options=None)

Infer the value of an integral from given sets of nodes and function evaluations using a multilevel method.

In multilevel Bayesian quadrature, the integral \(\int_\Omega f(x) d \mu(x)\) is (approximately) decomposed as a telescoping sum over \(L+1\) levels:

\[\int_\Omega f(x) d \mu(x) \approx \int_\Omega f_0(x) d \mu(x) + \sum_{l=1}^L \int_\Omega [f_l(x) - f_{l-1}(x)] d \mu(x),\]

where \(f_l\) is an increasingly accurate but also increasingly expensive approximation to \(f\). It is not necessary that the highest level approximation \(f_L\) be equal to \(f\).
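
Note that the right-hand side of the decomposition telescopes exactly:

\[\int_\Omega f_0(x) d \mu(x) + \sum_{l=1}^L \int_\Omega [f_l(x) - f_{l-1}(x)] d \mu(x) = \int_\Omega f_L(x) d \mu(x),\]

so the decomposition itself introduces no additional error; any residual bias stems solely from the discrepancy between \(f_L\) and \(f\).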

Bayesian quadrature is subsequently applied to independently infer each of the \(L+1\) integrals and the outputs are summed to infer \(\int_\Omega f(x) d \mu(x)\). [1]
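
The per-level inference and subsequent summation can be pictured with the following minimal sketch. It is not necessarily the library's exact implementation; it assumes probnum's single-level routine probnum.quad.bayesquad_from_data together with the nodes, fun_diff_evals and domain arguments described below, and it omits per-level kernel and option handling.

    # Sketch: run single-level Bayesian quadrature independently on each level
    # and sum the resulting independent Gaussian beliefs (means add, and so do
    # variances of independent random variables).
    from probnum.quad import bayesquad_from_data

    def multilevel_bq_sketch(nodes, fun_diff_evals, domain):
        mean, var = 0.0, 0.0
        for nodes_l, evals_l in zip(nodes, fun_diff_evals):
            F_l, _ = bayesquad_from_data(nodes=nodes_l, fun_evals=evals_l, domain=domain)
            mean += F_l.mean
            var += F_l.var
        return mean, var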

Parameters:
  • nodes (Tuple[ndarray, ...]) – Tuple of length \(L+1\) containing, for each level, the locations at which the function evaluations in fun_diff_evals are available. Each element must be a shape=(n_eval, input_dim) np.ndarray (see the sketch following this parameter list). If a tuple containing only one element is provided, it is inferred that the same nodes nodes[0] are used on every level.

  • fun_diff_evals (Tuple[ndarray, ...]) – Tuple of length \(L+1\) containing the evaluations of \(f_l - f_{l-1}\) for each level at the nodes provided in nodes. Each element must be a shape=(n_eval,) np.ndarray. The zeroth element contains the evaluations of \(f_0\).

  • kernels (Tuple[CovarianceFunction, ...] | None) – Tuple of length \(L+1\) containing the kernels used for the GP model at each level. See Notes for further details. Defaults to the ExpQuad kernel for each level.

  • measure (IntegrationMeasure | None) – The integration measure. Defaults to the Lebesgue measure.

  • domain (DomainLike | None) – The integration domain. Contains lower and upper bound as scalar or np.ndarray. Ignored if measure is given.

  • options (dict | None) –

    A dictionary with the following optional solver settings:

    scale_estimation : Optional[str]
        Estimation method used to compute the kernel scale parameter, applied independently on each level. Defaults to 'mle'. Options are:

            'mle' : maximum likelihood estimation

    jitter : Optional[FloatLike]
        Non-negative jitter to numerically stabilise the kernel matrix inversion. The same jitter is used on each level. Defaults to 1e-8.
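
For concreteness, a minimal sketch of well-shaped inputs follows; the level count, node counts, and random values below are purely illustrative, and the option values simply restate the documented defaults.

    # Illustrative input shapes: L+1 = 3 levels, 5 nodes per level, input_dim = 1.
    import numpy as np

    n_level, n_eval, input_dim = 3, 5, 1
    rng = np.random.default_rng(0)
    nodes = tuple(rng.random((n_eval, input_dim)) for _ in range(n_level))  # each shape (n_eval, input_dim)
    fun_diff_evals = tuple(rng.random(n_eval) for _ in range(n_level))      # each shape (n_eval,)
    options = {"scale_estimation": "mle", "jitter": 1e-8}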

Returns:

  • integral – The integral belief subject to the provided measure or domain.

  • infos – Information on the performance of the method for each level.

Raises:

ValueError – If nodes, fun_diff_evals or kernels have different lengths.

Warns:

UserWarning – When domain is given but not used.

Return type:

Tuple[Normal, Tuple[BQIterInfo, ...]]

Notes

The tuple of kernels provided by the kernels parameter must contain distinct kernel instances, i.e., kernels[i] is kernels[j] must return False for any \(i\neq j\).
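
For illustration, the identity requirement can be checked with plain Python objects; make_kernel below is a placeholder standing in for your actual covariance function constructor.

    # `make_kernel` is a placeholder; plain `object` suffices to illustrate
    # the identity requirement on the kernels tuple.
    n_level = 3
    make_kernel = object
    bad = (make_kernel(),) * n_level                     # one instance reused on every level
    good = tuple(make_kernel() for _ in range(n_level))  # a fresh instance per level
    assert any(bad[i] is bad[j] for i in range(n_level) for j in range(i))        # invalid as `kernels`
    assert all(good[i] is not good[j] for i in range(n_level) for j in range(i))  # valid as `kernels`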

Warning

Currently the method does not support tuning of the kernel parameters other than the global kernel scale. Hence, the method may perform poorly unless the kernel parameters are set to appropriate values by the user.

References

Examples

>>> import numpy as np
>>> from probnum.quad import multilevel_bayesquad_from_data
>>> input_dim = 1
>>> domain = (0, 1)
>>> n_level = 6
>>> def fun(x, l):
...     return x.reshape(-1, ) / (l + 1.0)
>>> nodes = ()
>>> fun_diff_evals = ()
>>> for l in range(n_level):
...     n_l = 2*l + 1
...     nodes += (np.reshape(np.linspace(0, 1, n_l), (n_l, input_dim)),)
...     fun_diff_evals += (np.reshape(fun(nodes[l], l), (n_l,)),)
>>> F, infos = multilevel_bayesquad_from_data(nodes=nodes,
...                                           fun_diff_evals=fun_diff_evals,
...                                           domain=domain)
>>> print(np.round(F.mean, 4))
0.7252
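
The first output F is the Gaussian belief over the integral (its standard deviation F.std quantifies the remaining uncertainty), and the second output contains one information object per level:

>>> print(len(infos))
6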