# probsolve_ivp

probnum.diffeq.probsolve_ivp(ivp, method='ekf0', which_prior='ibm1', tol=None, step=None, firststep=None, precond_step=None, **kwargs)[source]

Solve initial value problem with Gaussian filtering and smoothing.

Numerically computes a Gauss-Markov process which solves the initial value problem (IVP) based on a system of first-order ordinary differential equations (ODEs)

$\dot x(t) = f(t, x(t)), \quad x(t_0) = x_0, \quad t \in [t_0, T]$

by regarding it as a (nonlinear) Gaussian filtering (and smoothing) problem. For some configurations it recovers certain multistep methods. Convergence rates of filtering and smoothing are comparable to those of methods of Runge-Kutta type.

This function turns a prior string into an ODEPrior and a method string into a filter/smoother of class GaussFiltSmooth, creates a GaussianIVPFilter object, and calls its solve() method. For advanced usage we recommend carrying out these steps manually, which enables fine-grained tuning of the algorithm.

This function supports the methods: extended Kalman filtering based on a zero-th order Taylor approximation (EKF0), extended Kalman filtering (EKF1), unscented Kalman filtering (UKF), extended Kalman smoothing based on a zero-th order Taylor approximation (EKS0), extended Kalman smoothing (EKS1), and unscented Kalman smoothing (UKS).

Parameters:
• ivp (IVP) – Initial value problem to be solved.
• step (float) – Step size $$h$$ of the solver. This defines the discretisation mesh: each proposed step equals $$h$$ and all proposed steps are accepted. Exactly one of step and tol may be set.
• tol (float) – Tolerance $$\varepsilon$$ of the adaptive step-size scheme. We implement the scheme proposed by Schober et al., accepting a step if both the absolute and the relative error estimate are smaller than the tolerance, $$\max\{e, e / |y|\} \leq \varepsilon$$. Exactly one of step and tol may be set.
• which_prior (str, optional) –

Which prior is to be used. Default is IBM(1); further support for IBM($$q$$), IOUP($$q$$), and Matern($$q+1/2$$) with $$q\in\{1, 2, 3, 4\}$$ is provided. The available options are:

| Prior | Strings |
| --- | --- |
| IBM($$q$$) | 'ibm1', 'ibm2', 'ibm3', 'ibm4' |
| IOUP($$q$$) | 'ioup1', 'ioup2', 'ioup3', 'ioup4' |
| Matern($$q+1/2$$) | 'matern32', 'matern52', 'matern72', 'matern92' |

The type of prior relates to prior assumptions about the derivative of the solution. The IBM($$q$$) prior leads to a $$q$$-th order method that is recommended if little to no prior information about the solution is available. On the other hand, if the $$q$$-th derivative is expected to regress to zero, an IOUP($$q$$) prior might be suitable.

• method (str, optional) –

Which method is to be used. Default is ekf0, the method proposed by Schober et al. The available options are:

| Method | Strings |
| --- | --- |
| Extended Kalman filtering/smoothing (0th order) | 'ekf0', 'eks0' |
| Extended Kalman filtering/smoothing (1st order) | 'ekf1', 'eks1' |
| Unscented Kalman filtering/smoothing | 'ukf', 'uks' |

First-order extended Kalman filtering and smoothing methods require the Jacobian of the RHS vector field of the IVP. The uncertainty estimates returned by EKF1/EKS1 and UKF/UKS appear to be more reliable than those of EKF0/EKS0; the latter, however, is more stable when it comes to very small steps.

• firststep (float, optional) – First suggested step $$h_0$$ for the adaptive step-size scheme. Default is None, which lets the solver start with the suggestion $$h_0 = T - t_0$$. For low accuracies it might be more efficient to start out with a smaller $$h_0$$ so that the first acceptance occurs earlier.
• precond_step (float, optional) – Expected average step size, used for preconditioning; see ODEPrior for details. Default is precond_step=1.0, which amounts to no preconditioning. If a constant step size is used, precond_step is overwritten with the actual step size to provide optimal preconditioning.
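The tol-based acceptance rule above can be sketched in plain Python. This is an illustration of the criterion $$\max\{e, e / |y|\} \leq \varepsilon$$ only; the helper name accept_step is ours and not part of the probnum API.

```python
def accept_step(error_estimate, y_magnitude, tol):
    """Step-acceptance rule: accept iff max{e, e / |y|} <= tol.

    `error_estimate` (e) is the absolute local error estimate and
    `y_magnitude` (|y|) the magnitude of the current solution value.
    Illustrative helper, not part of the probnum API.
    """
    return max(error_estimate, error_estimate / y_magnitude) <= tol

# A step with small absolute and relative error is accepted ...
print(accept_step(1e-4, 2.0, tol=1e-3))   # -> True
# ... but the same absolute error is rejected when |y| is tiny,
# because the relative error e / |y| exceeds the tolerance.
print(accept_step(1e-4, 0.05, tol=1e-3))  # -> False
```

Note how the criterion enforces both the absolute error e and the relative error e / |y| simultaneously, so small solution magnitudes tighten the effective absolute tolerance.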
Returns:

solution – Solution of the ODE problem.

Contains fields:

t : np.ndarray, shape=(N,)

Mesh used by the solver to compute the solution. It includes the initial time $$t_0$$ but not necessarily the final time $$T$$.

y : list of RandomVariable, length=N

Discrete-time solution at times $$t_1, ..., t_N$$, as a list of random variables. The means and covariances can be accessed with solution.y.mean() and solution.y.cov(), as in the examples below.

Return type:

ODESolution

See Also

GaussianIVPFilter()
Solve IVPs with Gaussian filtering and smoothing.
ODESolution()
Solution of ODE problems.

References

Schober, M., Särkkä, S., and Hennig, P. A probabilistic model for the numerical solution of initial value problems. Statistics and Computing, 2019.
Kersting, H., Sullivan, T. J., and Hennig, P. Convergence rates of Gaussian ODE filters. 2019.
Tronarp, F., Kersting, H., Särkkä, S., and Hennig, P. Probabilistic solutions to ordinary differential equations as non-linear Bayesian filtering: a new perspective. Statistics and Computing, 2019.
Tronarp, F., Särkkä, S., and Hennig, P. Bayesian ODE solvers: the maximum a posteriori estimate. 2019.

Examples

>>> from probnum.diffeq import logistic, probsolve_ivp
>>> from probnum import random_variables as rvs
>>> initrv = rvs.Dirac(0.15)
>>> ivp = logistic(timespan=[0., 1.5], initrv=initrv, params=(4, 1))
>>> solution = probsolve_ivp(ivp, method="ekf0", step=0.1)
>>> print(solution.y.mean())
[[0.15      ]
[0.2076198 ]
[0.27932997]
[0.3649165 ]
[0.46054129]
[0.55945475]
[0.65374523]
[0.73686744]
[0.8053776 ]
[0.85895587]
[0.89928283]
[0.92882899]
[0.95007559]
[0.96515825]
[0.97577054]
[0.9831919 ]]
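First-order methods such as 'ekf1' and 'eks1' rely on the Jacobian of the IVP's right-hand side (supplied internally by the logistic helper). As a sketch, assuming params=(4, 1) means rate r = 4 and carrying capacity K = 1, so that f(t, x) = r x (1 - x / K), the analytic Jacobian can be checked against a finite difference:

```python
def logistic_rhs(t, x, r=4.0, k=1.0):
    # Logistic right-hand side f(t, x) = r * x * (1 - x / k),
    # assuming params=(4, 1) stands for (rate, carrying capacity).
    return r * x * (1.0 - x / k)

def logistic_jacobian(t, x, r=4.0, k=1.0):
    # Analytic Jacobian df/dx = r - 2 * r * x / k, as required by
    # first-order methods such as 'ekf1' / 'eks1'.
    return r - 2.0 * r * x / k

# A central finite difference agrees with the analytic Jacobian
# at the initial value x0 = 0.15 used in the examples.
h = 1e-6
fd = (logistic_rhs(0.0, 0.15 + h) - logistic_rhs(0.0, 0.15 - h)) / (2 * h)
print(abs(fd - logistic_jacobian(0.0, 0.15)) < 1e-6)  # -> True
```

Since f is quadratic in x, the central difference is exact up to rounding; for a general RHS, such a check is a cheap way to validate a hand-coded Jacobian before using it with 'ekf1' or 'eks1'.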

>>> initrv = rvs.Dirac(0.15)
>>> ivp = logistic(timespan=[0., 1.5], initrv=initrv, params=(4, 1))
>>> solution = probsolve_ivp(ivp, method="eks1", which_prior="ioup3", step=0.1)
>>> print(solution.y.mean())
[[0.15      ]
[0.20795795]
[0.28228416]
[0.36926   ]
[0.46646494]
[0.5658788 ]
[0.66048505]
[0.74369892]
[0.81237394]
[0.86592791]
[0.90598293]
[0.9349573 ]
[0.95544749]
[0.96968754]
[0.97947631]
[0.98614541]]
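For intuition about the priors: the IBM($$q$$) prior models the solution together with its first $$q$$ derivatives, and its discrete-time transition over a step $$h$$ is the standard integrated-Brownian-motion matrix with entries $$h^{j-i}/(j-i)!$$ on and above the diagonal. A minimal sketch of that matrix (the helper name is ours, not part of the probnum API):

```python
from math import factorial

def ibm_transition_matrix(q, h):
    """Transition matrix A(h) of the IBM(q) prior on (x, x', ..., x^(q)).

    Entry (i, j) is h**(j - i) / (j - i)! for j >= i and zero otherwise.
    Illustrative helper, not part of the probnum API.
    """
    return [
        [h ** (j - i) / factorial(j - i) if j >= i else 0.0
         for j in range(q + 1)]
        for i in range(q + 1)
    ]

# IBM(1) with step h = 0.1: the position is propagated by x + h * x',
# i.e. a Taylor step of order one.
print(ibm_transition_matrix(1, 0.1))  # -> [[1.0, 0.1], [0.0, 1.0]]
```

Higher q adds further Taylor terms (e.g. the (0, 2) entry h**2 / 2 for IBM(2)), which is why the IBM($$q$$) prior yields a $$q$$-th order method.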