probnum.linalg.bayescg(A, b, x0=None, maxiter=None, atol=1e-05, rtol=1e-05, callback=None)

Bayesian Conjugate Gradient Method.

In the setting where \(A\) is a symmetric positive-definite matrix, this solver takes prior information on the solution and outputs a posterior belief over \(x\). This code implements the method described in Cockayne et al. [1].

Note that the solution-based view of BayesCG and the matrix-based view of problinsolve() correspond [2].

Parameters

  • A (shape=(n, n)) – A symmetric positive-definite matrix (or linear operator). Only matrix-vector products \(Av\) are used internally.

  • b (shape=(n,)) – Right-hand side vector.

  • x0 (shape=(n,)) – Prior belief about the solution of the linear system.

  • maxiter (Optional[int]) – Maximum number of iterations. Defaults to \(10n\), where \(n\) is the dimension of \(A\).

  • atol (float) – Absolute residual tolerance. If \(\lVert r_i \rVert = \lVert b - Ax_i \rVert < \text{atol}\), the iteration terminates.

  • rtol (float) – Relative residual tolerance. If \(\lVert r_i \rVert < \text{rtol} \lVert b \rVert\), the iteration terminates.

  • callback (Optional[Callable]) – User-supplied function called after each iteration of the linear solver. It is called as callback(xk, sk, yk, alphak, resid, **kwargs) and can be used to extract quantities from the iteration. Depending on the function supplied, this can slow down the solver considerably.
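The stopping rule defined by atol and rtol can be illustrated with a plain (non-probabilistic) conjugate gradient loop in NumPy. This is a hedged sketch only: cg_with_tolerances is a hypothetical helper, not part of probnum, and it tracks a point estimate of the solution rather than the posterior belief that bayescg returns. It terminates as soon as \(\lVert r_i \rVert < \max(\text{atol}, \text{rtol} \lVert b \rVert)\), i.e. when either tolerance criterion above is met.

```python
import numpy as np

def cg_with_tolerances(A, b, x0=None, maxiter=None, atol=1e-5, rtol=1e-5):
    """Illustrative CG loop with the residual-based stopping rule above.

    Hypothetical helper for exposition; not part of probnum. Terminates
    when ||r_i|| < atol or ||r_i|| < rtol * ||b||, or after maxiter steps
    (defaulting to 10n, matching the documented default).
    """
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    if maxiter is None:
        maxiter = 10 * n
    r = b - A @ x              # initial residual r_0 = b - A x_0
    p = r.copy()               # initial search direction
    rs_old = r @ r
    tol = max(atol, rtol * np.linalg.norm(b))
    for _ in range(maxiter):
        if np.sqrt(rs_old) < tol:   # stopping rule from atol / rtol
            break
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # step size along p
        x += alpha * p
        r -= alpha * Ap             # updated residual
        rs_new = r @ r
        p = r + (rs_new / rs_old) * p   # new conjugate direction
        rs_old = rs_new
    return x
```

Because CG minimizes the A-norm of the error over a growing Krylov subspace, the residual norm typically drops below the tolerance in far fewer than maxiter iterations for well-conditioned \(A\).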

References

[1] Cockayne, J. et al., A Bayesian Conjugate Gradient Method, Bayesian Analysis, 2019, 14, 937-1012


[2] Bartels, S. et al., Probabilistic Linear Solvers: A Unifying View, Statistics and Computing, 2019

See also

problinsolve : Solve linear systems in a Bayesian framework.