2019
DOI: 10.1214/19-ba1145

A Bayesian Conjugate Gradient Method (with Discussion)

Abstract: A fundamental task in numerical computation is the solution of large linear systems. The conjugate gradient method is an iterative method which offers rapid convergence to the solution, particularly when an effective preconditioner is employed. However, for more challenging systems a substantial error can be present even after many iterations have been performed. The estimates obtained in this case are of little value unless further information can be provided about, for example, the magnitude of the error. In…
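The abstract's starting point is the classical conjugate gradient iteration for symmetric positive-definite systems. A minimal sketch of that baseline method (for illustration only; this is not the paper's Bayesian variant, and the function name and tolerance are my own choices):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, maxiter=None):
    """Plain conjugate gradient for a symmetric positive-definite A.

    Returns the iterate and the number of iterations performed; note it
    attaches no error estimate, which is the gap the paper addresses.
    """
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - A @ x            # initial residual
    p = r.copy()             # first search direction
    rs = r @ r
    maxiter = n if maxiter is None else maxiter
    for k in range(maxiter):
        if np.sqrt(rs) < tol:
            return x, k
        Ap = A @ p
        alpha = rs / (p @ Ap)        # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p    # next A-conjugate direction
        rs = rs_new
    return x, maxiter

# Example: small SPD system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, iters = conjugate_gradient(A, b)
```

In exact arithmetic CG converges in at most n iterations for an n-by-n SPD system; the practical concern raised in the abstract is what can be said about the error when iteration stops earlier.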

Cited by 23 publications (63 citation statements)
References 42 publications
“…This corollary suggests a natural way to select a prior covariance still linked to the linear system, though this choice is still not computationally convenient. Furthermore, in the case that A is symmetric positive-definite, this recovers the prior which replicates CG described in Cockayne et al [2018]. Note that each of H and P can be stated explicitly as H = (AᵀA)^{1/2} and P = A(AᵀA)^{−1/2}.…”
Section: Probabilistic Perspectives (supporting)
confidence: 54%
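A quick numerical sanity check on the explicit formulas quoted above (a sketch of my own; the `spd_sqrt` helper and the test matrix are not from the paper): when A is symmetric positive-definite, AᵀA = A², so (AᵀA)^{1/2} recovers A itself, which is the CG-replicating special case the citation describes.

```python
import numpy as np

def spd_sqrt(M):
    # Unique symmetric positive-definite square root via eigendecomposition.
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)      # a symmetric positive-definite test matrix

H = spd_sqrt(A.T @ A)            # H = (A^T A)^{1/2}
P = A @ np.linalg.inv(H)         # P = A (A^T A)^{-1/2}
```

For this SPD choice of A, H coincides with A and P·H reproduces A, matching the stated identities.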
“…Conjugate gradients have been studied from a probabilistic point of view before by Hennig [2015] and Cockayne et al [2018]. This section generalizes the results of Hennig [2015] and leverages Proposition 6 for new insights on BayesCG.…”
Section: Conjugate Gradients (mentioning)
confidence: 61%