2015
DOI: 10.1137/140977308

Optimal Low-rank Approximations of Bayesian Linear Inverse Problems

Abstract: In the Bayesian approach to inverse problems, data are often informative, relative to the prior, only on a low-dimensional subspace of the parameter space. Significant computational savings can be achieved by using this subspace to characterize and approximate the posterior distribution of the parameters. We first investigate approximation of the posterior covariance matrix as a low-rank update of the prior covariance matrix. We prove optimality of a particular update, based on the leading eigendirections…
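For orientation, the low-rank update described above can be written in standard linear-Gaussian notation; the symbols below (forward operator $F$, noise covariance $\Gamma_{\mathrm{obs}}$, prior covariance $\Gamma_{\mathrm{pr}}$) are assumptions of this sketch, not quotations from the paper. With $H = F^{\top}\Gamma_{\mathrm{obs}}^{-1}F$ and eigenpairs $(\delta_i^2,\ \widetilde w_i)$ of the prior-preconditioned Hessian $\widetilde H = \Gamma_{\mathrm{pr}}^{1/2} H \Gamma_{\mathrm{pr}}^{1/2}$, a rank-$r$ update of the prior covariance takes the form

$$\Gamma_{\mathrm{pos}} \;\approx\; \Gamma_{\mathrm{pr}} \;-\; \sum_{i=1}^{r} \frac{\delta_i^2}{1+\delta_i^2}\, \widehat w_i \widehat w_i^{\top}, \qquad \widehat w_i = \Gamma_{\mathrm{pr}}^{1/2}\widetilde w_i,$$

which is exact once $r$ reaches the rank of $\widetilde H$; the optimality result referenced in the abstract concerns truncating this sum at the leading eigendirections.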

Cited by 115 publications (231 citation statements). References 73 publications.
“…The method, based on synthesis and advancement of recent work in this area (Flath et al., 2011; Bui-Thanh et al., 2012; Spantini et al., 2015) by Bousserez and Henze (2017), uses an optimal low-rank projection of the inverse problem that maximizes the observational constraints. Specifically, for a given dimension k, the optimal reduced space (Spantini et al., 2015; Bousserez and Henze, 2017) is spanned by the first k eigenvectors of the prior-preconditioned Hessian G (Flath et al., 2011):…”
Section: SVD-based Inversion (mentioning); confidence: 99%
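As a concrete illustration of the construction described in this excerpt, here is a small NumPy sketch that forms a prior-preconditioned Hessian for a toy linear-Gaussian problem and extracts its leading k eigenvectors. All names (H for the forward-model Jacobian, B for the prior covariance, R for the observation-error covariance, k for the retained dimension) are placeholders chosen for this illustration, not code or data from the cited papers.

import numpy as np

rng = np.random.default_rng(0)
n, m, k = 50, 20, 5                       # parameters, observations, retained rank

H = rng.standard_normal((m, n))           # toy forward-model Jacobian
B = np.diag(rng.uniform(0.5, 2.0, n))     # toy prior covariance (diagonal)
R = 0.1 * np.eye(m)                       # toy observation-error covariance

B_half = np.sqrt(B)                       # matrix square root (valid here because B is diagonal)
G = B_half @ H.T @ np.linalg.solve(R, H) @ B_half   # prior-preconditioned Hessian

# G is symmetric positive semidefinite; eigh returns eigenvalues in ascending order,
# so reverse to take the k largest eigenpairs.
eigvals, eigvecs = np.linalg.eigh(G)
lead_vals = eigvals[::-1][:k]
lead_vecs = eigvecs[:, ::-1][:, :k]       # columns span the k-dimensional reduced space

print(lead_vals)

Mapping these eigenvectors back through B^{1/2} gives the parameter-space directions along which the data are most informative relative to the prior.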
“…The trace of the averaging kernel gives the total degrees of freedom, i.e., the number of independent pieces of information that can be obtained in the inversion framework. The posterior mean estimate of x can also be directly calculated from analytical formulas using the eigenvectors of G (Spantini et al., 2015; Bousserez and Henze, 2017). However, to impose a positivity constraint on the emissions, we rely here on the variational minimization framework as in the standard 4D-Var case.…”
Section: SVD-based Inversion (mentioning); confidence: 99%
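For context, under the usual linear-Gaussian definitions (averaging kernel A equal to the gain matrix times the Jacobian; notation assumed here, not quoted from the citing paper), the trace mentioned in this excerpt reduces to a sum over the eigenvalues $\lambda_i$ of the prior-preconditioned Hessian $G$:

$$\operatorname{tr}(A) \;=\; \operatorname{tr}\!\big[(I+G)^{-1}G\big] \;=\; \sum_i \frac{\lambda_i}{1+\lambda_i},$$

so each direction with $\lambda_i \gg 1$ contributes nearly one degree of freedom, while directions with $\lambda_i \ll 1$ contribute essentially nothing.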
“…A further source of low dimensionality in transports is low-rank structure, i.e., situations where a map departs from the identity only on a low-dimensional subspace of the input space [78]. This situation is fairly common in large-scale Bayesian inverse problems where the data are informative, relative to the prior, only about a handful of directions in the parameter space [25,79].…”
Section: Discussion (mentioning); confidence: 99%
“…In Section 4.3, we will emphasize that such a choice for M_prior allows for an easy implementation of the sampling of M_prior ∩ H_h. In order to show that (15) can (for example) be obtained from standard model-order reduction techniques, let us first consider the case where the uncertainty one has on the set of possible solutions of the PPDE (i.e., M) is due to an imperfect knowledge of the set of feasible parameters Θ (the general case will be discussed at the end of this section). Although Θ may not be precisely known, one piece of information the practitioner usually has at their disposal is that Θ is contained in some larger set Θ_relax, i.e., Θ ⊆ Θ_relax.…”
Section: Some Specific Choices for M_prior (mentioning); confidence: 99%
“…This approach was recently used in the literature [11-15]. Maday et al. suggested iteratively enriching the approximation subspace by using a posteriori estimates of some elements of  (the term "a posteriori" refers here to the fact that the estimates stem from the combination of partial observations and some prior knowledge on ) [11]. In a previous work [12], the authors of the present work refined this approach in a Bayesian framework: they proposed to include the uncertainty inherent to the a posteriori estimates in the reduction process.…”
Section: Introduction (mentioning); confidence: 99%