2018
DOI: 10.1002/qj.3262
A note on preconditioning weighted linear least‐squares, with consequences for weakly constrained variational data assimilation

Abstract: The effect of preconditioning linear weighted least‐squares using an approximation of the model matrix is analyzed. The aim is to investigate, from a theoretical point of view, the inefficiencies of this approach as observed in the application of the weakly constrained 4D‐Var algorithm in geosciences. Bounds on the eigenvalues of the preconditioned system matrix are provided. These bounds highlight the interplay of the eigenstructures of both the model and weighting matrices: maintaining a low bound on the eigenvalues of…
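As a concrete illustration of the setting, here is a minimal numerical sketch (our own toy construction, not code from the paper): the normal matrix A = Lᵀ W L of a weighted least-squares problem is preconditioned with Ã = L̃ᵀ W L̃, where L̃ approximates the model matrix L, and the eigenvalues of Ã⁻¹A, which govern the convergence of Krylov solvers, are inspected. The matrices L, L̃ and W below are illustrative assumptions.

```python
# Minimal sketch: precondition the normal matrix A = L^T W L of a weighted
# least-squares problem with Atilde = Ltilde^T W Ltilde, where Ltilde is an
# approximation of the model matrix L, then inspect the eigenvalues of the
# preconditioned matrix. All matrices are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 50

# L: a lower-bidiagonal "model" matrix, of the kind that arises in
# weak-constraint 4D-Var; Ltilde: the same structure with the subdiagonal
# (model) blocks dropped.
L = np.eye(n) - np.diag(0.9 * rng.standard_normal(n - 1), k=-1)
Ltilde = np.eye(n)

# W: a symmetric positive-definite weighting matrix (e.g. an inverse covariance).
W = np.diag(rng.uniform(0.5, 2.0, size=n))

A = L.T @ W @ L                  # system matrix of the weighted least squares
Atilde = Ltilde.T @ W @ Ltilde   # preconditioner built from the approximation

# The spread of these eigenvalues governs the convergence of a Krylov method.
eigs = np.linalg.eigvals(np.linalg.solve(Atilde, A))
print(f"preconditioned eigenvalues lie in [{eigs.real.min():.3f}, {eigs.real.max():.3f}]")
```

Dropping the subdiagonal (model) blocks from L to form L̃ mimics the kind of approximation the abstract refers to; the resulting eigenvalue spread is exactly the quantity the paper's bounds characterize.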

Cited by 9 publications (13 citation statements: 0 supporting, 13 mentioning, 0 contrasting)
References 16 publications
“…M_i and H_i are the model and observation operators linearised at x_i; they are known as the tangent linear model and tangent linear observation operator, respectively. We define the following matrices (following Gratton et al., 2018a)…”
Section: Incremental Weak Constraint 4D-Var (mentioning)
confidence: 99%
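To make the tangent linear notion concrete, here is a small sketch of ours (a toy model, unrelated to the operators in the cited papers): the tangent linear model M_i is the Jacobian of the nonlinear model at x_i, and it should reproduce the effect of a small perturbation to first order.

```python
# Toy illustration of a tangent linear model: M is the Jacobian of the
# nonlinear model m at x_i, checked against a finite-difference perturbation.
import numpy as np

def m(x):
    """A toy nonlinear model step (assumed for illustration)."""
    return np.array([x[0] * x[1], np.sin(x[0]) + x[1]])

def tangent_linear(x):
    """Jacobian M of m at x, derived by hand for the toy model."""
    return np.array([[x[1], x[0]],
                     [np.cos(x[0]), 1.0]])

x_i = np.array([0.3, 1.2])
dx = 1e-6 * np.array([1.0, -0.5])
fd = m(x_i + dx) - m(x_i)              # finite-difference perturbation
tl = tangent_linear(x_i) @ dx          # tangent linear prediction
print(np.allclose(fd, tl, rtol=1e-4))  # the two agree to first order in |dx|
```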
“…Fisher and Gürol could not find a suitable approximation that would guarantee good convergence. Gratton et al. (2018a, 2018b)…”
Section: Preconditioning (mentioning)
confidence: 99%
“…, R_N) ∈ ℝ^{q×q}, where q = Σ_{i=0}^{N} q_i. We use the notation (following Gratton et al. (2018)) …, where H_i^{(j)} is the linearised observation operator, and…”
Section: Accepted Article (mentioning)
confidence: 99%
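For readers unfamiliar with this notation, here is a minimal sketch (with assumed block sizes) of assembling the block-diagonal observation-error covariance R = diag(R_0, …, R_N) ∈ ℝ^{q×q} with q = Σ_i q_i, as in the quoted statement.

```python
# Sketch of the block-diagonal covariance R = diag(R_0, ..., R_N); the
# individual R_i are arbitrary SPD blocks chosen here for illustration.
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)
q_i = [3, 2, 4]  # observation counts per time window (assumed values)

def random_spd(m):
    """Return an m x m symmetric positive-definite matrix."""
    G = rng.standard_normal((m, m))
    return G @ G.T + m * np.eye(m)

R_blocks = [random_spd(m) for m in q_i]
R = block_diag(*R_blocks)
assert R.shape == (sum(q_i), sum(q_i))  # q = sum_i q_i
```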
“…A vast amount of literature on saddle point problems and their solution via Krylov methods and preconditioners is available; see for example [21] and references therein. For this particular saddle point problem, low-rank limited-memory preconditioners exploiting the structure of the saddle point problem were proposed and analyzed in [73] (see also [91]). In [80,97], the Kronecker structure of the saddle point problem was used in order to compute low-rank solutions with GMRES.…”
Section: Weak Constraint 4D-Var and Saddle Point Formulation of the I… (mentioning)
confidence: 99%
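To fix ideas, the following is a hedged sketch (our own toy construction, with assumed dimensions and operators) of the symmetric saddle point system [D 0 L; 0 R H; Lᵀ Hᵀ 0] that arises in the weak-constraint 4D-Var formulation discussed in this quote, solved with SciPy's MINRES, a Krylov method suited to symmetric indefinite systems.

```python
# Toy sketch of the weak-constraint 4D-Var saddle point system
# [ D  0  L ; 0  R  H ; L^T  H^T  0 ], solved with a Krylov method (MINRES).
# All dimensions and operators below are illustrative assumptions.
import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(2)
n, q = 40, 25                       # state and observation dimensions (assumed)

D = np.eye(n)                       # model-error covariance (toy)
R = np.eye(q)                       # observation-error covariance (toy)
L = np.eye(n) - np.diag(0.5 * np.ones(n - 1), k=-1)  # bidiagonal model matrix
H = rng.standard_normal((q, n)) / np.sqrt(n)         # observation operator

# Assemble the (2n + q) x (2n + q) symmetric indefinite saddle point matrix.
S = np.block([
    [D,                np.zeros((n, q)), L],
    [np.zeros((q, n)), R,                H],
    [L.T,              H.T,              np.zeros((n, n))],
])

rhs = np.concatenate([rng.standard_normal(n), rng.standard_normal(q), np.zeros(n)])
sol, info = minres(S, rhs)          # MINRES exploits the symmetry of S
print("converged:", info == 0, " residual:", np.linalg.norm(S @ sol - rhs))
```

In practice one never forms S densely; the preconditioners cited in the quote exploit exactly this block structure, which the dense toy above only serves to display.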