2020
DOI: 10.48550/arxiv.2006.10818
Preprint
Kaczmarz-type inner-iteration preconditioned flexible GMRES methods for consistent linear systems

Abstract: We propose using greedy and randomized Kaczmarz inner-iterations as preconditioners for the right-preconditioned flexible GMRES method to solve consistent linear systems, with a parameter tuning strategy for adjusting the number of inner iterations and the relaxation parameter. We also present theoretical justifications of the right-preconditioned flexible GMRES for solving consistent linear systems. Numerical experiments on overdetermined and underdetermined linear systems show that the proposed method is sup…
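The inner iterations the abstract refers to are Kaczmarz-type row-action sweeps. As a rough illustration only (this is the standard randomized Kaczmarz method in the Strohmer–Vershynin style, not the paper's actual inner-iteration preconditioner, which embeds such sweeps inside flexible GMRES), a minimal sketch for a consistent system looks like this; the function name and setup are illustrative assumptions:

```python
import numpy as np

def randomized_kaczmarz(A, b, num_iters=2000, seed=0):
    """Solve a consistent system A x = b by randomized Kaczmarz.

    Each step samples row i with probability ||a_i||^2 / ||A||_F^2 and
    projects the current iterate onto the hyperplane a_i^T x = b_i.
    (Illustrative sketch; not the paper's preconditioned FGMRES method.)
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A**2, axis=1)        # squared row norms ||a_i||^2
    probs = row_norms / row_norms.sum()     # sampling distribution
    x = np.zeros(n)
    for _ in range(num_iters):
        i = rng.choice(m, p=probs)
        # Orthogonal projection onto {x : a_i^T x = b_i}
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Small consistent overdetermined example
A = np.array([[2.0, 0.0], [0.0, 3.0], [1.0, 1.0]])
x_true = np.array([1.0, 2.0])
b = A @ x_true
x = randomized_kaczmarz(A, b)
```

For consistent systems this iteration converges linearly in expectation; using it as a (varying) inner preconditioner is what motivates the *flexible* variant of GMRES in the paper, since the preconditioner changes from one outer iteration to the next.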

Cited by 1 publication (1 citation statement)
References 39 publications
“…Iterative linear solvers are often preferred for solving large-scale linear systems, as they can take advantage of problem structure such as sparsity or bandedness, require inexpensive floating point operations, and can be readily paired with preconditioning techniques [19, see preface]. While such iterative linear solvers as Conjugate Gradients (CG) and the Generalized Minimal Residual method (GMRES) are still dominant solvers in practice, randomized row-action [8,1,14,23] and column-action iterative solvers [10,25] have been growing in interest for several reasons: they (usually) require very few floating point operations per iteration [5,3]; they have low-memory footprints [9]; they can readily be composed with randomization techniques to quickly produce approximate solutions [23,10,24,6,11,2,7,17]; they can be used for solving systems constructed in a streaming fashion (e.g., [15]), which supports emerging computing paradigms (e.g., [13]); and, just like the more popular iterative Krylov solvers, they can be parallelized, preconditioned or combined with other linear solvers [20,16,4,18];…”
Section: Introductionmentioning
confidence: 99%