2020
DOI: 10.1002/nla.2337

Minimizing convex quadratics with variable precision conjugate gradients

Abstract: We investigate the method of conjugate gradients, exploiting inaccurate matrix-vector products, for the solution of convex quadratic optimization problems. Theoretical performance bounds are derived, and the necessary quantities occurring in the theoretical bounds are estimated, leading to a practical algorithm. Numerical experiments suggest that this approach has significant potential, including in the steadily more important context of multiprecision computations.
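
To make the abstract concrete, here is a minimal sketch of conjugate gradients with variable-precision matrix-vector products. It is not the paper's algorithm: the `matvec(p, eps)` interface and the accuracy schedule are assumptions, loosely following the common inexact-Krylov relaxation (looser products as the residual shrinks) rather than the bounds derived in the paper.

```python
import numpy as np

def cg_inexact(matvec, b, tol=1e-8, max_iter=200):
    """Conjugate gradients in which each product A @ p may be inexact.

    matvec(p, eps) is a hypothetical interface assumed to return A @ p
    with relative error at most eps, e.g. by evaluating the product in a
    cheaper floating-point format when eps is large.
    """
    x = np.zeros_like(b)
    r = b.copy()                      # recurred residual (x0 = 0)
    p = r.copy()
    rs = r @ r
    nb = np.linalg.norm(b)
    for _ in range(max_iter):
        # Illustrative accuracy schedule: allow a larger product error as
        # the residual shrinks (the usual inexact-Krylov relaxation); the
        # constants are placeholders, not the bounds from the paper.
        eps = min(1e-2, 0.1 * tol * nb / np.sqrt(rs))
        Ap = matvec(p, eps)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap               # true residual may lag behind this
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * nb:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy usage: emulate an eps-accurate product by adding a perturbation of
# relative norm exactly eps to the exact product.
rng = np.random.default_rng(0)
A = np.diag(np.linspace(1.0, 100.0, 50))
b = rng.standard_normal(50)

def noisy_matvec(p, eps):
    y = A @ p
    g = rng.standard_normal(p.shape)
    return y + eps * np.linalg.norm(y) * g / np.linalg.norm(g)

x = cg_inexact(noisy_matvec, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```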

Cited by 7 publications (4 citation statements)
References 30 publications
“…More precisely, preserved local orthogonality seems to be sufficient for the method to advance the approximation with linear convergence; we expect superlinear convergence to be lost, as occurs in finite precision CG; see, e.g., [24] and references therein. The importance of local orthogonality has been stressed in the past to enhance convergence properties of inexact preconditioned CG; see, e.g., [8], [26], [9] and their references. Similar pictures can also be observed in analyzing round-off effects in the GMRES orthonormal basis constructed with the modified Gram-Schmidt algorithm, in which the residual norm stagnates at the level where all linear independence is lost [11].…”
Section: Effects of Truncation in the CG Recurrence
confidence: 99%
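
As a small illustration of the "local orthogonality" this quotation refers to (our sketch, not the cited analysis): plain CG run in a chosen precision, recording the normalized inner product of consecutive residuals, which exact arithmetic keeps at zero.

```python
import numpy as np

def cg_local_orthogonality(A, b, iters=60, dtype=np.float32):
    """Plain CG run in the given precision; records, per iteration, the
    normalized inner product |<r_{k+1}, r_k>| / (||r_{k+1}|| ||r_k||).
    Exact CG keeps consecutive residuals orthogonal, so this quantity is
    zero in exact arithmetic; its growth measures lost local orthogonality.
    """
    A = A.astype(dtype)
    b = b.astype(dtype)
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    history = []
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        denom = np.linalg.norm(r_new) * np.linalg.norm(r)
        history.append(float(abs(r_new @ r) / max(denom, np.finfo(dtype).tiny)))
        rs_new = r_new @ r_new
        p = r_new + (rs_new / rs) * p
        r, rs = r_new, rs_new
    return history

# Compare precisions on a mildly ill-conditioned test problem.
rng = np.random.default_rng(1)
A = np.diag(np.linspace(1.0, 1e4, 80))
b = rng.standard_normal(80)
print(max(cg_local_orthogonality(A, b, dtype=np.float64)))
print(max(cg_local_orthogonality(A, b, dtype=np.float32)))
```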
“…If no more than $K_{\max}$ iterations are to be performed, we can let $\varphi_j = K_{\max}^{-1/2}$ (although more elaborate choices for $\varphi_j$ could be considered; see for example [8]).…”
Section: A Strategy for Bounding the $\eta_{ij}$
confidence: 99%
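
A one-line check of the uniform choice quoted above, under the assumed reading (not stated in the excerpt) that the $\varphi_j$ act as per-iteration fractions of an error budget whose squares must sum to at most one:

```python
import numpy as np

# Uniform split phi_j = K_max**(-1/2): with at most K_max iterations,
# sum_j phi_j**2 = K_max * (1 / K_max) = 1, so every iteration receives
# an equal share of a unit (squared) error budget. More elaborate
# schedules would redistribute the same budget unevenly.
K_max = 100
phi = np.full(K_max, K_max ** -0.5)
assert np.isclose(np.sum(phi ** 2), 1.0)
```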
“…There is already a literature on the use of inexact matrix-vector products in GMRES and other Krylov subspace methods; see, e.g., [19,6,3,7,8] and the references therein. This work is not a simple extension of such results.…”
Section: Introduction
confidence: 99%

Exploiting variable precision in GMRES
Gratton, Simon, Titley-Peloquin et al., 2019 (preprint; self-citation)
“…Moreover, in [13] the case in which derivatives are not available was considered. Inexactness of the objective function in optimization problems was addressed in several additional papers in recent years [7,8,9,34,35,39,33]. The objective of the present paper is to use the ideas of [40,11,12] to handle the constrained optimization problem in which the evaluation of the objective functions and the constraints is subject to error.…”
Section: Introduction
confidence: 99%