2021
DOI: 10.1016/j.cam.2020.113033
Fast gradient methods with alignment for symmetric linear systems without using Cauchy step

Cited by 7 publications (6 citation statements)
References: 33 publications
“…We remark that there also exist a variety of modifications and extensions of the SD method and the BB method in the literature, including gradient methods with retards [19], gradient methods with alignment [15,16], alternate step gradient methods [6,10,14], cyclic SD method [9], cyclic BB method [11], adaptive BB method [18], limited memory gradient method [5], etc.; see [3,12,20,23,31,32] and references therein for other research on the choices of the stepsize α_k in the gradient method (2).…”
Section: Introduction (mentioning)
confidence: 99%
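For context on the quoted passage, below is a minimal sketch of the generic gradient iteration it refers to, x_{k+1} = x_k − α_k g_k, using the BB1 stepsize on a strictly convex quadratic f(x) = ½xᵀAx − bᵀx. The function name, tolerance, and the fallback first stepsize are illustrative assumptions, not taken from the cited papers.

    import numpy as np

    def bb1_gradient_method(A, b, x0, tol=1e-8, max_iter=1000):
        """Gradient iteration x_{k+1} = x_k - alpha_k * g_k for f(x) = 0.5 x'Ax - b'x,
        with the BB1 stepsize alpha_k = (s's) / (s'y) from the previous step."""
        x = x0.astype(float)
        g = A @ x - b                       # gradient of the quadratic
        alpha = 1.0 / np.linalg.norm(A, 2)  # illustrative first stepsize (no BB pair yet)
        for _ in range(max_iter):
            if np.linalg.norm(g) <= tol * np.linalg.norm(b):
                break
            x_new = x - alpha * g
            g_new = A @ x_new - b
            s, y = x_new - x, g_new - g     # BB difference vectors
            alpha = (s @ s) / (s @ y)       # BB1 stepsize for the next iteration
            x, g = x_new, g_new
        return x

The methods listed in the quote (SD, cyclic, adaptive, retarded, alignment variants) differ only in how α_k is chosen within this same iteration.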
“…It is proved that the proposed method is R-linearly convergent with the rate of 1 − 1/κ, where κ = λ_n/λ_1 is the condition number of A. Our numerical comparisons with the BB1 [2], DY [14], SL (Alg. 1 in [35]), ABBmin2 [21], SDC [16], and MGC [39] methods for solving unconstrained random and non-random quadratic optimization demonstrate that the proposed method is very efficient. Further, numerical experiments on quadratic problems whose Hessians are chosen from the SuiteSparse Matrix Collection [15] suggest that the proposed method is very competitive with the above methods.…”
Section: Introduction (mentioning)
confidence: 81%
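As a hedged restatement of what R-linear convergence with rate 1 − 1/κ typically means for a gradient method on a quadratic with Hessian A (the precise norm and constant are those of the cited paper and are not reproduced here):

    \|x_k - x^*\| \;\le\; C\left(1 - \frac{1}{\kappa}\right)^{k},
    \qquad \kappa = \frac{\lambda_n}{\lambda_1},
    \quad \text{for some } C > 0 \text{ independent of } k,

that is, the error is bounded above by a Q-linearly convergent sequence with ratio 1 − 1/κ.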
“…Based on the above analysis, we can develop a gradient method using α_k and α_k^BB1 in an adaptive way. Notice that reusing the retarded short stepsize for some iterations could reduce the computational cost and yield better performance; see [16,27,35,37,39]. So, we suggest combining the adaptive and cyclic schemes with α_k^BB1 and α_{k−1}.…”
Section: ⊓⊔ (mentioning)
confidence: 99%
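A hypothetical sketch of the reuse idea described in the quote above: occasionally store a short stepsize and cycle it for several iterations, otherwise taking the long BB1 step. The cycle length, the switching rule, and the use of the BB2 formula as the short step are illustrative assumptions, not the scheme of the cited paper.

    import numpy as np

    def cyclic_bb_short_step(A, b, x0, cycle=4, tol=1e-8, max_iter=1000):
        """Alternate a long BB1 step with a stored short step that is reused
        (cycled) for `cycle` iterations; the short step here is the BB2 value."""
        x = x0.astype(float)
        g = A @ x - b
        alpha = 1.0 / np.linalg.norm(A, 2)    # illustrative initial stepsize
        short_alpha, reuse_left = None, 0
        for _ in range(max_iter):
            if np.linalg.norm(g) <= tol * np.linalg.norm(b):
                break
            x_new = x - alpha * g
            g_new = A @ x_new - b
            s, y = x_new - x, g_new - g
            bb1 = (s @ s) / (s @ y)           # long BB1 stepsize
            bb2 = (s @ y) / (y @ y)           # short BB2-type stepsize
            if reuse_left > 0:
                alpha = short_alpha           # keep reusing the stored short step
                reuse_left -= 1
            elif bb2 < 0.5 * bb1:             # illustrative switching rule
                short_alpha, reuse_left = bb2, cycle - 1
                alpha = short_alpha
            else:
                alpha = bb1
            x, g = x_new, g_new
        return x

Reusing a stored stepsize avoids recomputing it each iteration and, as the quote notes, such cyclic reuse is reported to improve practical performance.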
“…Successful applications of the BB gradient method and its variants have been found in sparse reconstruction [39,41], nonnegative matrix factorization [32], optimization on manifolds [22,33,34,40], machine learning [11,38,42], etc. Since optimization problems arising in different areas are often large-scale, BB-like methods have attracted increasing attention in recent years due to their simplicity, numerical efficiency, and low per-iteration cost; see [9,12,13,17,19,21,23,36,44,46,47] and references therein.…”
Section: Introduction (mentioning)
confidence: 99%