2019
DOI: 10.1007/s11075-019-00658-1
Two–parameter scaled memoryless BFGS methods with a nonmonotone choice for the initial step length

Cited by 15 publications (16 citation statements)
References 26 publications
“…To make the BFGS methods applicable to large-scale optimization models, many memoryless BFGS updating formulas have been proposed. For more details, one can see [3], [25]–[32] and the references therein. For example, Livieris et al. [25] presented a new hybrid conjugate gradient method based on a convex hybridization of the conjugate parameters of DY and HS+ by adapting the quasi-Newton philosophy.…”
Section: Literature Review
confidence: 99%
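For context on the construction these citing works refer to, here is a minimal sketch of the classical memoryless BFGS update (the standard formula obtained by applying the BFGS update to the identity matrix, not the specific two-parameter scaling of the cited paper). With $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$,

$$H_{k+1} = I - \frac{s_k y_k^\top + y_k s_k^\top}{s_k^\top y_k} + \left(1 + \frac{y_k^\top y_k}{s_k^\top y_k}\right)\frac{s_k s_k^\top}{s_k^\top y_k},$$

so the search direction $d_{k+1} = -H_{k+1} g_{k+1}$ costs only a few inner products and vector updates, with no stored matrix; this is what makes the approach viable at large scale. The hybridization of Livieris et al. mentioned in the quote can be read, schematically, as a convex combination of conjugate gradient parameters, $\beta_k = (1-\theta_k)\,\beta_k^{HS+} + \theta_k\,\beta_k^{DY}$ with $\theta_k \in [0,1]$ determined by a secant-type (quasi-Newton) condition; the form given here is only the generic template, and the exact choice of $\theta_k$ is theirs.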
“…It is noted that Babaie-Kafaki [3], [31] started by controlling the condition number of the BFGS update matrix to develop more efficient algorithms. Indeed, it was shown that the algorithms in [3], [31] are more stable than comparable ones.…”
Section: Literature Review
confidence: 99%
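To make the condition-number idea concrete, a brief generic sketch (not the specific parameter choices of [3], [31]): for a scaled memoryless BFGS matrix $H_{k+1}(\theta)$, obtained by applying the update above starting from $\theta I$ with a scaling parameter $\theta > 0$, one bounds its spectral condition number

$$\kappa\bigl(H_{k+1}(\theta)\bigr) = \frac{\lambda_{\max}\bigl(H_{k+1}(\theta)\bigr)}{\lambda_{\min}\bigl(H_{k+1}(\theta)\bigr)},$$

and chooses $\theta$ (or, in the two-parameter setting of the cited paper, a pair of parameters) to minimize that bound. A smaller condition number keeps the search directions well scaled and the iteration numerically stable, which is the sense in which the resulting algorithms are described as more stable than comparable ones.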