2011
DOI: 10.1007/s11590-011-0355-6

A new variant of the memory gradient method for unconstrained optimization

Cited by 7 publications (2 citation statements) · References 15 publications
“…where the last inequality holds because of the definition of ψ(·, ·) as in (5). Thus, by (34), (35) and the definition of α_k given in (24), we have…”
Section: Convergence Analysis
Confidence: 89%
“…Narushima and Yabe [33] introduced a new memory gradient method that also uses historical direction information, and then derived the global convergence of the method under appropriate conditions. Other methods that use historical iterative information at the current step to improve algorithmic performance have been reported in [34–37]. In summary, it would be a good choice to design new algorithms based on historical iterative information in scalar optimization.…”
Section: Introduction
Confidence: 99%
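The memory gradient idea referenced in the statement above can be illustrated with a minimal sketch. This is not the method of this paper or of [33]; it is a generic fixed-weight variant in which the search direction mixes the current negative gradient with the previous direction. The function names and the parameters `beta` and `step` are hypothetical choices for illustration; practical memory gradient methods select these quantities adaptively to guarantee descent and global convergence.

```python
import numpy as np

def memory_gradient_descent(grad, x0, beta=0.3, step=0.05, iters=500):
    """Generic memory gradient iteration (illustrative sketch only):
        d_k = -g_k + beta * d_{k-1},   x_{k+1} = x_k + step * d_k.
    `beta` and `step` are fixed hypothetical parameters here; real
    methods choose them per-iteration to ensure d_k is a descent
    direction and the step satisfies a line-search condition."""
    x = np.asarray(x0, dtype=float)
    d = -grad(x)                  # first iteration: plain steepest descent
    for _ in range(iters):
        x = x + step * d
        g = grad(x)
        d = -g + beta * d         # mix in the previous search direction
    return x

# Example: minimize f(x) = x1^2 + 10*x2^2, whose gradient is (2*x1, 20*x2).
xmin = memory_gradient_descent(lambda x: np.array([2 * x[0], 20 * x[1]]),
                               [5.0, 1.0])
```

With fixed weights this recursion coincides with the classical heavy-ball (momentum) update; the appeal of memory gradient methods is precisely that the weight on the historical direction is chosen adaptively rather than fixed in advance.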