2013
DOI: 10.1007/s12190-013-0747-0
A nonmonotone supermemory gradient algorithm for unconstrained optimization

Cited by 7 publications (4 citation statements)
References 40 publications
“…Notice that this choice of d_k is derived from [17]. In what follows we discuss the global convergence and the convergence rate of Algorithm 5.1.…”
Section: Further Discussion (mentioning)
confidence: 99%
“…In what follows we discuss the global convergence and the convergence rate of Algorithm 5.1. To this end, we give the following two properties, whose proofs are essentially the same as Lemmas 1 and 2 in [17], and thus are omitted here.…”
Section: Further Discussion (mentioning)
confidence: 99%
“…The connection weight matrices of 1, 2, 3, and 4 can be updated according to the gradient descent method, which can be expressed as follows [29,30]:…”
Section: Improved Elman Neural (mentioning)
confidence: 99%
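The update formula referenced in [29,30] is cut off in the snippet above. As an illustrative sketch only, and not the cited authors' exact expression, a generic gradient-descent update for a connection weight matrix W at iteration t, with learning rate \eta and network error E, is

W^{(t+1)} = W^{(t)} - \eta \, \frac{\partial E}{\partial W^{(t)}}

that is, each weight matrix is moved a small step against the gradient of the error with respect to that matrix.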
“…Narushima and Yabe [33] introduced a new memory gradient method that also uses historical direction information and then derived the global convergence of the method under appropriate conditions. Other methods that use historical iterative information at the current step to improve the algorithmic performance have been reported in [34-37]. In summary, it would be a good choice to design new algorithms based on historical iterative information in scalar optimization.…”
Section: Introduction (mentioning)
confidence: 99%
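To make the role of historical direction information concrete: in a (super)memory gradient method the search direction combines the current negative gradient with a fixed number m of previous directions. A schematic template, rather than the exact formula of [33] or of the indexed paper, is

d_k = -g_k + \sum_{i=1}^{\min(k,m)} \beta_k^{(i)} d_{k-i}, \qquad d_0 = -g_0

where g_k = \nabla f(x_k) and the scalars \beta_k^{(i)} are chosen, differently by each method, so that d_k remains a sufficient descent direction.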