Proceedings of the 1994 IEEE International Conference on Neural Networks (ICNN'94), 1994
DOI: 10.1109/icnn.1994.374171
A first order adaptive learning rate algorithm for backpropagation networks

Cited by 4 publications (4 citation statements)
References 1 publication
“…The obtained results regarding the stability and the faster convergence of algorithm (1) can play a key role in determining the effective η and μ at every stage, that is, in the case of x̄ = x_t at every stage t. Performing the local quadratic approximation of the error function (16), the effective η_t^0 and μ_t^0 coefficients can be calculated by formulas (10) and (11) using the smallest and the largest eigenvalues of the Hessian matrix (17) at every stage. This idea provides the dynamic selection of the effective η_t and μ_t parameters in algorithm (15), which significantly affects the convergence speed of the algorithm, as the experimental results in Section 4 demonstrate.…”
Section: Accelerated Backpropagation With Effective Parameters (mentioning)
confidence: 90%
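The quoted passage selects a stage-wise learning rate and momentum from the extreme eigenvalues of a local Hessian approximation. The cited formulas (10), (11) and the Hessian (17) are not reproduced on this page; the sketch below is a minimal illustration assuming they take the classical optimal heavy-ball form for a locally quadratic error function, with all function names hypothetical:

```python
import numpy as np

def effective_parameters(hessian):
    """Hypothetical stand-in for formulas (10)-(11): derive a learning
    rate eta and a momentum mu from the smallest and largest eigenvalues
    of the local Hessian approximation (assumed positive definite)."""
    eigvals = np.linalg.eigvalsh(hessian)      # ascending order
    lam_min, lam_max = eigvals[0], eigvals[-1]
    s_min, s_max = np.sqrt(lam_min), np.sqrt(lam_max)
    # Classical optimal heavy-ball parameters for a quadratic objective;
    # an assumption here, since the cited formulas are not shown.
    eta = 4.0 / (s_min + s_max) ** 2
    mu = ((s_max - s_min) / (s_max + s_min)) ** 2
    return eta, mu

def accelerated_step(x, x_prev, grad, hessian):
    """One momentum update with per-stage effective parameters;
    grad is the gradient vector at x."""
    eta, mu = effective_parameters(hessian)
    return x - eta * grad + mu * (x - x_prev)
```

Recomputing η_t and μ_t at each stage from fresh eigenvalue estimates is what the passage calls "dynamic selection"; the payoff is a step size matched to the local curvature rather than a fixed global constant.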
“…Second-order training algorithms are of a different class. Other derivative-based ANN training algorithms are discussed in Wang and Lin (1998), Van Ooyen and Nienhuis (1992), Towsey et al. (1995), Subramanian and Hung (1990), and Nachtsheim (1994). A distinction should therefore be drawn between the computational complexity of derivative-free, first-order, and second-order derivative-based training methods.…”
Section: Derivative Based Training Methods (mentioning)
confidence: 99%
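To make the complexity distinction concrete: a first-order step needs only the gradient, while a Newton-type second-order step must form and solve a linear system with the Hessian. A minimal illustrative sketch, not any cited paper's algorithm:

```python
import numpy as np

def first_order_step(w, g, eta=0.01):
    """Gradient descent: O(n) work once the gradient g is available."""
    return w - eta * g

def second_order_step(w, g, H):
    """Newton step: storing H costs O(n^2) memory and solving the
    system costs O(n^3) time -- the gap the quoted passage alludes to."""
    return w - np.linalg.solve(H, g)
```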
“…The positive constant η, which is selected by the user, is called the learning rate, where η ∈ (0, 1). b. Dynamically adjusting the learning rate, either commonly for all weights [21], [26] or separately for each weight [24]. (2) Dynamic adaptation of the weight adjustments expressed by the vector d_k, i.e.…”
Section: Equation (2) (mentioning)
confidence: 99%
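Option (b), a single learning rate adapted commonly for all weights, is often illustrated with the well-known bold-driver heuristic. The sketch below is one such rule; the constants are illustrative and not taken from references [21], [24], or [26]:

```python
def bold_driver_update(eta, err, prev_err, up=1.05, down=0.5):
    """Bold-driver heuristic: grow the global learning rate while the
    training error keeps falling, shrink it sharply when the error
    rises. Caps eta below 1.0 to keep it in the (0, 1) range the
    quoted passage requires."""
    if err < prev_err:
        return min(eta * up, 1.0)   # reward progress with a small increase
    return eta * down               # error rose: backtrack aggressively
```

Per-weight adaptation (the second option in the quote) applies the same idea independently to each component, at the cost of storing one rate per weight.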
“…Consider any method of the form (4) and (5), where the learning rate η_k satisfies the standard Wolfe conditions, equations (9) and (10), and the search direction is computed by equations (21) and (22); then for…”
Section: Theorem (mentioning)
confidence: 99%
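The theorem presupposes a step length satisfying the standard Wolfe conditions. Equations (9) and (10) of the citing paper are not reproduced here; the sketch below assumes they are the usual sufficient-decrease and curvature conditions, with conventional default constants:

```python
import numpy as np

def satisfies_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Check the standard (weak) Wolfe conditions for a step length
    alpha along search direction d. The constants c1 < c2 are the
    customary defaults; the cited equations (9)-(10) are assumed to
    be these two conditions."""
    g0 = grad(x)
    # Sufficient decrease (Armijo) condition.
    sufficient_decrease = f(x + alpha * d) <= f(x) + c1 * alpha * (g0 @ d)
    # Curvature condition: the slope along d must have flattened enough.
    curvature = grad(x + alpha * d) @ d >= c2 * (g0 @ d)
    return sufficient_decrease and curvature
```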