2007 International Conference on Industrial and Information Systems
DOI: 10.1109/iciinfs.2007.4579193

Accelerated learning in MLP using adaptive learning rate with momentum coefficient

Abstract: The ability of neural networks to realize complex nonlinear functions makes them attractive for system identification. In the recent past, neural networks trained with the back-propagation (BP) learning algorithm have gained attention for the identification of nonlinear dynamic systems. Slower convergence and longer training times are the disadvantages often mentioned when the standard BP algorithm is compared with other competing techniques. In addition, in the standard BP algorithm, the learning rate is fi…
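The truncated abstract identifies the fixed learning rate of standard BP as the bottleneck, with an adaptive rate plus a momentum coefficient as the remedy. A minimal sketch of that general scheme follows; the accept/reject rule, the constants, and the toy loss are illustrative assumptions, not the paper's algorithm.

```python
# Gradient descent with a momentum term and an error-driven adaptive
# learning rate, in the spirit of classic adaptive-BP heuristics.
# All constants and names here are illustrative assumptions.
import numpy as np

def train(grad_fn, loss_fn, w, lr=0.1, beta=0.9, up=1.05, down=0.5, steps=200):
    v = np.zeros_like(w)
    prev_loss = loss_fn(w)
    for _ in range(steps):
        v = beta * v - lr * grad_fn(w)      # momentum-smoothed step
        w_new = w + v
        loss = loss_fn(w_new)
        if loss < prev_loss:                # step helped: accept it, grow lr
            w, prev_loss, lr = w_new, loss, lr * up
        else:                               # step hurt: reject it, damp lr and v
            lr *= down
            v = np.zeros_like(v)
    return w

# A toy quadratic with minimum at [1, -2] stands in for the network loss.
target = np.array([1.0, -2.0])
print(train(lambda w: w - target,
            lambda w: 0.5 * np.sum((w - target) ** 2),
            np.zeros(2)))
```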

Cited by 13 publications (4 citation statements) · References 7 publications

“…A large value of the learning rate may skip the optimal solution, and convergence may never be achieved. On the other hand, a small learning rate will increase the total time to converge to the optimal value, or the search may become trapped in a local minimum [20, 21]. Therefore, selecting the optimal value of the learning rate is a major challenge for the convergence rate.…”
Section: Adaptive Learning Rate
confidence: 99%
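This trade-off is easy to reproduce on a one-dimensional toy problem. The sketch below is a hypothetical illustration, not code from any cited paper: plain gradient descent on f(w) = w²/2, whose gradient is simply w, run with an oversized, an undersized, and a well-chosen rate.

```python
# On f(w) = 0.5 * w**2 each GD step multiplies w by (1 - lr), so the
# iteration diverges for lr > 2 and crawls for tiny lr.
def gd(lr, w=1.0, steps=30):
    for _ in range(steps):
        w -= lr * w          # w <- w - lr * f'(w)
    return w

print(gd(2.5))   # multiplier -1.5 per step: |w| blows up (diverges)
print(gd(1e-3))  # ~0.970 after 30 steps: still far from the optimum at 0
print(gd(0.5))   # ~9e-10: a well-chosen rate converges quickly
```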
“…Marwala (2007) trained a Bayesian NN using a Markov chain-based Monte Carlo technique within genetic programming. Sheell, Varshney, and Varshney (2007) heuristically adjusted the learning coefficients depending on the sign of the error between the desired and actual outputs. Kathirvalavakumar and Subavathi (2009) proposed a modified backpropagation algorithm for neighbourhood-based NNs, replacing fixed learning parameters (LPs) with adaptive LPs.…”
Section: Related Work
confidence: 99%
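A common way to realize such a sign-driven heuristic is to adapt a per-weight step size from the agreement of successive gradient signs (Rprop-style). The sketch below follows that related scheme; the constants and names are assumptions, and it is not Sheel et al.'s exact coefficient rule, which keys on the sign of the output error.

```python
# Rprop-style step adaptation: if a gradient component keeps its sign
# across iterations the step grows; a sign flip means the minimum was
# overshot, so the step shrinks. Constants are illustrative assumptions.
import numpy as np

def rprop_step(grad, prev_grad, step, up=1.2, down=0.5,
               step_min=1e-6, step_max=1.0):
    same_sign = grad * prev_grad > 0
    step = np.clip(np.where(same_sign, step * up, step * down),
                   step_min, step_max)
    return -np.sign(grad) * step, step   # weight delta, updated per-weight steps

# Toy run on 0.5 * ||w||^2, whose gradient is w itself.
w = np.array([3.0, -4.0])
step = np.full_like(w, 0.1)
prev_g = np.zeros_like(w)
for _ in range(50):
    g = w
    dw, step = rprop_step(g, prev_g, step)
    w, prev_g = w + dw, g
print(w)  # close to the optimum at the origin
```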
“…However, a known problem of the GD method is the selection of the learning rate, which affects both learning speed and stability. By appropriately choosing the learning rate, the stability of the controller can be guaranteed [11–14], the stability of the identification can be guaranteed [15–18], and the MLPNN learning can be accelerated [19, 20]. In [21], the authors suggested a stable and optimal learning rate obtained by a genetic search algorithm.…”
Section: Introduction
confidence: 99%
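The last sentence of the quote describes tuning the rate offline by search rather than by an online rule. As a loose, hypothetical stand-in for the genetic search in [21], the toy below evolves a population of candidate rates whose fitness is the final loss of a short GD run on f(w) = w²/2; for this quadratic the one-step-optimal rate is 1.0, so the search should settle close to it.

```python
# Toy evolutionary search over the learning rate; not the method of [21].
import random

def fitness(lr, steps=20):
    w = 1.0
    for _ in range(steps):
        w -= lr * w                     # GD on f(w) = 0.5 * w**2
        if abs(w) > 1e6:                # diverged: unusable rate
            return float("inf")
    return 0.5 * w * w

pop = [random.uniform(0.0, 3.0) for _ in range(20)]
for _ in range(15):                     # keep the best half, mutate copies of it
    pop.sort(key=fitness)
    pop = pop[:10] + [max(0.0, lr + random.gauss(0, 0.1)) for lr in pop[:10]]
print(round(pop[0], 3))                 # typically near 1.0
```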