IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)
DOI: 10.1109/ijcnn.2001.938431
An algorithm for fast convergence in training neural networks

Abstract: In this work, two modifications of the Levenberg-Marquardt algorithm for feedforward neural networks are studied. One modification is made to the performance index, while the other is to the calculation of gradient information. The modified algorithm achieves a better convergence rate than the standard Levenberg-Marquardt (LM) method while being less computationally intensive and requiring less memory. The performance of the algorithm has been checked on several example problems.
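The abstract does not spell out the two modifications, but they build on the standard LM update, which replaces the weight vector w with w − (JᵀJ + μI)⁻¹Jᵀe, where J is the Jacobian of the residuals e and μ is a damping term. The following is a minimal sketch of that baseline update on a hypothetical linear least-squares toy problem (the variable names and the toy data are illustrative, not from the paper):

```python
import numpy as np

def lm_step(w, jacobian, residuals, mu=1e-3):
    """One standard Levenberg-Marquardt update: w <- w - (J^T J + mu*I)^{-1} J^T e."""
    J = jacobian(w)          # shape (n_residuals, n_params)
    e = residuals(w)         # shape (n_residuals,)
    A = J.T @ J + mu * np.eye(len(w))
    return w - np.linalg.solve(A, J.T @ e)

# Toy problem: fit y = a*x + b to exact data generated with a=2, b=1.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 1.0
resid = lambda w: w[0] * x + w[1] - y
jac = lambda w: np.stack([x, np.ones_like(x)], axis=1)

w = np.zeros(2)
for _ in range(20):
    w = lm_step(w, jac, resid)
# w converges toward [2.0, 1.0]
```

The damping term μ interpolates between Gauss-Newton (μ → 0) and gradient descent (large μ); the paper's modifications target the cost of forming and using the gradient information in this update.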

Cited by 124 publications
(92 citation statements)
References 13 publications
“…Bayes' rule automatically regulates the network's performance for smoother running and avoids overfitting. Further discussion of the LM algorithm and Bayesian regularisation is beyond the scope of this article; the interested reader may find a detailed description in the literature [17][18][19][20].…”
Section: Predicting Yarn Evenness By ANN
confidence: 99%
“…Artificial neural networks have large numbers of computational units called neurons, connected in a massively parallel structure, and do not need an explicit formulation of the mathematical or physical relationships of the problem at hand [5,6,[8][9][10][11]. The most commonly used ANNs are feed-forward neural networks [11], which are designed with one input layer, one output layer and hidden layers [8][9][10].…”
Section: Artificial Neural Network
confidence: 99%
“…The optimization method chosen in this work was the Levenberg-Marquardt algorithm [5][6][7], as mentioned earlier. The testing set is used during the adjustment of the network's synaptic weights to evaluate the algorithm's performance on data not used for tuning, and tuning is stopped if the error on the testing set increases.…”
Section: Artificial Neural Network
confidence: 99%
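The stopping criterion described in this excerpt is a form of early stopping: training halts once the held-out error stops improving. A minimal sketch of that loop, with a patience parameter and a synthetic error sequence as illustrative assumptions:

```python
def train_with_early_stopping(errors, patience=3):
    """Iterate over per-epoch held-out errors; stop once the error has not
    improved for `patience` consecutive epochs. Returns (best_error, epochs_run)."""
    best, since, epochs = float("inf"), 0, 0
    for err in errors:
        epochs += 1
        if err < best:
            best, since = err, 0
        else:
            since += 1
            if since >= patience:
                break
    return best, epochs

# Synthetic held-out error trace: improves, then degrades for 3 epochs.
best, epochs = train_with_early_stopping([0.9, 0.5, 0.4, 0.41, 0.42, 0.43, 0.3])
# best == 0.4, epochs == 6 (training stops before the later 0.3 is seen)
```

In practice the `errors` iterable would be produced lazily by alternating weight updates with an evaluation pass, so that stopping early actually saves the remaining training work.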