Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)
DOI: 10.1109/icnn.1994.374284
Optimization schemes for neural network training

Cited by 7 publications (1 citation statement)
References 7 publications
“…The commonest and best-understood approach for off-line supervised learning is the gradient descent method: back-propagation [19]. Also, its computational cost is lower than that of the other methods (i.e., Newton, conjugate gradient), which leads to a faster convergence [20]. The weight correction $\Delta w_{ji}(n)$ applied to the weight connecting neuron i to neuron j is defined by the delta rule:…”
Section: Supervised Learning
Confidence: 99%
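As an illustration of the delta rule weight correction mentioned in the citation statement above, here is a minimal sketch in Python. The function and variable names (delta_rule_update, eta, delta_j, y_i) are assumptions for illustration; the cited papers' exact notation and update schedule are not reproduced here.

```python
def delta_rule_update(w_ji, eta, delta_j, y_i):
    """Apply one delta-rule correction to the weight connecting neuron i to neuron j.

    w_ji    : current weight value
    eta     : learning rate
    delta_j : local error (gradient) term of neuron j
    y_i     : output of neuron i feeding into neuron j
    """
    delta_w_ji = eta * delta_j * y_i  # weight correction Δw_ji(n)
    return w_ji + delta_w_ji

# Example: a single update step with made-up values
w_new = delta_rule_update(w_ji=0.5, eta=0.1, delta_j=0.2, y_i=1.0)
print(w_new)  # 0.52
```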