2015
DOI: 10.31642/jokmc/2018/020309

Improved Three-term Conjugate Gradient Algorithm For Training Neural Network

Abstract: A new three-term conjugate gradient algorithm for training feed-forward neural networks is developed. It is a vector-based training algorithm derived from the DFP quasi-Newton update and requires only O(n) memory. Global convergence of the proposed algorithm has been established for convex functions under the Wolfe conditions. The results of numerical experiments are included and compared with other well-known training algorithms in this field.
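
The abstract does not give the paper's DFP-based coefficient formulas, so the sketch below is only an illustration of the general three-term conjugate gradient scheme it describes: the search direction is built from the new gradient, the previous direction, and the gradient difference, so only a few length-n vectors are stored. The network, the synthetic data, the Armijo backtracking line search (a simplification of the full Wolfe search used in the paper), and the placeholder coefficients beta and theta are all assumptions for illustration, not the authors' method.

```python
import numpy as np

# Illustrative three-term CG trainer for a one-hidden-layer network.
# beta and theta below are generic placeholders, not the paper's DFP-derived coefficients.

def init_params(n_in, n_hidden, n_out, rng):
    sizes = [(n_hidden, n_in), (n_hidden,), (n_out, n_hidden), (n_out,)]
    w = np.concatenate([0.1 * rng.standard_normal(int(np.prod(s))) for s in sizes])
    return w, sizes

def unpack(w, sizes):
    parts, i = [], 0
    for s in sizes:
        n = int(np.prod(s))
        parts.append(w[i:i + n].reshape(s))
        i += n
    return parts

def loss_and_grad(w, sizes, X, Y):
    # forward pass: tanh hidden layer, linear output, mean squared error
    W1, b1, W2, b2 = unpack(w, sizes)
    A1 = np.tanh(X @ W1.T + b1)
    err = A1 @ W2.T + b2 - Y
    loss = 0.5 * np.mean(np.sum(err ** 2, axis=1))
    # backward pass, gradients flattened into a single vector of length n
    m = X.shape[0]
    d_out = err / m
    gW2, gb2 = d_out.T @ A1, d_out.sum(axis=0)
    dZ1 = (d_out @ W2) * (1.0 - A1 ** 2)
    gW1, gb1 = dZ1.T @ X, dZ1.sum(axis=0)
    g = np.concatenate([gW1.ravel(), gb1, gW2.ravel(), gb2])
    return loss, g

def train_three_term_cg(X, Y, n_hidden=8, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    w, sizes = init_params(X.shape[1], n_hidden, Y.shape[1], rng)
    f, g = loss_and_grad(w, sizes, X, Y)
    d = -g
    for _ in range(iters):
        # backtracking search enforcing the sufficient-decrease (Armijo) part
        # of the Wolfe conditions; the paper uses a full Wolfe line search
        alpha, c1, gTd = 1.0, 1e-4, g @ d
        while True:
            f_new, g_new = loss_and_grad(w + alpha * d, sizes, X, Y)
            if f_new <= f + c1 * alpha * gTd or alpha < 1e-12:
                break
            alpha *= 0.5
        s, y = alpha * d, g_new - g
        # placeholder three-term coefficients (the paper derives its own from DFP)
        denom = d @ y + 1e-12
        beta = (g_new @ y) / denom
        theta = (g_new @ s) / denom
        # three-term direction: only a handful of O(n) vectors are ever stored
        d = -g_new + beta * d - theta * y
        w, f, g = w + s, f_new, g_new
    return w, f

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 4))
    Y = np.sin(X[:, :1]) + 0.1 * rng.standard_normal((200, 1))
    _, final_loss = train_three_term_cg(X, Y)
    print(f"final training loss: {final_loss:.4f}")
```

The O(n) memory claim of the abstract is reflected here only in the sense that the update keeps a fixed number of parameter-sized vectors (w, g, d, s, y) rather than an n-by-n quasi-Newton matrix.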

Cited by 0 publications
References 7 publications