2016 International Conference on Computing, Communication and Automation (ICCCA)
DOI: 10.1109/ccaa.2016.7813734

Analysis of weight initialization routines for conjugate gradient training algorithm with Fletcher-Reeves updates

Cited by 4 publications (2 citation statements)
References: 15 publications
“…This algorithm requires more iterations to converge than the other CG algorithms; however, the number of computations in each step is significantly reduced because no line search is performed [19]. CGF is an updated version of CG that computes the new search direction using the ratio of the norm squared of the current gradient to the norm squared of the previous gradient [26][27][28]. CGP calculates the new search direction as the ratio of the inner product of the previous change in the gradient with the current gradient to the norm squared of the previous gradient [9,28].…”
Section: Results
confidence: 99%
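
The statement above describes the standard conjugate gradient direction updates. For reference, their usual forms are sketched below in our own notation (gradient g_k and search direction d_k at iteration k; these symbols are ours, not quoted from the paper):

```latex
d_k = -g_k + \beta_k \, d_{k-1},
\qquad
\beta_k^{\mathrm{FR}} = \frac{g_k^{\top} g_k}{g_{k-1}^{\top} g_{k-1}},
\qquad
\beta_k^{\mathrm{PR}} = \frac{g_k^{\top}\left(g_k - g_{k-1}\right)}{g_{k-1}^{\top} g_{k-1}}
```

Here \beta_k^{\mathrm{FR}} is the Fletcher-Reeves (CGF) coefficient and \beta_k^{\mathrm{PR}} the Polak-Ribière (CGP) coefficient; with exact line searches on a quadratic objective successive gradients are orthogonal, so the two coincide.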
“…Optimal performance was achieved in the backward process by adapting the learning rate parameter, which affects the weight values and reduces the number of epochs [16]. Furthermore, when the neural network is trained with the conjugate gradient training algorithm with Fletcher-Reeves updates, the Nguyen-Widrow algorithm converges faster and also generalizes better than other weight initialization techniques [17]. The Nguyen-Widrow weight algorithm was also applied to image compression using a multilayer feed-forward artificial neural network.…”
Section: Introduction
confidence: 99%
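
Since this statement highlights the Nguyen-Widrow weight initialization, a minimal sketch of the standard Nguyen-Widrow procedure for one hidden layer is given below. The function name nguyen_widrow_init and its interface are illustrative assumptions, not code from the paper:

```python
import numpy as np

def nguyen_widrow_init(n_inputs, n_hidden, rng=None):
    """Nguyen-Widrow initialization for a single hidden layer (a sketch;
    the name and interface are illustrative, not taken from the paper)."""
    rng = np.random.default_rng() if rng is None else rng
    # Scale factor from the Nguyen-Widrow heuristic: beta = 0.7 * H^(1/n),
    # where H is the number of hidden units and n the number of inputs.
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)
    # Draw small uniform random weights, then rescale each hidden unit's
    # weight vector to norm beta so the units' active regions tile the
    # input space instead of clustering near the origin.
    w = rng.uniform(-0.5, 0.5, size=(n_hidden, n_inputs))
    w = beta * w / np.linalg.norm(w, axis=1, keepdims=True)
    # Biases are spread uniformly over [-beta, beta].
    b = rng.uniform(-beta, beta, size=n_hidden)
    return w, b
```

For example, a layer with 2 inputs and 10 hidden units gives beta = 0.7 * 10^(1/2) ≈ 2.21, so every hidden unit's weight vector is scaled to that norm before training begins.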