2015
DOI: 10.32894/kujss.2015.104992
New Scaled Conjugate Gradient Algorithm for Training Artificial Neural Networks Based on Pure Conjugacy Condition

Abstract: Conjugate gradient methods are excellent neural network training methods, characterized by their simplicity, efficiency, and very low memory requirements. In this paper, we propose a new scaled conjugate gradient training algorithm for neural networks that guarantees the descent property under the standard Wolfe conditions. Encouraging numerical experiments verify that the proposed algorithm provides fast and stable convergence.
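The paper itself is not reproduced here, so the following is only a minimal sketch of how a generic scaled conjugate gradient training loop of this kind is typically structured. The scaling factor (a Barzilai-Borwein-style spectral step), the Polak-Ribiere beta formula, and the backtracking line search below are standard placeholders, not the paper's pure-conjugacy-condition derivation; the line search enforces only the sufficient-decrease (Armijo) part of the Wolfe conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(x) with a one-hidden-layer tanh network.
X = np.linspace(-3.0, 3.0, 64).reshape(-1, 1)
Y = np.sin(X)

H = 8  # hidden units
# All weights live in one flat parameter vector w, ordered W1, b1, W2, b2.
shapes = [(1, H), (H,), (H, 1), (1,)]
sizes = [int(np.prod(s)) for s in shapes]

def unpack(w):
    parts, i = [], 0
    for s, n in zip(shapes, sizes):
        parts.append(w[i:i + n].reshape(s))
        i += n
    return parts

def loss_and_grad(w):
    """Mean-squared training error and its gradient via backpropagation."""
    W1, b1, W2, b2 = unpack(w)
    A = np.tanh(X @ W1 + b1)   # hidden activations
    P = A @ W2 + b2            # network output
    E = P - Y
    loss = 0.5 * np.mean(E ** 2)
    dP = E / len(X)
    gW2 = A.T @ dP
    gb2 = dP.sum(axis=0)
    dA = (dP @ W2.T) * (1.0 - A ** 2)   # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ dA
    gb1 = dA.sum(axis=0)
    return loss, np.concatenate([g.ravel() for g in (gW1, gb1, gW2, gb2)])

def armijo(w, f, g, d, alpha=1.0, c1=1e-4, shrink=0.5):
    # Backtracking search for the sufficient-decrease condition; a full
    # Wolfe search would also check the curvature condition on the new
    # gradient, as the paper's analysis assumes.
    slope = g @ d
    while loss_and_grad(w + alpha * d)[0] > f + c1 * alpha * slope:
        alpha *= shrink
        if alpha < 1e-12:
            break
    return alpha

w = rng.normal(scale=0.5, size=sum(sizes))
f, g = loss_and_grad(w)
d = -g  # first direction: steepest descent
for k in range(200):
    alpha = armijo(w, f, g, d)
    w = w + alpha * d
    f_new, g_new = loss_and_grad(w)
    y = g_new - g
    s = alpha * d
    # PR+ beta and a generic spectral scaling theta; the paper instead
    # derives its scaling from a pure conjugacy condition.
    beta = max(0.0, (g_new @ y) / (g @ g))
    theta = (s @ s) / (s @ y) if s @ y > 1e-12 else 1.0
    d = -theta * g_new + beta * d
    if g_new @ d >= 0:  # safeguard: restart with steepest descent
        d = -g_new
    f, g = f_new, g_new
print(f"final loss: {f:.6f}")
```

The safeguard restart is what makes the descent property hold unconditionally in this sketch; the contribution of methods in this family is to choose the scaling and beta so that the restart is rarely, or never, needed.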

Cited by 2 publications (1 citation statement) · References 13 publications
“…In standard backpropagation, each iteration requires the gradient and the error function to be computed. The problem of minimizing the global error function is very common in other scientific fields, particularly in numerical analysis [30].…”
Section: Figure 6, NN Structure for Selected Number of Neurons at Layers
Confidence: 99%