2009
DOI: 10.4156/jcit.vol4.issue1.shanthi
Comparison of Neural Network Training Algorithms for the prediction of the patient’s post-operative recovery area

Abstract: An Artificial Neural Network (ANN) is a well-known universal approximator for modeling smooth and continuous functions. ANNs operate in two stages: learning and generalization. Learning is the approximation of the behavior of the training data, while generalization is the ability to predict well beyond the training data. A good training algorithm is needed to achieve both good learning and good generalization. Training a neural network can be treated as a nonlinear mathematical optimization pro…

Cited by 8 publications
(7 citation statements)
References 13 publications
“…The former algorithm frequently produced lower testing-set errors and higher R² values than the latter. This finding agrees with empirical comparisons in the literature, which showed that the QP training algorithm is one of the fastest and most reliable algorithms and that it outperforms most other heuristic variants of the BP algorithm on a broad range of modeling problems (Fahlman, 1988; Shanthi et al., 2009). Consequently, all networks were trained with the standard QP algorithm because of its fast convergence and stability, as well as its better performance than the BP algorithm on the studied data.…”
Section: Optimization Of Network Structure (supporting)
confidence: 82%
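The QP (Quickprop) update praised in this statement models the error surface along each weight as a parabola and jumps directly to that parabola's estimated minimum using the current and previous gradients. A minimal one-dimensional sketch of that secant rule (the loss function, learning rate, and function names here are illustrative assumptions, not taken from the cited papers):

```python
def quickprop_minimize(grad, w0, lr=0.1, steps=20):
    """Minimize a 1-D function via the Quickprop secant update.

    The first step is plain gradient descent; afterwards the step is
    dw_t = g_t / (g_prev - g_t) * dw_prev  (parabola-jump rule).
    """
    w = w0
    g_prev = grad(w)
    dw = -lr * g_prev             # bootstrap with one gradient-descent step
    for _ in range(steps):
        w += dw
        g = grad(w)
        denom = g_prev - g
        if denom == 0.0:          # flat secant: parabolic model breaks down
            break
        dw = g / denom * dw       # Quickprop update
        g_prev = g
    return w

# Quadratic loss E(w) = (w - 3)^2 has gradient 2(w - 3); Quickprop's
# parabolic model is exact for a quadratic, so it lands on w = 3 quickly.
w_star = quickprop_minimize(lambda w: 2.0 * (w - 3.0), w0=0.0)
```

Because the secant estimate is exact on a quadratic surface, the jump after the bootstrap step already reaches the minimum, which is one reason Quickprop often converges much faster than plain BP.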
“…Newton's method computes the second-order derivatives of the Taylor approximation as the Hessian matrix and therefore finds the minimum much faster than first-order methods (Shanthi et al., 2009; Yu and Wilamowski, 2012). It exploits curvature information to take a more direct route toward the minimum.…”
Section: Literature Review (mentioning)
confidence: 99%
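The second-order step described here can be sketched in one variable, where the Hessian reduces to the scalar second derivative f″(w) and each iteration jumps to the minimum of the local quadratic Taylor model. The example function is an illustrative assumption:

```python
def newton_minimize(df, d2f, w0, iters=6):
    """Newton's method for minimization: w <- w - f'(w) / f''(w).

    Each step minimizes the local quadratic Taylor model, which is why
    convergence near the minimum is quadratic rather than linear.
    """
    w = w0
    for _ in range(iters):
        w -= df(w) / d2f(w)
    return w

# f(w) = w - ln(w) has f'(w) = 1 - 1/w and f''(w) = 1/w**2,
# with its minimum at w = 1; the error roughly squares every step.
w_min = newton_minimize(lambda w: 1.0 - 1.0 / w,
                        lambda w: 1.0 / w ** 2,
                        w0=0.5)
```

Starting from w = 0.5 the error shrinks from 0.5 to below 1e-8 in six iterations, illustrating the "more direct route" the citation describes; the trade-off in a real network is the cost of forming and inverting the full Hessian.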
“…For instance, SCG was chosen because of its modest memory requirements together with high accuracy and speed, owing to the inexpensive calculation of the gradient information [33]. Similarly, the CONJGRAD training algorithm was chosen because it is also known to be a fast, numerically efficient training algorithm with very low memory requirements [34].…”
Section: Amr Pattern Recognition Approach Second Subsystem (mentioning)
confidence: 99%
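The low memory footprint mentioned for conjugate-gradient training comes from the fact that, on a quadratic error surface E(x) = ½xᵀAx − bᵀx, the method reduces to the classic linear CG iteration: it keeps only a handful of vectors, never stores a Hessian, and reaches the exact minimum in at most n steps for an n-dimensional problem. A self-contained sketch (the matrix and vectors are illustrative assumptions):

```python
def conjugate_gradient(A, b, x0, tol=1e-12):
    """Solve A x = b (i.e. minimize 0.5 x'Ax - b'x) by linear CG.

    Only a few length-n vectors are kept, so memory is O(n) --
    no Hessian matrix is ever stored.
    """
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))

    x = list(x0)
    r = [bi - Axi for bi, Axi in zip(b, mv(A, x))]   # residual = -gradient
    p = list(r)                                      # first search direction
    rs = dot(r, r)
    for _ in range(n):                               # at most n exact steps
        Ap = mv(A, p)
        alpha = rs / dot(p, Ap)                      # exact line search
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        beta = rs_new / rs                           # Fletcher-Reeves factor
        p = [ri + beta * pi for ri, pi in zip(r, p)] # next conjugate direction
        rs = rs_new
    return x

# Symmetric positive-definite 2x2 system; exact solution is [1/11, 7/11],
# reached in two CG steps.
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0])
```

SCG differs from this plain CG in that it sizes the step with a scaled Levenberg–Marquardt-style model instead of an explicit line search, but the memory argument in the citation applies to both.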