2012
DOI: 10.1016/j.neucom.2012.02.029
Boundedness and convergence of batch back-propagation algorithm with penalty for feedforward neural networks

Cited by 50 publications (22 citation statements)
References 18 publications
“…In recent years, neural networks have become a very useful tool for modeling complicated systems because of their excellent ability to learn and to generalize (interpolate) the complicated relationships between input and output variables. ANNs also behave as model-free estimators; that is, they can capture and model complex input-output relations without the help of a mathematical model [15]. In other words, training a neural network eliminates the need for explicit mathematical modeling or similar system analysis.…”
Section: Overview Of Neural Network (mentioning)
confidence: 99%
“…The experimental results show that NBPNN gave a more accurate result than the BP algorithm. In [14], a penalty term proportional to the norm of the weights is added to the error function in order to prove the boundedness of the weights during network training. The learning rate is set either to a small constant or to an adaptive series.…”
Section: Related Work (mentioning)
confidence: 99%
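The penalty described in this excerpt is essentially a weight-decay term added to the batch error function. As a rough illustration only (not the code of [14] or of the indexed paper), a minimal NumPy sketch of batch back-propagation with an L2 penalty might look as follows; the network size, learning rate `eta`, and penalty coefficient `lam` are illustrative assumptions:

```python
# Minimal sketch of batch back-propagation with an L2 weight penalty
# for a one-hidden-layer network and squared-error loss. Illustrative
# only; hyperparameters are assumptions, not values from the paper.
import numpy as np

def train_batch_bp_penalty(X, y, hidden=8, eta=0.01, lam=1e-3, epochs=1000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(scale=0.1, size=(d, hidden))   # input-to-hidden weights
    W2 = rng.normal(scale=0.1, size=(hidden, 1))   # hidden-to-output weights
    for _ in range(epochs):
        # Forward pass over the whole training set (hence "batch" BP).
        H = np.tanh(X @ W1)            # hidden activations
        out = H @ W2                   # linear output
        err = out - y.reshape(-1, 1)   # residuals
        # Gradients of mean squared error plus lam * ||W||^2.
        gW2 = H.T @ err / n + 2 * lam * W2
        gH = (err @ W2.T) * (1 - H**2)   # tanh'(a) = 1 - tanh(a)^2
        gW1 = X.T @ gH / n + 2 * lam * W1
        # The penalty gradient 2*lam*W shrinks the weights each update,
        # which is what keeps their norms bounded during training.
        W1 -= eta * gW1
        W2 -= eta * gW2
    return W1, W2
```

The `2 * lam * W` shrinkage at every batch update is the mechanism behind the boundedness property the excerpt refers to: without it, the weight norms may grow without bound during training.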
“…The gradient algorithm is naturally bounded and convergent (Zhang et al, 2012). However, because of the high nonlinearity of the neural network, the optimal step size is difficult to find when the gradient method is applied.…”
Section: Batch Back-propagation Algorithm (mentioning)
confidence: 99%
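For context on the step-size remark above: convergence analyses of this kind typically allow the learning rate to be either a sufficiently small constant or a diminishing series. One standard diminishing schedule (an assumption for illustration; the exact schedule used in the cited paper is not quoted here) is:

```latex
% A common diminishing step-size schedule and the summability
% conditions it satisfies (assumed for illustration):
\eta_k = \frac{\eta_0}{k+1}, \qquad
\sum_{k=0}^{\infty} \eta_k = \infty, \qquad
\sum_{k=0}^{\infty} \eta_k^{2} < \infty .
```

Schedules of this form sidestep the search for a single optimal step size: the steps stay large enough in total to reach a stationary point, yet shrink fast enough to damp oscillation.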