2015
DOI: 10.1016/j.neucom.2014.11.039

Relaxed conditions for convergence of batch BPAP for feedforward neural networks

Cited by 16 publications (3 citation statements)
References 19 publications
“…The results show better convergence compared to existing work. Authors in [15] improved the batch BPAP algorithm through their proposed dynamic training rate with a penalty. The structure of the algorithm is 2:2:1, using the sigmoid as the activation function.…”
Section: Related Work
confidence: 99%
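The statement above only sketches the cited approach (a 2:2:1 feedforward network, sigmoid activation, batch training with a penalty and a dynamic training rate). The following is a minimal illustrative sketch, not the authors' exact algorithm: the XOR batch, the L2 penalty coefficient, and the simple rate-adaptation rule are assumptions made for the example.

```python
# Minimal sketch (assumed details, not the cited method): a 2:2:1 feedforward
# network with sigmoid units, trained by batch gradient descent with an L2
# penalty and a naive dynamic learning rate.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # toy batch (XOR)
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros((1, 2))  # input -> hidden (2:2)
W2 = rng.normal(scale=0.5, size=(2, 1)); b2 = np.zeros((1, 1))  # hidden -> output (2:1)

eta, lam = 0.5, 1e-4          # learning rate and penalty coefficient (assumed values)
prev_loss = np.inf
for epoch in range(5000):
    # forward pass over the whole batch
    H = sigmoid(X @ W1 + b1)
    out = sigmoid(H @ W2 + b2)
    loss = 0.5 * np.mean((out - y) ** 2) + 0.5 * lam * (np.sum(W1**2) + np.sum(W2**2))

    # backward pass (batch gradients), including the penalty's contribution
    d_out = (out - y) * out * (1 - out) / len(X)
    gW2 = H.T @ d_out + lam * W2
    gb2 = d_out.sum(axis=0, keepdims=True)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    gW1 = X.T @ d_hid + lam * W1
    gb1 = d_hid.sum(axis=0, keepdims=True)

    W1 -= eta * gW1; b1 -= eta * gb1
    W2 -= eta * gW2; b2 -= eta * gb2

    # naive dynamic training rate: grow it when the loss falls, shrink it otherwise
    eta = eta * 1.05 if loss < prev_loss else eta * 0.7
    prev_loss = loss
```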
“…It has been used successfully in neural network training with a multilayer feed-forward network [1], [2]. The BP algorithm led to a tremendous breakthrough in the application of multilayer perceptrons [3].…”
Section: Introduction
confidence: 99%
“…Therefore, the scaling-proof classifier should not be too complicated and resource-intensive like deep neural networks. According to the universal approximation theorem for feedforward neural networks [13], [14], two-layer perceptron (2LP) could be applied to classify scaled objects. Advantages of 2LP are simplicity and high speed of classification [12], [15], [16].…”
Section: Related Work
confidence: 99%
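For context on the two-layer perceptron (2LP) mentioned above, i.e. a network with a single hidden layer, the sketch below shows one way to set such a classifier up with scikit-learn's MLPClassifier; the digits dataset, hidden-layer size, and other settings are assumptions for illustration, not the cited papers' configuration.

```python
# Illustrative only: a two-layer perceptron (one hidden layer of sigmoid units)
# used as a lightweight classifier; dataset and hyperparameters are assumed.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer with logistic (sigmoid) activation -> a 2LP in the above sense.
clf = MLPClassifier(hidden_layer_sizes=(64,), activation="logistic",
                    max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```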