2017
DOI: 10.1016/j.asoc.2017.07.059

Convergence analysis of BP neural networks via sparse response regularization

Cited by 45 publications (9 citation statements)
References 24 publications
“…The ACO is a positive feedback mechanism. Therefore, based on this background, this paper uses ACO to train the parameters and structure of the feed-forward neural network and offers certain application value [13], [14].…”
Section: Related Work
confidence: 99%
“…Because the BP neural network can suffer from poor generalization and slow convergence, the author developed a variety of sparse-response BP algorithms that effectively improve generalization performance, as stated in [21]. Some scholars introduced a new model developed by the Real Estate Valuation Center at the Polytechnic of Milan and validated it against artificial neural networks using real estate data from Italy, as stated in [22].…”
Section: Artificial Neural Network Model Evolution and Algorithm Improvement
confidence: 99%
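The excerpt above refers to sparse-response BP training. As a rough illustration only (not the paper's exact algorithm), a minimal NumPy sketch adds an L1 penalty on the hidden-layer responses to ordinary backpropagation, so hidden activations are pushed toward zero alongside the usual error gradient. The network sizes, penalty weight `lam`, learning rate, and toy data are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(X, y, W1, W2, lr=0.5, lam=1e-3):
    """One batch-gradient step on MSE loss + lam * sum|h| (hidden sparsity).

    Hypothetical sketch of a sparse-response BP update; lam and the
    architecture are illustrative, not taken from the cited paper.
    """
    h = sigmoid(X @ W1)            # hidden responses, shape (n, H)
    out = sigmoid(h @ W2)          # network output, shape (n, 1)
    err = out - y
    d_out = err * out * (1 - out)  # gradient through output sigmoid
    # Backprop into the hidden layer; lam * sign(h) is the subgradient
    # of the L1 "sparse response" penalty on the hidden activations.
    d_h = (d_out @ W2.T + lam * np.sign(h)) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h
    return float(np.mean(err ** 2))

# Toy XOR-like data, just to exercise the update rule.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

losses = [train_step(X, y, W1, W2) for _ in range(2000)]
```

The only change relative to plain BP is the `lam * np.sign(h)` term folded into the hidden-layer delta; setting `lam=0` recovers the standard backpropagation update.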
“…While avoiding the overfitting problem of the neural network algorithm and ensuring its generalization ability, the efficiency of the neural network algorithm becomes the focus of the optimization algorithm and its application in spectral analysis. In contrast, BP neural networks are generally considered to have advantages in improving algorithm efficiency. For the BP neural network, Luo Liqiang proposed using the tapping technology to optimize the model parameters, thereby reducing training time, and improved the efficiency of the algorithm by using the single-component prediction method based on BEP.…”
Section: Neural Network Algorithm Efficiency
confidence: 99%