2011
DOI: 10.1016/j.neunet.2010.09.007

Convergence analysis of online gradient method for BP neural networks

Cited by 145 publications (37 citation statements)
References 19 publications
“…Assumption (A1) means the condition on boundedness of ∥w_k∥, which is often used in the literature (Aizenberg, 2010, 2011; Gori & Maggini, 1996; Shao & Zheng, 2011; Wu et al., 2005, 2011; Xu et al., 2010), and can be removed when adding a penalty term to the error function. Assumption (A2) indicates that complex coefficient polynomials can be used as the activation function, which removes the Schwarz symmetry condition in Adali et al. (2008), Leung and Haykin (1991) and Zhang, Liu et al. (2014).…”
Section: Convergence Analysis (mentioning)
confidence: 99%
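The penalty-term remark in the excerpt above can be sketched numerically. Below is a minimal, illustrative online gradient step for a single tanh neuron with an L2 penalty added to the error function; the model, step size, and penalty weight are my assumptions for illustration, not taken from the cited papers. The point is that the penalty contributes a shrinking term `lam * w` to the gradient, which keeps ∥w_k∥ bounded without assuming boundedness up front.

```python
import numpy as np

def online_step(w, x, t, eta=0.1, lam=0.1):
    """One online gradient step on E(w) = 0.5*(g(w.x) - t)^2 + 0.5*lam*||w||^2.

    Illustrative single-neuron sketch: g is tanh, eta the learning rate,
    lam the penalty weight (all choices are assumptions, not from the paper).
    """
    u = w @ x
    err = np.tanh(u) - t                  # output error
    dg = 1.0 - np.tanh(u) ** 2            # tanh'(u)
    grad = err * dg * x + lam * w         # penalty adds lam*w to the gradient
    return w - eta * grad
```

Starting from a large weight vector, repeated steps stay finite: the update is a contraction `(1 - eta*lam) * w` plus a bounded data term, which is the mechanism that lets assumption (A1) be dropped when a penalty is present.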
“…Convergence of the real-valued learning algorithm has been widely studied (Shao & Zheng, 2011; Wang, Yang, & Wu, 2011; Wu, Fan, & Zurada, 2014; Wu, Feng, Li, & Xu, 2005; Wu, Wang, Cheng, & Li, 2011). However, in the complex domain, in addition to the conflict between boundedness and analyticity of the activation function, another challenge is that the traditional mean value theorem does not hold in the complex domain (e.g., f(z) = e^z with…”
Section: Introduction (mentioning)
confidence: 99%
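The mean-value-theorem remark can be checked numerically for the excerpt's example f(z) = e^z: on the segment from 0 to 2πi the increment f(b) − f(a) vanishes, yet f′(z) = e^z never vanishes (here |f′(z)| = 1 everywhere on the segment, since Re(z) = 0), so no intermediate point c can satisfy f(b) − f(a) = f′(c)(b − a). A small sketch (the sampling grid is illustrative):

```python
import numpy as np

# f(z) = e^z takes the same value at both endpoints of the segment [0, 2*pi*i]
a, b = 0.0 + 0.0j, 2j * np.pi
diff = np.exp(b) - np.exp(a)          # f(b) - f(a) = 0, up to rounding

# ...but the derivative f'(z) = e^z never vanishes; on this segment
# Re(z) = 0, so |f'(z)| = exp(Re(z)) = 1 at every sampled point.
segment = [a + t * (b - a) for t in np.linspace(0.0, 1.0, 101)]
min_abs_deriv = min(abs(np.exp(z)) for z in segment)
```

Since `diff` is (numerically) zero while `min_abs_deriv` is 1, the real-domain mean value theorem has no complex analogue here, which is the obstacle the citing paper points to.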
“…The traditional BPNN frame [12–14] often consists of an input layer, a hidden layer, and an output layer. It sometimes contains more than one hidden layer; the outputs of each layer are sent directly to each neuron of the next layer [15]; it may also contain a bias neuron that produces constant outputs but receives no inputs.…”
Section: Network Frame (mentioning)
confidence: 99%
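The layer structure described in the excerpt can be sketched as a forward pass: each layer's outputs feed every neuron of the next layer, and a bias term plays the role of the constant-output bias neuron. A minimal sketch, assuming tanh activations and arbitrary layer sizes (none of these specifics are from the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))   # input -> hidden: 3 hidden units, 2 inputs
b1 = rng.normal(size=3)        # bias neuron's constant contribution to hidden layer
W2 = rng.normal(size=(1, 3))   # hidden -> output: 1 output unit
b2 = rng.normal(size=1)        # bias contribution to output layer

def forward(x):
    """Forward pass: every output of one layer feeds every neuron of the next."""
    h = np.tanh(W1 @ x + b1)   # hidden layer
    y = np.tanh(W2 @ h + b2)   # output layer
    return y
```

Note the bias vectors `b1`, `b2` receive no inputs, matching the bias-neuron description above.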
“…An artificial neural network (ANN) is a mathematical model of a biological neural network [6–8]. Through sample training it can effectively process both tangible and intangible information in pattern recognition, in particular via the multilayer feed-forward network (or BP (back-propagation) neural network) [9–12] trained by error back-propagation. The multilayer feed-forward network is widely applied in engineering and has provided a new solution for facial image recognition, which is characterized by massive data, multiple factors and multiple features.…”
Section: Introduction (mentioning)
confidence: 99%
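The error back-propagation training mentioned in the excerpt can be sketched as a single gradient step on a two-layer feed-forward network: forward pass, error signal at the output, error propagated back through the hidden layer, then gradient descent on both weight matrices. This is a generic textbook sketch; the squared-error loss, tanh activations, and learning rate are illustrative assumptions, not the specific algorithm of the cited paper.

```python
import numpy as np

def bp_step(W1, W2, x, t, eta=0.5):
    """One back-propagation step on the squared error 0.5*||y - t||^2."""
    # forward pass
    h = np.tanh(W1 @ x)
    y = np.tanh(W2 @ h)
    # backward pass: error signals (deltas)
    delta_out = (y - t) * (1.0 - y ** 2)              # output-layer error
    delta_hid = (W2.T @ delta_out) * (1.0 - h ** 2)   # error propagated back
    # gradient descent on both layers
    W2 = W2 - eta * np.outer(delta_out, h)
    W1 = W1 - eta * np.outer(delta_hid, x)
    return W1, W2
```

Applied online, sample by sample, this is the kind of update whose convergence the cited paper analyzes.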