2009
DOI: 10.1109/tnn.2009.2020848
Boundedness and Convergence of Online Gradient Method With Penalty for Feedforward Neural Networks

Abstract: In this brief, we consider an online gradient method with penalty for training feedforward neural networks. Specifically, the penalty is a term proportional to the norm of the weights. Its roles in the method are to control the magnitude of the weights and to improve the generalization performance of the network. By proving that the weights are automatically bounded in the network training with penalty, we simplify the conditions that are required for convergence of online gradient method in literature. A nume…
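The penalized update the abstract describes can be illustrated with a minimal sketch: online (sample-by-sample) gradient descent on a one-hidden-layer network, where an L2 penalty term (lam/2)·‖w‖² adds lam·w to every gradient and thereby keeps the weights bounded. All names, sizes, and hyperparameters below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Sketch (hypothetical parameters): online gradient training with an
# L2 weight penalty. The penalty gradient lam*w shrinks weights each
# step, which is what bounds their magnitude during training.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_online(X, y, hidden=8, lr=0.1, lam=1e-3, epochs=200):
    n_in = X.shape[1]
    W = rng.normal(scale=0.5, size=(hidden, n_in))  # input -> hidden weights
    v = rng.normal(scale=0.5, size=hidden)          # hidden -> output weights
    for _ in range(epochs):
        for x, t in zip(X, y):          # one training sample at a time (online)
            h = sigmoid(W @ x)
            err = (v @ h) - t
            # loss gradient plus penalty gradient lam*w for each weight array
            grad_v = err * h + lam * v
            grad_W = err * np.outer(v * h * (1 - h), x) + lam * W
            v -= lr * grad_v
            W -= lr * grad_W
    return W, v

# Toy usage on XOR-like data
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
W, v = train_online(X, y)
preds = [float(v @ sigmoid(W @ x)) for x in X]
```

Because the penalty contributes lam·w to each update, the weight norms stay finite throughout training, which is the boundedness property the paper proves.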

Cited by 66 publications (5 citation statements)
References 16 publications
“…The main reason for this is that the ELM training mechanism is different from those of the BP and RBF networks. The training mechanism of the BP network is as follows [27,28]. In the BP networks, the data go through the input and hidden layers to the output layer and then the output value is obtained.…”
Section: The Discussion of Training Time
confidence: 99%
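The BP-network forward pass described in the statement above (data flowing from the input layer through the hidden layer to the output layer) can be sketched as follows; the layer sizes and weight values are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of a feedforward (BP-style) forward pass:
# input layer -> hidden layer -> output layer.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W_hidden, W_output):
    h = sigmoid(W_hidden @ x)  # input -> hidden activations
    return W_output @ h        # hidden -> output value

# Illustrative weights: 2 inputs, 3 hidden units, 1 output
x = np.array([0.5, -1.0])
W_hidden = np.array([[0.2, -0.4],
                     [0.7,  0.1],
                     [-0.3, 0.5]])
W_output = np.array([[1.0, -1.0, 0.5]])
y_out = forward(x, W_hidden, W_output)
```

Connections run in one direction only and units within a layer are not connected to each other, matching the structure described in the citing papers.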
“…They have a wide range of applications across almost all areas of science and engineering. Unlike conventional data processing methods, which require extensive expert knowledge when used in the modelling of electrical machines, neural networks provide a model-free, adaptive, fault-tolerant, parallel, and distributed processing solution [25].…”
Section: Neural Network
confidence: 99%
“…The process begins with the receipt of inputs by the input layer neurons, which are processed by each layer sequentially to the output layer. The connections between neurons are unidirectional and neurons in the same layer are unconnected to each other, as shown in Figure 4 [25].…”
Section: Structure
confidence: 99%
“…In this section, the convergence of the proposed fractional-order BP neural network is analyzed. According to previous studies [39–42], there are four necessary conditions for the convergence of BP neural networks:…”
Section: Convergence Analysis
confidence: 99%