2011
DOI: 10.1016/j.patrec.2011.07.016
Regularized online sequential learning algorithm for single-hidden layer feedforward neural networks


Cited by 103 publications (52 citation statements); references 15 publications.
“…Once the singular or ill-posed problem occurs, the generalization performance of OSELM deteriorates significantly. To overcome this problem, Huynh and Won [16] proposed a regularized OSELM (R-OSELM) based on Tikhonov regularization. The learning procedure of R-OSELM is almost the same as that of OSELM; it merely adds a regularization term to the autocorrelation matrix H^T H to avoid the singular or ill-posed problem and thereby improve the stability of the algorithm.…”
Section: R-OSELM
confidence: 99%
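The regularized initialization described in this statement can be sketched as follows. This is a minimal numpy illustration, not code from the paper; the regularization parameter `lam` and the matrix shapes are illustrative assumptions.

```python
import numpy as np

def regularized_init(H, T, lam=0.1):
    """Tikhonov-regularized least-squares solution for the output weights:
    beta = (H^T H + lam*I)^{-1} H^T T.
    Adding lam*I keeps the autocorrelation matrix H^T H invertible even
    when it is singular or ill-conditioned (e.g. fewer samples than
    hidden nodes), which is the instability R-OSELM addresses."""
    L = H.shape[1]                                  # number of hidden nodes
    P = np.linalg.inv(H.T @ H + lam * np.eye(L))    # regularized inverse
    beta = P @ H.T @ T                              # output weights
    return P, beta
```

Note that with `lam = 0` and a rank-deficient `H`, the plain `H.T @ H` inverse fails; the regularized version remains well defined.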
“…Despite being an excellent online learning algorithm, OSELM may still suffer from instability due to a potentially ill-conditioned matrix inversion, and its stability and generalization performance can degrade greatly once the autocorrelation matrix of the hidden-layer output matrix is singular or ill-conditioned. Regularization is an effective way to cope with this ill-posed problem, and a regularized OSELM (R-OSELM) based on bi-objective optimization with Tikhonov regularization was proposed in [16]. R-OSELM overcomes the potential ill-posed problem, tends to provide good generalization performance and stability, and has become a practical online modeling method in real applications.…”
Section: Introduction
confidence: 99%
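The online part of the scheme this statement refers to can be sketched as a recursive least-squares update. This is a generic OS-ELM-style chunk update in numpy, not the authors' code; the function name and shapes are assumptions. The regularization enters only through the initial P = (H0^T H0 + lam*I)^{-1}; subsequent updates are the standard recursion.

```python
import numpy as np

def oselm_update(P, beta, H_new, T_new):
    """One sequential (RLS-style) update for a new data chunk:
    P    <- P - P H^T (I + H P H^T)^{-1} H P
    beta <- beta + P_new H^T (T - H beta)
    If P was initialized with Tikhonov regularization, the recursion
    maintains P_k = (sum_i H_i^T H_i + lam*I)^{-1} at every step."""
    K = H_new.shape[0]                              # chunk size
    S = np.linalg.inv(np.eye(K) + H_new @ P @ H_new.T)
    P_new = P - P @ H_new.T @ S @ H_new @ P
    beta_new = beta + P_new @ H_new.T @ (T_new - H_new @ beta)
    return P_new, beta_new
```

A useful sanity check: starting from the regularized batch solution on an initial chunk and updating with a second chunk reproduces the regularized batch solution on all data combined.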
“…It has been reported that ELM may run into an ill-posed problem when it is over-parameterized or not properly initialized [11], [12], [13], [14]. A few attempts have been made to improve the regularization behavior of ELM [13] and OS-ELM [14].…”
Section: Stability Advantage of L-ELM
confidence: 99%
“…A few attempts have been made to improve the regularization behavior of ELM [13] and OS-ELM [14]. However, when the data is processed one-by-one (as in the case of system identification), the regularization improvement suggested in [14] was not found to improve the situation. Such an unstable parametric evolution can cause fatal problems when such online models are used in decision making.…”
Section: Stability Advantage of L-ELM
confidence: 99%