2017
DOI: 10.1007/s00521-017-3096-3
An improved kernel-based incremental extreme learning machine with fixed budget for nonstationary time series prediction

Cited by 13 publications (7 citation statements)
References 35 publications
“…They can learn data one by one or piece by piece. Besides, unlike the former two algorithms, the proposed EnsPKDE&IncLKDE algorithm is designed based upon the dynamic ensemble learning paradigm, and a novel framework of incremental learning is constructed and incorporated into EnsPKDE&IncLKDE; therefore, two incremental learning algorithms, i.e., Kernel-based Incremental ELM (KB-IELM) [34] and Online Incremental SVR (OI-SVR) [35], are also employed as comparison algorithms. Finally, for comparison with deep learning algorithms, the Long Short-Term Memory (LSTM) network [30] commonly used in TSP is utilized as another comparison algorithm.…”
Section: Methodsmentioning
confidence: 99%
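The KB-IELM comparison algorithm named above maintains a kernel regression model that absorbs samples one at a time. As an illustration only (not the cited paper's exact formulation), the following sketch keeps the inverse of the regularized kernel matrix and grows it with a block-matrix (Schur-complement) update when a new sample arrives, avoiding a full re-inversion; the class name, RBF kernel, and parameters `C` and `gamma` are assumptions for the example.

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel between two vectors."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

class IncrementalKernelELM:
    """Sketch of a kernel-based incremental ELM-style regressor.

    Maintains P = (K + I/C)^{-1} and extends it with a block
    matrix inverse per new sample, so no full re-inversion is needed.
    """
    def __init__(self, C=100.0, gamma=0.5):
        self.C, self.gamma = C, gamma
        self.X, self.y = [], []
        self.P = None  # inverse of the regularized kernel matrix

    def partial_fit(self, x, t):
        x = np.asarray(x, float)
        if self.P is None:
            k0 = rbf(x, x, self.gamma) + 1.0 / self.C
            self.P = np.array([[1.0 / k0]])
        else:
            k = np.array([rbf(x, xi, self.gamma) for xi in self.X])
            d = rbf(x, x, self.gamma) + 1.0 / self.C
            Pk = self.P @ k
            s = 1.0 / (d - k @ Pk)  # Schur complement of the new row/column
            top = self.P + s * np.outer(Pk, Pk)
            self.P = np.block([[top, -s * Pk[:, None]],
                               [-s * Pk[None, :], np.array([[s]])]])
        self.X.append(x)
        self.y.append(float(t))
        # output weights alpha = (K + I/C)^{-1} y
        self.alpha = self.P @ np.asarray(self.y)

    def predict(self, x):
        x = np.asarray(x, float)
        k = np.array([rbf(x, xi, self.gamma) for xi in self.X])
        return float(k @ self.alpha)
```

The incremental update reproduces exactly what a batch inversion of the regularized kernel matrix would give, which is the property that makes such online kernel methods attractive for nonstationary streams.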
“…The typical online classification algorithms include TSVDD [13], eAdaBoost [14], KB-IELM [15], OS-ELM [16], etc.…”
Section: The Related Research Work and Algorithmsmentioning
confidence: 99%
“…However, over time the distribution and trend of online data change [33], which puts forward new requirements for the model. Zhang Wei et al. [34] introduced an adaptive regularization factor to address the structural risk of the model in different nonlinear regions. Updating the kernel weight coefficients [35] is additionally carried out to improve the identification accuracy.…”
Section: Introductionmentioning
confidence: 99%
“…(1) KB-IELM [23], (2) ALD-KOS-ELM [20], (3) NOS-KELM [34], (4) FF-OSKELM [22] (a fixed forgetting factor in (0, 1]). Here OS-ELM methods are not considered because the randomness of the initial weight setting of the ELM input layer leads to randomness in the experimental results [19].…”
mentioning
confidence: 99%
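FF-OSKELM, named in the comparison above, discounts stale samples through a fixed forgetting factor, which is what keeps a fixed-budget model responsive to nonstationary data. As a generic illustration of that mechanism (a linear recursive-least-squares analogue, not the cited kernel algorithm), the sketch below exponentially down-weights old samples via a factor `lam` in (0, 1]; the class name and parameters are assumptions for the example.

```python
import numpy as np

class ForgettingRLS:
    """Recursive least squares with a fixed forgetting factor.

    Samples seen n steps ago are weighted by lam**n, so the model
    tracks a drifting target instead of averaging over all history.
    """
    def __init__(self, dim, lam=0.98, delta=100.0):
        self.lam = lam
        self.P = delta * np.eye(dim)  # inverse covariance estimate
        self.w = np.zeros(dim)        # current weight vector

    def update(self, x, t):
        x = np.asarray(x, float)
        Px = self.P @ x
        g = Px / (self.lam + x @ Px)   # gain vector
        self.w += g * (t - x @ self.w) # correct by the prediction error
        self.P = (self.P - np.outer(g, Px)) / self.lam
```

Setting `lam = 1` recovers ordinary recursive least squares with no forgetting; values slightly below 1 trade steady-state accuracy for faster adaptation to drift.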