2020
DOI: 10.1007/s10489-020-01802-4

EnsPKDE&IncLKDE: a hybrid time series prediction algorithm integrating dynamic ensemble pruning, incremental learning, and kernel density estimation

Abstract: Ensemble pruning can effectively overcome several shortcomings of the classical ensemble learning paradigm, such as its relatively high time and space complexity. However, each predictor has its own unique ability: a predictor may perform poorly on some samples yet very well on others, so blindly underestimating the power of specific predictors is unreasonable. Choosing the best predictor set for each query sample is exactly what dynamic ensemble pruning techniques address. This paper …

Cited by 11 publications (3 citation statements)
References 44 publications
“…Content may change prior to final publication. [lines from the cited forecasting algorithm, garbled in extraction, omitted] …windows, composed of a maximum of 20 lags [42], where these lags are selected using the Auto-correlation function (ACF) [43]. Each time series was split into three sequential samples with the following proportion: 50% for training, 25% for validation, and the last 25% of observations for testing.…”
Section: Experimental Protocol
confidence: 99%
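The experimental protocol quoted above (ACF-based lag selection within a bounded window, followed by a chronological 50/25/25 split) can be sketched as follows. This is an illustrative reconstruction, not the cited paper's code; the function names, the 95% white-noise significance band used as the ACF threshold, and the exact split arithmetic are assumptions.

```python
import numpy as np

def acf(x, max_lag=20):
    """Sample autocorrelation of x for lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom
                     for k in range(1, max_lag + 1)])

def select_lags(x, max_lag=20, threshold=None):
    """Keep lags whose |ACF| exceeds the threshold.

    Default threshold is the common 1.96/sqrt(n) white-noise band
    (an assumption; the cited protocol only says lags are selected
    via the ACF within a maximum of 20 lags).
    """
    if threshold is None:
        threshold = 1.96 / np.sqrt(len(x))
    r = acf(x, max_lag)
    return [k + 1 for k, v in enumerate(r) if abs(v) > threshold]

def sequential_split(x, train=0.5, val=0.25):
    """Chronological split: no shuffling, since order matters for time series."""
    n = len(x)
    i, j = int(n * train), int(n * (train + val))
    return x[:i], x[i:j], x[j:]
```

For example, a noiseless sinusoid would yield strong low-order autocorrelations, so `select_lags` keeps lag 1 among others, and `sequential_split` returns the first half for training, the next quarter for validation, and the final quarter for testing.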
“…Dynamic selection approaches commonly use the performance of the models in the Region of Competence (RoC) as a criterion to select the most competent ones [9], [12], [13], [17]- [19]. The RoC is composed of the k patterns in the in-sample (training or validation sets) [12], which are more similar to the test pattern according to some measure such as the Euclidean distance [20].…”
Section: Introduction
confidence: 99%
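The Region of Competence (RoC) described above — the k in-sample patterns most similar to the test pattern under Euclidean distance, used to score each model's local competence — can be sketched as below. This is a minimal illustration of the general idea from the cited works, not any specific paper's implementation; the function names and the "keep models tied for best local accuracy" selection rule are assumptions.

```python
import numpy as np

def region_of_competence(query, dsel_X, k=7):
    """Indices of the k in-sample patterns nearest to `query` (Euclidean)."""
    dists = np.linalg.norm(dsel_X - query, axis=1)
    return np.argsort(dists)[:k]

def select_competent(query, dsel_X, dsel_y, predictions, k=7):
    """Dynamic selection sketch: score each model by its accuracy
    inside the RoC and keep the model(s) tying the best score.

    `predictions[m]` holds model m's predictions on the in-sample set.
    """
    roc = region_of_competence(query, dsel_X, k)
    scores = [(preds[roc] == dsel_y[roc]).mean() for preds in predictions]
    best = max(scores)
    return [m for m, s in enumerate(scores) if s == best]
```

The design choice worth noting is that competence is evaluated only on the RoC, so a model that is weak globally can still be selected for queries falling in the region where it happens to excel — exactly the motivation for dynamic ensemble pruning given in the abstract.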
“…Some works focused on the types of trainers, either homogeneous [3], [4] or heterogeneous [5], [6]. Some works looked at the purpose of using ensemble learning, either for classifying [7]- [26], clustering [27]- [34], regression [35]- [37], or streaming [38]- [42].…”
Section: Introduction
confidence: 99%