2019
DOI: 10.1109/tcyb.2017.2774266

Hybrid Incremental Ensemble Learning for Noisy Real-World Data Classification

Abstract: Traditional ensemble learning approaches explore the feature space and the sample space separately, which prevents them from constructing more powerful learning models for noisy real-world dataset classification. The random subspace method searches only over the selection of features, while the bagging approach searches only over the selection of samples. To overcome these limitations, we propose the hybrid incremental ensemble learning (HIEL) approach, which takes into consideration the feature space and the …
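
As a rough illustration of combining the two axes of randomness the abstract contrasts, each base learner can be trained on a random subset of samples (bagging) and a random subset of features (random subspace) at the same time. A minimal sketch, assuming scikit-learn; this is not the authors' HIEL implementation, and the sampling fractions are illustrative:

    # Each base tree sees a bootstrap sample of instances AND a random
    # feature subset, combining bagging with the random subspace method.
    from sklearn.ensemble import BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier

    hybrid = BaggingClassifier(
        DecisionTreeClassifier(),  # base learner
        n_estimators=50,
        max_samples=0.8,           # sample-space randomness (bagging)
        max_features=0.5,          # feature-space randomness (random subspace)
        bootstrap=True,
        random_state=0,
    )
    # Usage: hybrid.fit(X_train, y_train); hybrid.predict(X_test)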

Cited by 38 publications (19 citation statements)
References 67 publications
“…There are also hybrid approaches to weigh base classifiers in Random Subspace [45], and weigh feature subspaces in Bagging [46]. Yu et al [47]…”
Section: Ensemble Methods
confidence: 99%
“…It selects only good base classifiers from the original ensemble model each time new data are to be predicted, and adds or deletes classifiers on demand. Z. Yu et al [21] proposed a hybrid incremental ensemble learning (HIEL) approach, which adopts the bagging algorithm to generate diverse ensemble members while using AdaBoost to evaluate the importance of instances, and incrementally changes the weights of the base classifiers. In [22], a dynamic cost-sensitive weighting method, based on classification performance and stochastic sensitivities, is adopted to relieve concept drift.…”
Section: Related Work
confidence: 99%
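
A hedged sketch of the scheme summarized in that statement: bootstrap sampling for member diversity, AdaBoost-style reweighting of hard instances, and classifier weights derived from weighted error. It assumes NumPy arrays and scikit-learn trees; the function name, tree depth, and round count are illustrative, not taken from [21]:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def hiel_like_fit(X, y, n_rounds=10, seed=0):
        rng = np.random.default_rng(seed)
        n = len(y)
        w = np.full(n, 1.0 / n)          # instance importance weights
        models, alphas = [], []
        for _ in range(n_rounds):
            # Bagging step: bootstrap sample drawn by current importance.
            idx = rng.choice(n, size=n, replace=True, p=w)
            clf = DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx])
            miss = clf.predict(X) != y
            err = np.clip(w[miss].sum(), 1e-10, 1.0 - 1e-10)
            alpha = 0.5 * np.log((1.0 - err) / err)  # classifier weight
            # AdaBoost-style update: emphasize misclassified instances.
            w = w * np.where(miss, np.exp(alpha), 1.0)
            w /= w.sum()
            models.append(clf)
            alphas.append(alpha)
        return models, np.array(alphas)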
“…The number of decision trees and the maximum tree depth are two important parameters for RF and ERF according to [6]. Therefore, they are developed as terminals, NT and MD. The values for NT are in the range [50, 500] with a step of 10, and the values for MD are in the range [10, 100] with a step of 10. The maximum values for NT and MD are set according to [6].…”
Section: New Terminal Set
confidence: 99%
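
The two value grids translate directly into Python ranges; mapping the terminals to scikit-learn's RandomForestClassifier parameters n_estimators and max_depth is an assumption for illustration, since [6] uses them inside a GP terminal set:

    from sklearn.ensemble import RandomForestClassifier

    NT_values = list(range(50, 501, 10))  # number of trees: 50, 60, ..., 500
    MD_values = list(range(10, 101, 10))  # max tree depth: 10, 20, ..., 100

    # Example: instantiate a forest from one (NT, MD) pair.
    rf = RandomForestClassifier(n_estimators=NT_values[0],
                                max_depth=MD_values[0],
                                random_state=0)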