2017
DOI: 10.1016/j.knosys.2017.02.014

Self-training for multi-target regression with tree ensembles

Cited by 55 publications (29 citation statements)
References 45 publications
“…In our future work, we intend to pursue extensive empirical experiments to compare the proposed WvEnSL with other algorithms belonging to different SSL classes, and to evaluate its performance using various component self-labeled algorithms and base learners. Furthermore, since our preliminary numerical experiments are quite encouraging, our next step is to explore the performance of the proposed algorithm on imbalanced datasets [39,40] and to incorporate our proposed methodology into multi-target problems [41][42][43]. Additionally, another interesting aspect is to use other component classifiers in the ensemble and to enhance our proposed framework with more sophisticated and theoretically sound criteria for the development of an advanced weighted voting strategy.…”
Section: Discussion (mentioning)
confidence: 99%
“…Applying unlabeled data in semi-supervised self-training is beneficial, but in some cases it may degrade the classifier's performance if examples are incorrectly assigned to an improper class by the initial classifier (Piroonsup et al., 2018; Levatić et al., 2017). Some studies try to avoid this issue mainly by post-processing, e.g., self-training with editing (Zhou et al., 2005), or by applying a noise-filtering method to remove the mislabeled data (Triguero et al., 2014).…”
Section: Reliability Score (mentioning)
confidence: 99%
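The excerpt above describes the core risk of self-training: pseudo-labels from an imperfect initial classifier can poison later iterations. The following is a minimal sketch of the idea, using a plain confidence cutoff to discard likely-mislabeled examples before they re-enter the labeled pool; the cutoff value, the random-forest base learner, and the function name self_train are illustrative assumptions, not the setup of the cited papers (which use editing and dedicated noise filters instead).

```python
# Minimal self-training sketch: pseudo-labels below the confidence cutoff are
# discarded rather than fed back to the learner, mitigating the mislabeling
# problem the excerpt describes. All parameter values are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_train(X_lab, y_lab, X_unlab, confidence_cutoff=0.9, max_iter=10):
    X_lab, y_lab, X_unlab = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(max_iter):
        if len(X_unlab) == 0:
            break
        clf = RandomForestClassifier(n_estimators=100).fit(X_lab, y_lab)
        proba = clf.predict_proba(X_unlab)
        conf = proba.max(axis=1)           # confidence of the predicted class
        keep = conf >= confidence_cutoff   # filter out likely-mislabeled examples
        if not keep.any():
            break
        # Move confidently pseudo-labeled examples into the labeled set.
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, clf.classes_[proba[keep].argmax(axis=1)]])
        X_unlab = X_unlab[~keep]
    return RandomForestClassifier(n_estimators=100).fit(X_lab, y_lab)
```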
“…where M_j(u) is the prediction for sample u returned by the j-th estimator, and M(u) is the prediction for u returned by the ensemble (i.e., the average of the predictions across all trees). This variance measure has been previously used in the context of bagging, where it performed the best among various approaches for estimating the reliability of regression predictions (Bosnić et al., 2008; Levatić et al., 2017).…”
Section: Reliability Score (mentioning)
confidence: 99%
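The reliability score quoted above is simply the per-sample variance of the individual tree predictions M_j(u) around the ensemble mean M(u): when the trees agree, the variance is low and the prediction is deemed reliable. A minimal sketch follows, assuming a scikit-learn RandomForestRegressor as the bagged ensemble (the cited work applies the measure to tree ensembles more generally):

```python
# Variance-based reliability: per-sample variance of the individual tree
# predictions around the ensemble mean. Lower variance = higher reliability.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def prediction_variance(forest: RandomForestRegressor, X):
    # Stack M_j(u): one row of per-sample predictions for each tree j.
    per_tree = np.stack([tree.predict(X) for tree in forest.estimators_])
    ensemble_mean = per_tree.mean(axis=0)  # M(u), the bagged prediction
    # Mean squared deviation of each tree's prediction from the ensemble mean.
    return ((per_tree - ensemble_mean) ** 2).mean(axis=0)
```

In a self-training loop, the unlabeled examples with the smallest variance would be pseudo-labeled first.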
“…Most recently, Levatić et al. (2017) proposed an algorithm that automatically identifies an appropriate threshold, from a candidate list, for the reliability of predictions. The automatically selected threshold is then used in the next iteration of the self-training procedure.…”
Section: Related Work (mentioning)
confidence: 99%
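The excerpt does not spell out how the threshold is chosen from the candidate list, so the sketch below uses one plausible criterion: keep the candidate whose retained ("reliable") validation predictions have the lowest mean error. The criterion, the helper name select_threshold, and the min_kept guard are assumptions for illustration, not the actual selection rule of Levatić et al. (2017).

```python
# Hedged sketch of picking a reliability threshold from a candidate list.
# Each candidate keeps only the validation examples whose reliability score
# clears it; candidates are scored by the mean error of the retained examples.
import numpy as np

def select_threshold(reliability_val, errors_val, candidates, min_kept=10):
    """reliability_val: reliability scores (e.g., negative prediction variance)
    for validation examples; errors_val: their actual prediction errors."""
    best, best_err = None, np.inf
    for t in candidates:
        kept = reliability_val >= t
        if kept.sum() < min_kept:      # skip thresholds that keep too few examples
            continue
        err = errors_val[kept].mean()  # mean error of examples deemed reliable
        if err < best_err:
            best, best_err = t, err
    return best
```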