2021
DOI: 10.1016/j.ins.2020.06.017

Multi-label classification with weighted classifier selection and stacked ensemble

Abstract: Multi-label classification has attracted increasing attention for use in various application scenarios, such as medical diagnosis and semantic annotation. A large number of algorithms have been proposed for multi-label classification, many of which are ensemble-based. However, these ensemble-based methods usually employ bagging schemes for ensemble construction, with comparatively few stacked ensembles for multi-label classification. Existing research on stacked ensemble schemes remains active, but several issues r…

Cited by 77 publications (31 citation statements) | References 46 publications
“…With the saved models and the predictions in the validation set, a meta-layer (another learning algorithm) is used to learn how to combine the predictions. Recent work reported high effectiveness with stacking for multiple ATC tasks, such as topic classification (Campos et al., 2017; Abuhaiba and Dawoud, 2017), sentiment analysis (Carvalho and Plastino, 2020; Onan et al., 2016) and multi-label classification (Xia et al., 2020; Weng et al., 2019). In particular, stacking provided substantial effectiveness improvements on recently proposed decision-tree-based algorithms (Campos et al., 2017) and with methods trained on different representations (including word embeddings) (Carvalho and Plastino, 2020; Pelle et al., 2018; Onan et al., 2016).…”
Section: Stacking
confidence: 99%
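The statement above outlines the stacking scheme: base models are trained, their predictions on held-out data become meta-features, and a meta-layer learns how to combine them. Below is a minimal sketch of that scheme in a generic single-label setting with scikit-learn; the dataset, base learners, and logistic-regression meta-layer are illustrative assumptions, not the choices made in the cited works.

```python
# Minimal stacking sketch (illustrative, not the cited authors' implementation).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_learners = [RandomForestClassifier(random_state=0), GaussianNB()]

# Out-of-fold predictions on the training data play the role of the
# "predictions in the validation set": they become the meta-features.
meta_train = np.column_stack([
    cross_val_predict(clf, X_train, y_train, cv=5, method="predict_proba")[:, 1]
    for clf in base_learners
])

# Fit ("save") each base model on the full training set for later use.
for clf in base_learners:
    clf.fit(X_train, y_train)
meta_test = np.column_stack(
    [clf.predict_proba(X_test)[:, 1] for clf in base_learners]
)

# The meta-layer (another learning algorithm) learns how to combine
# the base-level predictions.
meta_layer = LogisticRegression().fit(meta_train, y_train)
print("stacked accuracy:", meta_layer.score(meta_test, y_test))
```

Cross-validated out-of-fold predictions are used here in place of a single held-out validation split; both are common variants of the same idea.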
“…There are different ensemble learning techniques in the literature, which aggregate the results of several base learners to achieve the final decision. 41 These techniques can be categorized according to the type of base learners into (1) an ensemble of the same base learner with different parameters (e.g., different training samples) 42 and (2) an ensemble of different base learners. 43 There are also some hybrid ensemble learning and feature selection methods, 44-48 which utilize a feature selection algorithm to select an appropriate feature subset for the ensemble learning model.…”
Section: Literature Review
confidence: 99%
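The statement above distinguishes (1) ensembles of the same base learner varied through different parameters or training samples and (2) ensembles of different base learners. A minimal scikit-learn sketch of both categories follows; the specific estimators and dataset are illustrative assumptions, not the methods from the cited references.

```python
# Homogeneous vs. heterogeneous ensembles (illustrative sketch).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# (1) Same base learner, trained on different bootstrap samples (bagging).
homogeneous = BaggingClassifier(
    DecisionTreeClassifier(), n_estimators=25, random_state=0
)

# (2) Different base learners, combined by majority vote.
heterogeneous = VotingClassifier(estimators=[
    ("tree", DecisionTreeClassifier()),
    ("nb", GaussianNB()),
    ("lr", LogisticRegression(max_iter=1000)),
])

for name, model in [("bagging", homogeneous), ("voting", heterogeneous)]:
    # Training accuracy only, just to show both ensembles run end to end.
    print(name, model.fit(X, y).score(X, y))
```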
“…MLWSE [18] is a stacked ensemble algorithm that exploits pairwise label correlations and uses label-specific meta-features to facilitate the classification process. In this algorithm, two classifiers whose labels are highly correlated share highly similar weights, as opposed to two classifiers whose labels are weakly correlated.…”
Section: Toy Example
confidence: 99%
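The statement above captures the key idea of weighted classifier selection in MLWSE: classifiers tied to strongly correlated labels should receive similar weights. The sketch below illustrates that weight-sharing effect with a correlation-weighted (graph-Laplacian-style) penalty; the correlation measure, penalty form, and all variable names are illustrative assumptions rather than the exact MLWSE objective.

```python
# Weight sharing driven by label correlations (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
Y = rng.integers(0, 2, size=(100, 4))     # toy multi-label matrix (samples x labels)

C = np.abs(np.corrcoef(Y, rowvar=False))  # pairwise label correlations
w = rng.random(4)                         # one weight per per-label classifier

def sharing_penalty(w, C):
    """Penalize weight differences more strongly for highly correlated labels."""
    diffs = (w[:, None] - w[None, :]) ** 2
    return 0.5 * np.sum(C * diffs)

# A few gradient steps on the penalty alone, to show that weights of
# correlated labels are pulled closer together.
L = np.diag(C.sum(axis=1)) - C            # Laplacian of the correlation graph
for _ in range(100):
    w -= 0.01 * (2 * L @ w)               # gradient of 0.5 * sum C_ij (w_i - w_j)^2
print("penalty:", round(sharing_penalty(w, C), 4))
print("weights after sharing:", np.round(w, 3))
```

In a full model this penalty would be added to the classification loss, so that weights stay discriminative while still being smoothed across correlated labels.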