2013
DOI: 10.1007/978-3-642-38067-9_13

Selective Ensemble of Classifier Chains

Abstract: In multi-label learning, the relationship among labels is widely accepted to be important, and various methods have been proposed to exploit label relationships. Among them, the ensemble of classifier chains (ECC), which builds multiple chaining classifiers with random label orders, has drawn much attention. However, the ensembles generated by ECC are often unnecessarily large, leading to high extra computational and storage costs. To tackle this issue, in this paper, we propose selective ensemble of classifi…
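The idea the abstract describes — train many classifier chains with random label orders, then prune the ensemble — can be sketched with scikit-learn's `ClassifierChain`. This is a minimal illustration, not the paper's actual SECC optimization: here the pruning step is a simple greedy forward selection maximizing micro-F1 on a held-out validation set, used as a stand-in for the selection procedure the paper proposes.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.multioutput import ClassifierChain

# Toy multi-label data; a validation split is held out for pruning.
X, Y = make_multilabel_classification(
    n_samples=300, n_classes=5, n_labels=3, random_state=0)
X_tr, X_val, Y_tr, Y_val = train_test_split(
    X, Y, test_size=0.3, random_state=0)

# ECC: an ensemble of classifier chains, each with a random label order.
chains = [
    ClassifierChain(LogisticRegression(max_iter=1000),
                    order="random", random_state=i)
    for i in range(10)
]
for chain in chains:
    chain.fit(X_tr, Y_tr)

# Cache each chain's binary predictions on the validation set.
preds = np.array([chain.predict(X_val) for chain in chains])

# Greedy forward selection (illustrative pruning, not the paper's method):
# repeatedly add the chain whose inclusion most improves the micro-F1 of
# the thresholded ensemble average; stop when no chain improves it.
selected, best_f1 = [], 0.0
while True:
    gains = []
    for i in range(len(chains)):
        if i in selected:
            gains.append(-1.0)  # already in the ensemble
            continue
        avg = preds[selected + [i]].mean(axis=0) >= 0.5
        gains.append(f1_score(Y_val, avg, average="micro"))
    i = int(np.argmax(gains))
    if selected and gains[i] <= best_f1:
        break
    best_f1 = gains[i]
    selected.append(i)

print(f"kept {len(selected)} of {len(chains)} chains, micro-F1 = {best_f1:.3f}")
```

The pruned ensemble typically keeps only a fraction of the chains while matching or exceeding the full ensemble's validation F1, which is the cost/quality trade-off the abstract motivates.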

Cited by 21 publications (10 citation statements)
References 24 publications
“…All these methods have in common with the original CC that they can only effectively exploit one direction of a dependency between two labels. Li and Zhou (2013) propose to filter the CCs of an ensemble of CCs (Read et al. 2011) in order to maximize F1. Unfortunately, no post-analysis of the useful pairing directions is performed.…”
Section: Label Dependencies and Stacking
“…In traditional supervised learning, a number of methods have been developed based on different techniques, such as genetic algorithms [35], semi-definite programming [33], clustering [9], and ℓ1-norm regularized sparse optimization [14]. In [15], in order to reduce the size of ECC, Li and Zhou proposed SECC (i.e., selective ensemble of classifier chains), which to the best of our knowledge is the first work on selective ensemble in the multi-label setting.…”
Section: Related Work
“…Note that the ensemble size of ECC100 is 100, and the Random strategy generates ensembles of the same size as MUSE [12,17,15]. Moreover, compared with previous work [15], the MUSE approach is more general; for instance, it can optimize a large variety of performance measures, while SECC considers only the F1-score.…”
Section: Related Work
“…Previous studies (Vinyals, Bengio, and Kudlur 2016; Yang et al. 2018a) show that ordering has a significant impact on performance. This issue also appears in the PCC; it has been addressed by ensemble averaging (Read et al. 2011; Cheng, Hüllermeier, and Dembczynski 2010), ensemble pruning (Li and Zhou 2013), pre-analysis of the label dependencies with Bayes nets (Sucar et al. 2014), and integrating beam search with training to determine a suitable tag ordering (Kumar et al. 2013). However, these approaches rely on training multiple models to ensemble or to determine a proper order, which is computationally expensive.…”
Section: Introduction