Feature selection for multi-label classification using multivariate mutual information (2013)
DOI: 10.1016/j.patrec.2012.10.005

Cited by 251 publications (158 citation statements) · References 15 publications

“…These two datasets are used by both Doquire & Verleysen [4] and Lee & Kim [9] to evaluate their criteria, with which we compare our own in Section 5. Table 2 summarises some characteristics of these datasets.…”
Section: Empirical Comparison of the Assumptions in the Label Space
confidence: 99%
“…We compare J_{Y:full, X:partial}, the criterion with the best performance under our analysis, with two different criteria proposed recently in the literature: the pruned transformation criterion proposed by Doquire & Verleysen [4] (we prune rare examples using thresholds given in that work) and the multivariate mutual information criterion proposed by Lee & Kim [9]. As we can see in Figure 5, the proposed criterion J_{Y:full, X:partial} consistently performs well across the different numbers of selected features and the different datasets.…”
Section: Comparison to the State-of-the-art
confidence: 99%
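The two families of criteria being compared here lend themselves to a brief illustration. The sketch below is a hedged approximation, not the exact criteria of Doquire & Verleysen [4] or Lee & Kim [9]: it scores one discrete feature against a label matrix Y first via a pruned label-powerset transformation (each label set becomes one class, and combinations rarer than min_count are dropped), and then via a sum of per-label pairwise MI terms, a common low-order surrogate for multivariate mutual information. The function names and the min_count threshold are illustrative assumptions.

```python
# Hedged sketch: neither function reproduces the exact published criteria.
import numpy as np
from collections import Counter
from sklearn.metrics import mutual_info_score

def pruned_transformation_score(feature, Y, min_count=2):
    """MI between one discrete feature and the label-powerset class,
    after pruning label combinations rarer than min_count (illustrative
    stand-in for a pruned transformation criterion)."""
    classes = [tuple(row) for row in Y]            # each label set -> one class
    counts = Counter(classes)
    keep = [i for i, c in enumerate(classes) if counts[c] >= min_count]
    ids = {c: i for i, c in enumerate(counts)}     # encode tuples as class ids
    return mutual_info_score([feature[i] for i in keep],
                             [ids[classes[i]] for i in keep])

def per_label_score(feature, Y):
    """Sum of pairwise I(feature; label_k) terms, a low-order surrogate
    for the multivariate mutual information with the full label set."""
    return sum(mutual_info_score(feature, Y[:, k]) for k in range(Y.shape[1]))

# Toy usage on synthetic discrete data: score 5 features under each criterion.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 5))              # 5 discrete features
Y = rng.integers(0, 2, size=(200, 3))              # 3 binary labels
pruned = [pruned_transformation_score(X[:, j], Y) for j in range(X.shape[1])]
pairwise = [per_label_score(X[:, j], Y) for j in range(X.shape[1])]
```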
“…Multi-label feature selection is considered a solution that can effectively avoid the aforementioned problems [5], [6]. Conventional multi-label feature selection methods evaluate the importance of each feature independently; therefore, the dependencies among features are ignored [2].…”
Section: Introduction
confidence: 99%
“…Conventional multi-label feature selection methods evaluate the importance of each feature independently; therefore, the dependencies among features are ignored [2]. As a result, a compact multi-label feature subset cannot be obtained because a selected feature subset will necessarily contain redundant features, that is, features that are similar to one another [6]. To resolve this practical problem, a multi-label feature selection method must consider the feature dependencies during its feature selection process.…”
confidence: 99%
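To make the redundancy argument concrete, here is a minimal, hedged sketch in the spirit of the mRMR rule of Peng et al., which is not the method of the cited paper: a purely independent ranking will happily keep an exact duplicate of an already-selected feature, while a greedy relevance-minus-redundancy objective penalizes it. All names below are illustrative.

```python
# Hedged sketch of redundancy-aware selection (mRMR-style, not the cited method).
import numpy as np
from sklearn.metrics import mutual_info_score

def relevance(X, Y, j):
    """Relevance of feature j: sum of MI with each label column."""
    return sum(mutual_info_score(X[:, j], Y[:, k]) for k in range(Y.shape[1]))

def rank_independently(X, Y, k):
    """Top-k features by relevance alone; feature dependencies are ignored."""
    return sorted(range(X.shape[1]), key=lambda j: -relevance(X, Y, j))[:k]

def select_with_redundancy_penalty(X, Y, k):
    """Greedy: pick the feature maximizing relevance minus its mean MI
    with the features already selected."""
    selected, remaining = [], list(range(X.shape[1]))
    rel = {j: relevance(X, Y, j) for j in remaining}
    while len(selected) < k and remaining:
        def objective(j):
            if not selected:
                return rel[j]
            red = np.mean([mutual_info_score(X[:, j], X[:, s]) for s in selected])
            return rel[j] - red
        best = max(remaining, key=objective)
        selected.append(best)
        remaining.remove(best)
    return selected

# Demo: feature 4 is an exact copy of feature 0, and both labels are
# deterministic functions of feature 0, so independent ranking keeps
# both copies while the penalized selection skips the duplicate.
rng = np.random.default_rng(0)
X = rng.integers(0, 8, size=(300, 4))
X = np.column_stack([X, X[:, 0]])                  # feature 4 duplicates feature 0
Y = np.column_stack([(X[:, 0] > 3).astype(int), X[:, 0] % 2])
print(rank_independently(X, Y, 2))                 # [0, 4]: both copies selected
print(select_with_redundancy_penalty(X, Y, 2))     # picks 0, then a non-duplicate
```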