2015
DOI: 10.1080/18756891.2015.1129587
Feature Selection for Multi-label Learning: A Systematic Literature Review and Some Experimental Evaluations

Abstract: Feature selection can remove unimportant features from the data and support the construction of better classifiers. This task, when applied to multi-label data, where each instance is associated with a set of labels, supports emerging applications. Although multi-label data usually exhibit label relations, label dependence has been little studied in feature selection. We propose two multi-label feature selection algorithms that take label relations into account. These methods were experimentally competitive with traditional approaches.

Cited by 8 publications
(9 citation statements)
References 17 publications
“…Several FS methods have been analyzed in order to select features from each sample, including Information Gain, F-score, Relief, mutual information, and normalized mutual information. Based on the existing literature reviews [27] [30], these methods and their extensions prove to be effective in the case of multi-label FS and, in addition, they are able to cope with the feature–label correlation [1].…”
Section: A. Baseline Feature Selection Methods
confidence: 99%
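The statement above scores features by criteria such as mutual information. As a minimal, illustrative sketch (not the cited papers' implementation), mutual information between a discrete feature and a label can be estimated from empirical counts; an informative feature scores higher than a noisy one:

```python
from collections import Counter
from math import log2

def mutual_information(feature, label):
    """Estimate I(X;Y) in bits from two equal-length discrete sequences."""
    n = len(feature)
    px = Counter(feature)               # marginal counts of X
    py = Counter(label)                 # marginal counts of Y
    pxy = Counter(zip(feature, label))  # joint counts of (X, Y)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint * log2( p_joint / (p(x) * p(y)) )
        mi += p_joint * log2(p_joint * n * n / (px[x] * py[y]))
    return mi

# Toy data: f1 tracks the label perfectly, f2 is unrelated noise.
label = [0, 0, 1, 1, 0, 1, 0, 1]
f1    = [0, 0, 1, 1, 0, 1, 0, 1]
f2    = [0, 1, 0, 1, 0, 1, 0, 1]
print(mutual_information(f1, label))  # 1.0 bit (fully informative)
print(mutual_information(f2, label))  # lower score
```

Feature selection then keeps the top-k features by score; for multi-label data the score is typically aggregated over the individual labels.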
“…Relief is the most effective and commonly used FS method in multi-label text classification systems [27] [30].…”
Section: Mutual Information (MI)
confidence: 99%
“…Notwithstanding, Spolaôr et al. in [35] and [36] have proposed extensions of ReliefF towards the task of MLC by redefining the distance function. More precisely, they propose to use normalized Hamming loss or Jaccard dissimilarity as distance functions between two examples.…”
Section: Related Work
confidence: 99%
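The two label-set distances mentioned in the quote are standard measures; a minimal sketch of both, on binary label vectors (function names are illustrative, not taken from [35] or [36]):

```python
def hamming_loss(y1, y2):
    """Fraction of label positions where the two binary vectors disagree."""
    return sum(a != b for a, b in zip(y1, y2)) / len(y1)

def jaccard_dissimilarity(y1, y2):
    """1 - |intersection| / |union| over the sets of relevant labels."""
    s1 = {i for i, v in enumerate(y1) if v}
    s2 = {i for i, v in enumerate(y2) if v}
    if not s1 and not s2:   # both label sets empty: identical
        return 0.0
    return 1 - len(s1 & s2) / len(s1 | s2)

a = [1, 0, 1, 0]
b = [1, 1, 0, 0]
print(hamming_loss(a, b))           # 0.5  (2 of 4 positions differ)
print(jaccard_dissimilarity(a, b))  # 2/3  (share 1 of 3 relevant labels)
```

In a ReliefF-style algorithm, such a distance replaces the single-label equality test when deciding how much a neighbor's label set differs from the query instance's.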
“…In the early days of relational databases, performance issues were widespread because of limited hardware resources and immature optimizers, so performance was a priority consideration. Even today, despite the huge growth in resources, the amount of available data is growing even faster due to technological advances [1] and the number and variety of connected devices [2], so performance continues to be of critical importance.…”
Section: Introduction
confidence: 99%