Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence 2020
DOI: 10.24963/ijcai.2020/348
Multi-label Feature Selection via Global Relevance and Redundancy Optimization

Abstract: Information-theoretic methods have attracted great attention in recent years and achieved promising results in handling multi-label data with high dimensionality. However, most existing methods are either directly transplanted from heuristic single-label feature selection methods or inefficient in exploiting label information. Thus, they may fail to obtain an optimal feature selection result shared by multiple labels. In this paper, we propose a general global optimization framew…
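The abstract describes selecting features by balancing relevance to the labels against redundancy among features. The following is a minimal sketch of that general idea on discrete toy data, in the greedy mRMR style; it is not the paper's GRRO algorithm (which solves a global optimization problem), and all function names and data here are illustrative assumptions.

```python
# Sketch of greedy relevance-redundancy feature selection for multi-label
# data. NOT the paper's GRRO method; a generic information-theoretic
# baseline for illustration only.
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information (nats) between two discrete vectors."""
    n = len(x)
    joint = {}
    for a, b in zip(x, y):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    px = {a: np.sum(x == a) / n for a in set(x)}
    py = {b: np.sum(y == b) / n for b in set(y)}
    mi = 0.0
    for (a, b), c in joint.items():
        p = c / n
        mi += p * np.log(p / (px[a] * py[b]))
    return mi

def greedy_select(X, Y, k):
    """Pick k features maximizing label relevance minus average redundancy."""
    n_feat = X.shape[1]
    # Relevance: MI summed over all labels (a simple multi-label extension).
    relevance = [sum(mutual_information(X[:, j], Y[:, l])
                     for l in range(Y.shape[1])) for j in range(n_feat)]
    selected = []
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            redundancy = sum(mutual_information(X[:, j], X[:, s])
                             for s in selected)
            score = relevance[j] - (redundancy / len(selected)
                                    if selected else 0.0)
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Toy data: labels copy features 0 and 2; the rest are noise.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 6))
Y = np.stack([X[:, 0], X[:, 2]], axis=1)
print(sorted(greedy_select(X, Y, 2)))  # the two informative features
```

A global method such as the one the abstract proposes would instead optimize over all features jointly rather than adding them one at a time.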

Cited by 60 publications (24 citation statements); References 13 publications.
“…Feature selection method can reduce spatial dimensions and decrease model training time. Therefore, this article uses principal component analysis (PCA) ( Abdi and Williams, 2010 ), GRRO ( Zhang et al , 2020a,b,c ), MDFS ( Zhang et al , 2019 ), MDDM ( Zhang and Zhou, 2010 ), MVMD ( Xu et al , 2016 ), MLSI ( Yu et al , 2005 ) to eliminate irrelevant features. Through LOOCV test, the feature subset obtained by each method is put into MLFE.…”
Section: Results
Confidence: 99%
“…Five datasets are used to verify the effectiveness of the model. The Gram-positive bacteria dataset ( Dehzangi et al , 2015 ) is the training set, while the Gram-negative bacteria dataset ( Dehzangi et al , 2015 ), the virus dataset ( Shen and Chou, 2010 ), the SARS-CoV-2 dataset ( Zhang et al , 2020 b ) and the newPlant dataset ( Wan et al , 2012 ) are the test sets together. The Gram-positive bacteria dataset, Gram-negative bacteria dataset, virus dataset come from the Swiss-Prot database and the breakdown of each dataset is shown in Supplementary Tables S1–S3 .…”
Section: Methods
Confidence: 99%