2020
DOI: 10.3390/e22070797

Multi-Label Feature Selection Based on High-Order Label Correlation Assumption

Abstract: Multi-label data often involve features with high dimensionality and complicated label correlations, resulting in a great challenge for multi-label learning. Feature selection plays an important role in multi-label learning to address multi-label data. Exploring label correlations is crucial for multi-label feature selection. Previous information-theoretical-based methods employ the strategy of cumulative summation approximation to evaluate candidate features, which merely considers low-order label correlation…

Cited by 20 publications (14 citation statements). References 45 publications.
“…The proposed algorithm PDMFS is compared with five feature selection algorithms, namely Multi-Label Dimensionality Reduction via Dependence Maximization (MDDMspc, MDDMproj) [27], Feature Selection for Multi-label Classification Using Multivariate Mutual Information (PMU) [26], Multi-label Feature Selection algorithm based on Neighborhood Mutual Information (MFNMIpes) [31], Multi-label Feature Selection using Multi-Criteria Decision Making (MFS-MCDM) [28], and Multi-Label Feature Selection Considering the Max-Correlation in High-Order Labels (MCMFS) [11]. PMU, MDDMspc, MDDMproj, and MFNMIpes, as classical information-theoretic feature selection algorithms, are used to assess the performance of PDMFS.…”
Section: Parameter Settings and Experimental Results (mentioning)
confidence: 99%
“…The example-based evaluation criteria include the Hamming Loss (HL) and Zero-One Loss (ZOL) [33]. The lower the values of these two indicators, the better the classification performance.…”
Section: Preliminaries (mentioning)
confidence: 99%
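The two example-based criteria quoted above are simple to state concretely. This is a minimal sketch assuming 0/1 label-indicator matrices; the function names are illustrative, not from the cited papers:

```python
import numpy as np

def hamming_loss(Y_true, Y_pred):
    # Fraction of individual label slots predicted incorrectly,
    # averaged over all examples and labels.
    Y_true, Y_pred = np.asarray(Y_true), np.asarray(Y_pred)
    return float(np.mean(Y_true != Y_pred))

def zero_one_loss(Y_true, Y_pred):
    # Fraction of examples whose full label vector is not an exact match.
    Y_true, Y_pred = np.asarray(Y_true), np.asarray(Y_pred)
    return float(np.mean(np.any(Y_true != Y_pred, axis=1)))

Y_true = [[1, 0, 1], [0, 1, 0]]
Y_pred = [[1, 0, 0], [0, 1, 0]]
print(hamming_loss(Y_true, Y_pred))   # 1 of 6 label slots wrong
print(zero_one_loss(Y_true, Y_pred))  # 1 of 2 label vectors mismatched
```

ZOL is the stricter of the two: a single wrong label makes the whole example count as an error, whereas HL penalizes each wrong slot individually.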
“…Problem transformation converts multi-label learning into traditional single-label learning; examples include Binary Relevance (BR) [36], Pruned Problem Transformation (PPT) [37], and Label Powerset (LP) [38]. BR treats the prediction of each label as an independent binary classification problem and trains an individual classifier for each label on all of the training data [33]. However, it ignores the relationships between the labels, so it can end up with imbalanced data.…”
Section: Related Work (mentioning)
confidence: 99%
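The per-label training that BR performs, as described in the snippet above, fits in a few lines. A dependency-free sketch with a toy nearest-centroid base learner; the class names are my own, not from the cited works, and any binary classifier could stand in:

```python
import numpy as np

class NearestCentroidBinary:
    # Tiny stand-in binary classifier (nearest class centroid), used only
    # to keep the sketch dependency-free.
    def fit(self, X, y):
        self.c0 = X[y == 0].mean(axis=0)
        self.c1 = X[y == 1].mean(axis=0)
        return self

    def predict(self, X):
        d0 = np.linalg.norm(X - self.c0, axis=1)
        d1 = np.linalg.norm(X - self.c1, axis=1)
        return (d1 < d0).astype(int)

class BinaryRelevance:
    # One independent binary classifier per label, each trained on all of
    # the training data; label correlations are ignored, as the text notes.
    def fit(self, X, Y):
        self.models = [NearestCentroidBinary().fit(X, Y[:, j])
                       for j in range(Y.shape[1])]
        return self

    def predict(self, X):
        return np.column_stack([m.predict(X) for m in self.models])

X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
Y = np.array([[0, 1], [0, 1], [1, 0], [1, 0]])
print(BinaryRelevance().fit(X, Y).predict(X))  # recovers Y on this toy set
```

Because each column of Y is modeled in isolation, a label that co-occurs with another (as in this toy set, where the two labels are mutually exclusive) gives BR no benefit from that structure, which is the limitation the snippet points out.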
“…It should, however, be noted that most of these research works focus on a limited or specific number of species. In post-processing, i.e., after microbe segmentation, the segmented images are subjected to feature extraction [26] and selection [27]. Recent developments [22,28,29] are based on extracting morphological features to build the identification system.…”
Section: Introduction (mentioning)
confidence: 99%