2020
DOI: 10.1109/access.2020.2982536
Multilabel Feature Selection Using Relief and Minimum Redundancy Maximum Relevance Based on Neighborhood Rough Sets

Abstract: Recently, multilabel classification has attracted increasing interest in machine learning and artificial intelligence. However, the sample distances used in most Relief methods can render heterogeneous or similar samples abnormal when those distances become very large. Besides, the classification margin used as a neighborhood radius in some reduction algorithms may be meaningless when the margin is too large. To overcome these drawbacks, this paper presents a multilabel feature selection method using the improved Relief a…
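For orientation, the sketch below shows the classic single-label Relief weight update that the abstract's improved method builds on; it is not the paper's multilabel variant, and all names (relief_weights, the toy arrays) are illustrative assumptions. Dividing each feature's contribution by its value range is one common way to keep a single large-ranged feature from dominating the distance, the failure mode the abstract alludes to.

```python
# A minimal sketch of the classic (single-label) Relief weight update --
# NOT the paper's improved multilabel variant. Names are illustrative.
import numpy as np

def relief_weights(X, y, n_iters=100, rng=None):
    """Reward features that separate a sample from its nearest miss
    (different class) and penalize separation from its nearest hit
    (same class)."""
    rng = np.random.default_rng(rng)
    n_samples, n_features = X.shape
    # Scale per-feature differences by the feature's range so no single
    # large-ranged feature dominates the distance.
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0
    w = np.zeros(n_features)
    for _ in range(n_iters):
        i = rng.integers(n_samples)
        diffs = np.abs(X - X[i]) / span           # per-feature distances
        dist = diffs.sum(axis=1)
        dist[i] = np.inf                          # exclude the sample itself
        same = (y == y[i])
        same[i] = False
        hit = np.argmin(np.where(same, dist, np.inf))
        miss = np.argmin(np.where(~same, dist, np.inf))
        w += (diffs[miss] - diffs[hit]) / n_iters
    return w

# Toy usage: two informative features, one noise feature.
X = np.array([[0.0, 0.1, 5.0], [0.1, 0.0, 9.0],
              [1.0, 0.9, 7.0], [0.9, 1.0, 1.0]])
y = np.array([0, 0, 1, 1])
print(relief_weights(X, y, n_iters=50, rng=0))
```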

Cited by 16 publications (13 citation statements)
References 45 publications (76 reference statements)
“…Similarly, the appropriate connection parameters on the Health, Computers, Yeast, Business, and Reference datasets have been achieved in Section IV.B. Following the experiments provided in [3], [43], [49], the five datasets are selected from Table 3 for comparison in this experiment. The values of the six indices on the five multilabel datasets are shown in Tables 8-12, respectively. As seen from Table 8, MFS-MIRF achieves the best values of the AP, HL, and OE indices.…”
Section: Classification Results of Various Feature Selection Methods
confidence: 99%
“…According to the experiments [45], a classifier is trained on the simplified training datasets, and the classification results are usually obtained on the simplified test datasets. Therefore, the datasets in Table 3 will be divided into training datasets and test datasets; the training datasets are used for feature selection, while the test datasets evaluate the classification performance of the various methods on different classifiers [3], including MLKNN, Lazy, Rules, Trees, and Bayes. To evaluate the effectiveness of our algorithm in multilabel classification, six metrics are employed: the number of features selected (N), Average Precision (AP), Hamming Loss (HL), Ranking Loss (RL), One Error (OE), and Coverage (CV) [2].…”
Section: Experiments Preparation
confidence: 99%
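Since the excerpt above lists the six evaluation metrics, the sketch below computes three of them (Hamming Loss, One Error, Coverage) under their standard multilabel definitions; the cited work may use equivalent but differently normalized forms, and the function names and toy data are assumptions for illustration.

```python
# A minimal sketch of Hamming Loss, One Error, and Coverage using their
# standard multilabel definitions. Y is the 0/1 ground-truth label matrix;
# S holds real-valued label scores. All names here are illustrative.
import numpy as np

def hamming_loss(Y, Y_pred):
    """Fraction of label slots predicted incorrectly."""
    return np.mean(Y != Y_pred)

def one_error(Y, S):
    """Fraction of samples whose top-ranked label is not relevant."""
    top = np.argmax(S, axis=1)
    return np.mean(Y[np.arange(len(Y)), top] == 0)

def coverage(Y, S):
    """Average rank depth needed to cover all relevant labels
    (0-based, as in the common MLKNN convention)."""
    order = np.argsort(-S, axis=1)     # labels sorted by descending score
    ranks = np.argsort(order, axis=1)  # rank position of each label
    worst = np.where(Y == 1, ranks, -1).max(axis=1)
    return np.mean(worst)

# Toy usage: 3 samples, 4 labels.
Y = np.array([[1, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 1, 0, 1]])
S = np.array([[0.9, 0.2, 0.8, 0.1],
              [0.3, 0.7, 0.6, 0.2],
              [0.5, 0.9, 0.1, 0.4]])
print(hamming_loss(Y, (S > 0.5).astype(int)))
print(one_error(Y, S))
print(coverage(Y, S))
```

Smaller is better for all three quantities, which matches how the excerpt reports HL and OE alongside AP in Table 8.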