2019
DOI: 10.3390/e21020138

A Neighborhood Rough Sets-Based Attribute Reduction Method Using Lebesgue and Entropy Measures

Abstract: For continuous numerical data sets, neighborhood rough sets-based attribute reduction is an important step for improving classification performance. However, most traditional reduction algorithms can only handle finite sets and yield low accuracy and high cardinality. In this paper, a novel attribute reduction method using Lebesgue and entropy measures in neighborhood rough sets is proposed, which can deal with continuous numerical data whilst maintaining the original classification i…

Cited by 21 publications (19 citation statements)
References 56 publications (104 reference statements)
“…The values of F_F are displayed in Table 30. The critical value of F(5, 20) for α = 0.10 is 3.21, so the null hypothesis is rejected at α = 0.10. In addition, for the Bonferroni-Dunn test, q_α = 2.326 at a significance level of α = 0.10, giving CD = 2.752 (s = 5 and T = 6).…”
Section: G. Statistical Analysis
confidence: 96%
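The quoted CD value can be recomputed with Demšar's Bonferroni-Dunn formula, CD = q_α · sqrt(k(k+1)/(6N)). This is a hedged sketch, not the paper's code: the mapping k = 6 compared methods and N = 5 data sets is an assumption inferred from the reported degrees of freedom F(5, 20); the function name is illustrative.

```python
from math import sqrt

def critical_difference(q_alpha, k, n_datasets):
    """Bonferroni-Dunn critical difference for mean-rank comparisons
    (Demšar's formula); k methods compared over n_datasets data sets."""
    return q_alpha * sqrt(k * (k + 1) / (6 * n_datasets))

# Assumed parameters: q_alpha = 2.326 at alpha = 0.10, k = 6, N = 5
cd = critical_difference(q_alpha=2.326, k=6, n_datasets=5)
print(round(cd, 3))  # 2.752, matching the value quoted above
```

Two methods whose mean ranks differ by more than this CD are then considered significantly different at α = 0.10.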
“…Improved technologies have been developed and rapid progress has been made in feature selection for multilabel data [2], [3]. Feature selection models can be categorized into filter, wrapper, and embedded methods [4], [5]. Filter methods select features according to the intrinsic properties of data sets, such as distance, dependency, and information gain [6].…”
Section: With the Development of Data Processing Technologies in Mach…
confidence: 99%
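The filter methods mentioned in this excerpt score each feature from the data alone, before any classifier is trained. A minimal sketch of one such intrinsic score, information gain of a discrete feature with respect to the class label; the function names and the toy data are illustrative assumptions, not from the cited work:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy H(labels) in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """H(labels) - H(labels | feature): higher means more informative."""
    n = len(labels)
    conditional = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        conditional += (len(subset) / n) * entropy(subset)
    return entropy(labels) - conditional

# Toy example: the feature perfectly predicts the label, so the gain
# equals the full label entropy H = 1 bit.
print(information_gain([0, 0, 1, 1], ["a", "a", "b", "b"]))  # 1.0
```

A filter method would compute such a score for every feature and keep the top-ranked ones, independently of the downstream classifier.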
“…(1) Leukemia1 dataset consists of 7129 genes and 72 samples with two subtypes: patients and healthy people (Sun et al, 2019a). (2) Leukemia2 dataset consists of 5327 genes and 72 samples with three subtypes: ALL-T (acute lymphoblastic leukemia, T-cell), ALL-B (acute lymphoblastic leukemia, B-cell), and AML (acute myeloid leukemia) (Dong et al, 2018).…”
Section: Gene Expression Data Sets
confidence: 99%
“…Even thousands of attributes may be acquired in some real-world databases. To shorten processing time and obtain better generalization, the attribute reduction problem has attracted increasing attention in recent years [5, 7, 8].…”
Section: Introduction
confidence: 99%