2021
DOI: 10.1109/tfuzz.2020.2989098

Feature Selection Using Fuzzy Neighborhood Entropy-Based Uncertainty Measures for Fuzzy Neighborhood Multigranulation Rough Sets

Cited by 183 publications (60 citation statements)
References 45 publications
“…In this subsection, to verify the classification effectiveness of the FNSIJE-KS method on UCI datasets, we compare the size and classification accuracy of the feature subsets selected by FNSIJE-KS with those of three existing methods: FNRS (based on the fuzzy neighborhood rough set) [32], FNCE (based on fuzzy neighborhood conditional entropy) [37], and FNPME-FS (based on fuzzy neighborhood pessimistic multigranulation entropy) [36].…”
Section: Classification Results of the UCI Datasets
confidence: 99%
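As a rough illustration of the comparison protocol described in this statement, the sketch below runs several selectors on a UCI-style dataset and reports the size of each selected subset. The rough-set-based methods (FNRS, FNCE, FNPME-FS, FNSIJE-KS) are not reproduced here; a mutual-information ranking from scikit-learn stands in for them, and the method names and subset sizes are purely illustrative assumptions.

```python
from sklearn.datasets import load_wine
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = load_wine(return_X_y=True)

def make_selector(k):
    """Stand-in selector: keep the k features with the highest mutual
    information with the class label (not the rough-set-based methods)."""
    def select(X, y):
        return SelectKBest(mutual_info_classif, k=k).fit(X, y).get_support(indices=True)
    return select

# Hypothetical registry; the real FNRS/FNCE/FNPME-FS/FNSIJE-KS selectors
# would replace these stand-ins.
methods = {"FNRS-like": make_selector(8),
           "FNCE-like": make_selector(6),
           "FNSIJE-KS-like": make_selector(5)}

for name, select in methods.items():
    subset = select(X, y)
    print(f"{name}: {len(subset)} features -> {list(subset)}")
```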
“…The second part examines the classification effectiveness of FNSIJE-KS. Tables 8 and 9 list the classification accuracy of the feature subsets selected by FNSIJE-KS and by three other feature selection methods (FNRS [32], FNCE [37], and FNPME-FS [36]) under the KNN and CART classifiers, respectively. In Tables 8 and 9, the bolded numbers indicate that the classification accuracy of the selected feature subset is the best among the compared methods.…”
Section: Classification Results of the UCI Datasets
confidence: 99%
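A minimal sketch of the evaluation step this statement describes: score a chosen feature subset with KNN and CART (scikit-learn's DecisionTreeClassifier) by cross-validation. The dataset and the subset indices below are illustrative assumptions, not the values reported in Tables 8 and 9.

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)
subset = [0, 6, 9, 11, 12]  # hypothetical selected feature indices

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=3)),
                  ("CART", DecisionTreeClassifier(random_state=0))]:
    # Accuracy of the classifier restricted to the selected subset.
    acc = cross_val_score(clf, X[:, subset], y, cv=10).mean()
    print(f"{name}: mean 10-fold accuracy = {acc:.4f}")
```

Reporting both a distance-based classifier (KNN) and a tree-based one (CART) is a common way to check that a selected subset is not tuned to a single learner.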
“…It is intended to find the concealed linear structures in the original data [4, 5]. To pass from a linear to a nonlinear function, the following generalization can be made [6]: map the input vectors into a high-dimensional feature space H (where H is a Hilbert space) through some nonlinear mapping, and solve the optimization problem in the space H. With a suitable kernel function, the nonlinear mapping never has to be evaluated explicitly: inner products in H are computed through the kernel, so the linear method is extended with kernel techniques.…”
Section: Introduction
confidence: 99%
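The kernel idea in this excerpt can be made concrete with a small numeric check: for a degree-2 polynomial kernel, the inner product of explicitly mapped vectors in H equals the kernel evaluated in the input space, so the nonlinear mapping phi never has to be formed. The vectors and the choice of kernel below are illustrative.

```python
import numpy as np

def phi(x):
    """Explicit degree-2 feature map for 2-D input:
    (x1^2, sqrt(2)*x1*x2, x2^2)."""
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

def kernel(x, z):
    """Homogeneous polynomial kernel of degree 2, k(x, z) = (x . z)^2,
    evaluated entirely in the original input space."""
    return np.dot(x, z) ** 2

x = np.array([1.0, 2.0])
z = np.array([0.5, -1.0])

print(np.dot(phi(x), phi(z)))  # inner product in feature space H: 2.25
print(kernel(x, z))            # same value via the kernel: 2.25
```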
“…However, there exists the problem of information loss during the conversion. The latter strategy, algorithm adaptation, extends specific learning methods to deal with multilabel data directly [13], including neural networks [14]-[16], mutual information [17]-[19], ReliefF [20], [21], etc. Among these, deep learning based on neural network models [22]-[24] performs representation learning on the data and has achieved many results in image processing and natural language processing [25], [26]. Unfortunately, its time and space costs are high.…”
Section: Introduction
confidence: 99%
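As a sketch of the mutual-information route mentioned in this excerpt, one simple scheme for multilabel data scores each feature against each label with scikit-learn's mutual_info_classif and ranks features by the average score across labels. The synthetic data and the averaging rule are illustrative assumptions, not the cited algorithms.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # 200 samples, 10 features
# Three binary labels, each driven by one of the first three features.
Y = (X[:, :3] + rng.normal(scale=0.5, size=(200, 3)) > 0).astype(int)

# Mutual information of every feature with every label, averaged over labels.
scores = np.mean([mutual_info_classif(X, Y[:, j], random_state=0)
                  for j in range(Y.shape[1])], axis=0)
ranking = np.argsort(scores)[::-1]
print("features ranked by average MI:", ranking)
```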