2018
DOI: 10.1016/j.patrec.2018.06.005

Feature selection considering the composition of feature relevancy

Cited by 82 publications (32 citation statements)
References 12 publications
“…In this section, some commonly used feature selection approaches, such as a new feature selection (NFS) [74], the unsupervised feature selection (UFS) [75], and mutual information maximization (DRJMIM) [76], are utilized to validate the effectiveness of the proposed method. Fig. 10 shows the average number of features selected by the novel feature selection methods.…”
Section: Comparison With the Novel Feature Selection Methods
Mentioning, confidence: 99%
“…In traditional single-label feature selection methods, the average classification accuracy is usually used to evaluate the already-selected feature subset [2], [4], [10]- [13]. However, the evaluation metrics in multi-label feature selection methods are much more complicated than single-label feature selection methods.…”
Section: B. Multi-label Evaluation Metrics
Mentioning, confidence: 99%
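The contrast this excerpt draws, between single-label average accuracy and the richer metrics needed for multi-label selection, can be illustrated with a minimal sketch. The metric implementations and toy data below are my own illustrative assumptions, not taken from the cited papers:

```python
# Single-label evaluation: one accuracy number per prediction.
def single_label_accuracy(y_true, y_pred):
    """Fraction of exactly correct single-label predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Multi-label evaluation: the same predictions can score very
# differently depending on the metric chosen.
def hamming_loss(Y_true, Y_pred):
    """Fraction of individual label slots predicted wrongly."""
    n, n_labels = len(Y_true), len(Y_true[0])
    wrong = sum(t != p
                for yt, yp in zip(Y_true, Y_pred)
                for t, p in zip(yt, yp))
    return wrong / (n * n_labels)

def subset_accuracy(Y_true, Y_pred):
    """Fraction of samples whose entire label set is exactly right."""
    return sum(yt == yp for yt, yp in zip(Y_true, Y_pred)) / len(Y_true)

# Toy example: one wrong label slot out of six.
Y_true = [[1, 0, 1], [0, 1, 0]]
Y_pred = [[1, 0, 0], [0, 1, 0]]
print(hamming_loss(Y_true, Y_pred))     # small: 1 of 6 slots wrong
print(subset_accuracy(Y_true, Y_pred))  # harsher: 1 of 2 samples exact
```

The same prediction looks nearly perfect under Hamming loss but only half right under subset accuracy, which is why multi-label feature selection cannot lean on a single averaged accuracy the way single-label methods do.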
“…The filter approach is computationally efficient but usually yields lower prediction accuracy than the wrapper approach. Recent studies on this approach have focused on maximizing variable relevancy while minimizing variable redundancy based on information theory [28][29][30]. The wrapper approach evaluates variable subsets by building prediction models directly on the subsets using a learning algorithm [31].…”
Section: Related Work
Mentioning, confidence: 99%
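The filter strategy this excerpt describes, maximizing feature relevancy while minimizing redundancy via information theory, can be sketched as a greedy selector in the spirit of mRMR-style methods. The greedy score, the plug-in mutual-information estimator, and the toy data below are illustrative assumptions, not the exact algorithm of the cited works:

```python
# Sketch of a relevance-minus-redundancy filter selector (mRMR spirit).
# Assumes discrete-valued features; MI is estimated from empirical counts.
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in estimate of I(X; Y) in bits from two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def mrmr_select(X, y, k):
    """Greedily pick k feature columns maximizing I(f; y) minus the
    mean MI between f and the already-selected features."""
    selected, remaining = [], list(range(len(X)))
    while remaining and len(selected) < k:
        def score(j):
            relevance = mutual_information(X[j], y)
            redundancy = (sum(mutual_information(X[j], X[s])
                              for s in selected) / len(selected)
                          if selected else 0.0)
            return relevance - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: feature 0 matches y, feature 2 duplicates feature 0,
# feature 1 is weakly informative noise.
y = [0, 1, 0, 1, 0, 1]
X = [[0, 1, 0, 1, 0, 1],
     [0, 0, 1, 1, 0, 1],
     [0, 1, 0, 1, 0, 1]]
print(mrmr_select(X, y, 2))  # feature 0 first; its duplicate is penalized
```

The redundancy penalty is what separates this from plain relevance ranking: feature 2 is maximally relevant on its own, but once feature 0 is chosen it contributes nothing new, so the greedy score drops it in favor of a less redundant feature.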