Class-specific mutual information variation for feature selection
2018 | DOI: 10.1016/j.patcog.2018.02.020

Cited by 122 publications (35 citation statements)
References 25 publications

“…The dependency between the original signal and the subsignal obtained by VMD can be characterized by MI. MI reflects the degree of correlation between two random variables, and its ability to distinguish correlation is stronger than that of the correlation coefficient method [52]. The mutual information of two discrete random variables is defined as:…”
Section: B. Mutual Information Variational Mode Decomposition
confidence: 99%
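The excerpt above cuts off before the formula it introduces. For two discrete random variables X and Y with joint distribution p(x, y) and marginals p(x) and p(y), the standard definition of mutual information that such passages refer to is:

```latex
I(X;Y) = \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)}
```

I(X;Y) is zero when X and Y are independent and grows as the joint distribution departs from the product of the marginals, which is why it can detect nonlinear dependence that a linear correlation coefficient misses.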
“…The filter approach is computationally efficient but usually yields lower prediction accuracy than the wrapper approach. Recent studies on this approach have focused on maximizing variable relevancy while minimizing variable redundancy based on information theory [28][29][30]. The wrapper approach evaluates variable subsets by building prediction models directly on the subsets using a learning algorithm [31].…”
Section: Related Work
confidence: 99%
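The excerpt describes filter methods that maximize relevance while minimizing redundancy. Below is a minimal Python sketch of one criterion in that family, a greedy mRMR-style selector; the function name `mrmr_select` and the assumption that features have been discretized are illustrative, and this is a generic baseline rather than the specific method of the cited paper.

```python
# Greedy max-relevance / min-redundancy (mRMR-style) filter selection.
# Assumes X holds discretized (binned) feature values, since
# mutual_info_score treats its inputs as categorical.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def mrmr_select(X, y, k):
    """Greedily pick k features with high MI to the label y and low
    average MI to the features already selected."""
    n_features = X.shape[1]
    relevance = mutual_info_classif(X, y, discrete_features=True)  # I(f_i; y)
    selected = [int(np.argmax(relevance))]  # start with the most relevant
    while len(selected) < k:
        best_score, best_j = -np.inf, None
        for j in range(n_features):
            if j in selected:
                continue
            # Redundancy: mean MI between the candidate and selected features.
            redundancy = np.mean([mutual_info_score(X[:, j], X[:, s])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best_score, best_j = score, j
        selected.append(best_j)
    return selected
```

Because it never trains a model, the selector runs in one pass over feature pairs, which is the efficiency advantage of filters that the excerpt contrasts with wrappers.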
“…High-dimensional multi-label data sets often contain a large number of irrelevant and redundant features that bring many disadvantages to multi-label learning, such as computational burden and over-fitting [10,11,12]. To address this problem, many multi-label feature selection techniques have been proposed to select the informative feature subset from the original feature set and to discard irrelevant and redundant features [13,14,15].…”
Section: Introduction
confidence: 99%
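As a concrete illustration of the multi-label filtering the excerpt describes, the sketch below scores each feature by its average mutual information across the label columns and keeps the top-scoring subset. The helper `multilabel_mi_scores` is a hypothetical name, and averaging per-label MI is a simple baseline assumption, not the technique of any cited work.

```python
# Simple multi-label filter baseline: rank features by their mean MI
# with the individual label columns, then keep the top k.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def multilabel_mi_scores(X, Y):
    """X: (n_samples, n_features); Y: (n_samples, n_labels) binary matrix.
    Returns each feature's MI averaged over the labels."""
    scores = np.zeros(X.shape[1])
    for l in range(Y.shape[1]):
        scores += mutual_info_classif(X, Y[:, l])
    return scores / Y.shape[1]

# Usage: keep the k best features, discard the rest.
# top_k = np.argsort(multilabel_mi_scores(X, Y))[::-1][:k]
```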