2015
DOI: 10.1016/j.ins.2015.05.019

Feature selection for clustering using instance-based learning by exploring the nearest and farthest neighbors

Cited by 15 publications (12 citation statements). References 53 publications (47 reference statements).
“…After defining S_i, the salient features are identified based on their ability to distinguish between the nearest and farthest neighbours. The j-th feature s_j is more salient for the i-th data point if it is more dependent on the target variable z and less redundant in comparison with the other features (Chen, ). Accordingly, the salience vector for the i-th data point is defined as u_i = [u_i1 … u_ij … u_if], where the salience of the j-th feature is calculated as u_ij = D(s_j, c) − R(s_j), where D(.…”
Section: Methods
confidence: 99%
“…In the last two equations, MI(·, ·) denotes the mutual information criterion, calculated for the two discrete random variables X and Y (Chen, ).…”
Section: Methods
confidence: 99%
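The excerpts above describe a salience score of the form u_ij = D(s_j, c) − R(s_j), with both the dependency term D and the redundancy term R measured via mutual information between discrete variables. The following is a minimal sketch of that idea, not the authors' implementation: it assumes D is the MI between a feature and the target, and R is the mean MI between that feature and the remaining features; the names `mutual_info` and `salience` are illustrative.

```python
# Hedged sketch of an MI-based salience score: dependency on the target
# minus average redundancy with the other features. Assumes discrete data.
import math
from collections import Counter

def mutual_info(x, y):
    """MI(X; Y) in nats for two equal-length sequences of discrete values."""
    n = len(x)
    px = Counter(x)            # marginal counts of X
    py = Counter(y)            # marginal counts of Y
    pxy = Counter(zip(x, y))   # joint counts of (X, Y)
    mi = 0.0
    for (a, b), c in pxy.items():
        # p(a,b) * log( p(a,b) / (p(a) p(b)) ), expressed with raw counts
        mi += (c / n) * math.log(c * n / (px[a] * py[b]))
    return mi

def salience(features, target):
    """Score u_j = D(s_j, target) - R(s_j) for each feature s_j.

    `features` is a list of f discrete feature columns; R(s_j) is taken
    here as the mean MI between s_j and the other features (an assumption).
    """
    f = len(features)
    scores = []
    for j in range(f):
        d = mutual_info(features[j], target)
        others = [mutual_info(features[j], features[k])
                  for k in range(f) if k != j]
        r = sum(others) / len(others) if others else 0.0
        scores.append(d - r)
    return scores
```

With this score, a feature that is perfectly informative about the target but independent of the other features receives a high value, while an uninformative feature scores near zero, matching the intent of the dependency-minus-redundancy formulation in the excerpt.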
See 3 more Smart Citations