2002
DOI: 10.1109/34.990133

Unsupervised feature selection using feature similarity

Abstract: In this article, we describe an unsupervised feature selection algorithm suitable for data sets that are large in both dimension and size. The method is based on measuring similarity between features, whereby redundant features are removed. It requires no search and is therefore fast. A new feature similarity measure, called the maximum information compression index, is introduced. The algorithm is generic in nature and has the capability of multiscale representation of data sets. The superiority of the a…

Cited by 1,280 publications (614 citation statements)
References 20 publications

Citation statements, ordered by relevance:

“…The value of λ is zero when the features are linearly dependent and increases as the amount of dependence decreases. This index has been shown to outperform several conventional feature selection approaches, such as branch and bound, sequential forward search, sequential floating forward search, and stepwise clustering, in both accuracy and CPU time [11]. See [11] for a detailed description of the S-Index.…”
Section: Dimension Reduction for the Spatial Features
Mentioning; confidence: 99%
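
For context, the index quoted here (λ, written λ₂(x, y) in [11]) is the smallest eigenvalue of the 2x2 covariance matrix of the feature pair. Reconstructed from [11] (the notation below is ours):

    \lambda_2(x, y) = \tfrac{1}{2}\Bigl(\sigma_x^2 + \sigma_y^2 - \sqrt{(\sigma_x^2 + \sigma_y^2)^2 - 4\,\sigma_x^2 \sigma_y^2 \bigl(1 - \rho(x, y)^2\bigr)}\Bigr)

where \sigma_x^2 and \sigma_y^2 are the feature variances and \rho(x, y) is their correlation coefficient. When \rho^2 = 1 (exact linear dependence), the expression under the square root collapses to (\sigma_x^2 + \sigma_y^2)^2, giving \lambda_2 = 0, which is the property described in the quote above.
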
“…This method does not need any search and is therefore fast. A new feature similarity measure, called the maximum information compression index (MICI), is introduced in [11]. Let Σ be the covariance matrix of the random variables x and y.…”
Section: Dimension Reduction for the Spatial Features
Mentioning; confidence: 99%
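
Since λ₂ is just the smaller eigenvalue of that 2x2 matrix Σ, it is straightforward to compute. A minimal sketch (the function name and the toy data are ours, not from the cited papers):

    import numpy as np

    def mici(x, y):
        # Maximum information compression index of two features [11]:
        # the smallest eigenvalue of their 2x2 covariance matrix.
        # Zero exactly when x and y are linearly dependent.
        cov = np.cov(x, y)                 # 2x2 covariance matrix Sigma
        return np.linalg.eigvalsh(cov)[0]  # eigvalsh sorts ascending

    rng = np.random.default_rng(0)
    x = rng.normal(size=1000)
    print(mici(x, 2.0 * x))                # ~0: y is a scaled copy of x
    print(mici(x, rng.normal(size=1000)))  # larger: independent features

In [11] this pairwise dissimilarity drives a clustering of the features themselves, with one representative kept per cluster, which is why no combinatorial search over feature subsets is needed.
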
“…Information score (IS): the authors in [27] present an information measure for unlabeled data, shown in Equation 2. The RBF similarity matrix S is used to calculate the entropy of the data, measuring its randomness.…”
Section: Feature Importance Measures
Mentioning; confidence: 99%
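
Equation 2 itself is not reproduced in this report. As a hedged illustration only, a common similarity-based entropy for unlabeled data takes the form E = -Σ_{i,j} [S_ij log S_ij + (1 - S_ij) log(1 - S_ij)]; the sketch below assumes that form, and the bandwidth sigma and the function name are ours:

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    def information_score(X, sigma=1.0, eps=1e-12):
        # RBF similarity between all pairs of samples, then the
        # entropy of the similarity matrix as a randomness measure.
        # Assumed form; Equation 2 of [27] may differ in detail.
        d2 = squareform(pdist(X, "sqeuclidean"))  # pairwise squared distances
        S = np.exp(-d2 / (2.0 * sigma ** 2))      # RBF similarity matrix S
        S = np.clip(S, eps, 1.0 - eps)            # keep the logs finite
        return -np.sum(S * np.log(S) + (1.0 - S) * np.log(1.0 - S))

In measures of this kind, a feature's importance is typically scored by how much E changes when that feature is left out of the distance computation.
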
“…Many tracking algorithms have been proposed and implemented in applications to overcome these problems. These algorithms fall into three main categories: feature-based tracking algorithms (Mitra et al., 2002; Tissainayagam and Suter, 2005; Nickels and Hutchinson, 2002), contour-based tracking algorithms (Paragios and Deriche, 2000; Freedman and Zhang, 2004; Linlin et al., 2009), and region-based tracking algorithms (Bascle and Deriche, 1995; Jepson et al., 2003). In the last category, the region content is either used directly for template tracking or represented by a nonparametric description, such as a histogram.…”
Section: Introduction
Mentioning; confidence: 99%