1969
DOI: 10.1109/tit.1969.1054354
Nonparametric feature selection

Cited by 48 publications (14 citation statements)
References 3 publications
“…These measures were designed to capture such differences without assuming the distribution or any prior knowledge of the samples. Therefore, statistical distance measures (that are also computationally more complicated), such as Mahalanobis [51], Kolmogorov [1], Bhattacharyya [7], Bayesian distance [8], Chernoff [12], Matsusita [52], and divergence [40], [46], and those utilized in pattern recognition [50], [57], have not been used in our experiment.…”
Section: A. Quantization (mentioning)
confidence: 99%
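For context on the excerpt above: distance measures of this kind are usually computed from a fitted distributional model of each class, which is why the authors set them aside. The sketch below is not from the cited works; the function name and synthetic data are illustrative only. It shows the standard closed form of the Bhattacharyya distance under a Gaussian fit to each class.

    import numpy as np

    def bhattacharyya_gaussian(x1, x2):
        # Bhattacharyya distance between two sample sets under a Gaussian fit
        # to each class: 1/8 (m1-m2)' S^-1 (m1-m2) + 1/2 ln(|S| / sqrt(|S1||S2|)),
        # with S the average of the two class covariances.
        mu1, mu2 = x1.mean(axis=0), x2.mean(axis=0)
        s1 = np.cov(x1, rowvar=False)
        s2 = np.cov(x2, rowvar=False)
        s = 0.5 * (s1 + s2)
        diff = mu1 - mu2
        mean_term = 0.125 * diff @ np.linalg.solve(s, diff)
        _, logdet_s = np.linalg.slogdet(s)
        _, logdet_s1 = np.linalg.slogdet(s1)
        _, logdet_s2 = np.linalg.slogdet(s2)
        cov_term = 0.5 * (logdet_s - 0.5 * (logdet_s1 + logdet_s2))
        return mean_term + cov_term

    # Illustrative synthetic data only (not from the cited works).
    rng = np.random.default_rng(0)
    a = rng.normal(0.0, 1.0, size=(200, 3))
    b = rng.normal(1.0, 1.5, size=(200, 3))
    print(bhattacharyya_gaussian(a, b))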
“…Several supervised linear dimensionality reduction methods exist in the literature. We can group these methods into three broad categories: those that separate likelihood functions according to some distance or divergence [38]- [44], those that try to make the probability of the labels given the measurements and the probability of the labels given the dimensionality-reduced measurements equal [45]- [50] and those that attempt to minimize a specific classification or regression objective [12], [51]- [54].…”
Section: A. Relationship to Prior Work (mentioning)
confidence: 99%
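Fisher discriminant analysis (FDA), discussed in the next excerpt, is the canonical member of the first of these categories: it separates the class-conditional likelihoods under a Gaussian, shared-covariance assumption. A minimal two-class sketch, with illustrative names and synthetic data only:

    import numpy as np

    def fisher_direction(x, y):
        # Two-class Fisher discriminant direction w = Sw^{-1} (mu1 - mu0),
        # where Sw is the within-class scatter matrix. Assumes y is in {0, 1}.
        x0, x1 = x[y == 0], x[y == 1]
        mu0, mu1 = x0.mean(axis=0), x1.mean(axis=0)
        sw = (x0 - mu0).T @ (x0 - mu0) + (x1 - mu1).T @ (x1 - mu1)
        w = np.linalg.solve(sw, mu1 - mu0)
        return w / np.linalg.norm(w)

    # Illustrative synthetic data only.
    rng = np.random.default_rng(1)
    x = np.vstack([rng.normal(0.0, 1.0, size=(100, 5)),
                   rng.normal(0.8, 1.0, size=(100, 5))])
    y = np.array([0] * 100 + [1] * 100)
    z = x @ fisher_direction(x, y)   # one-dimensional supervised projection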
“…The method of [38], like FDA, maximally separates the clusters of the different labels but does not make the strong Gaussian assumption. Instead, it performs kernel density estimation of the likelihoods and separates those estimates.…”
Section: A. Relationship to Prior Work (mentioning)
confidence: 99%
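As a rough illustration of that idea (an assumed reading, not the exact criterion of [38] or of the original paper), the sketch below estimates the two class-conditional densities along a candidate one-dimensional projection with Gaussian kernel density estimates and scores their separation by the squared L2 distance between the estimates.

    import numpy as np
    from scipy.stats import gaussian_kde
    from scipy.integrate import trapezoid

    def kde_separation(x, y, w, grid_points=512):
        # Project onto w, fit a Gaussian kernel density estimate per class,
        # and score separation as the squared L2 distance between the two
        # estimated class-conditional densities (no Gaussian assumption).
        z = x @ w
        z0, z1 = z[y == 0], z[y == 1]
        kde0, kde1 = gaussian_kde(z0), gaussian_kde(z1)
        grid = np.linspace(z.min() - 3 * z.std(), z.max() + 3 * z.std(), grid_points)
        diff = kde0(grid) - kde1(grid)
        return trapezoid(diff ** 2, grid)

A supervised dimensionality reduction method in this family would then search over projection vectors w, for example by comparing candidate directions such as the Fisher direction above, to maximize such a nonparametric separation score.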
“…Patrick and Fischer [2] represent an early example of using kernel density estimators to project high-dimensional measurements to low-dimensional representations, albeit in a different context. Our approach combines the flexibility of nonparametric estimates with information-preserving subspaces in a dependency scenario.…”
Section: Introduction (mentioning)
confidence: 99%