On the optimal number of features in the classification of multivariate Gaussian data (1978)
DOI: 10.1016/0031-3203(78)90008-0

Cited by 100 publications (43 citation statements: 2 supporting, 39 mentioning, 2 contrasting)
References 22 publications
“…In practice, feature selection must proceed from sample data, which leads to the well-known peaking phenomenon, i.e., the tendency of classification performance to improve with an increasing number of features only up to a point, beyond which additional features degrade classification accuracy. [12][13][14][15][16] Therefore, employing too many features in a small-sample setting yields poorer classification accuracy, thereby creating the need for feature selection. This raises a critical question: can one expect a feature selection algorithm to yield a feature set whose error is close to that of an optimal feature set?…”
Section: Cancer Informatics (mentioning)
confidence: 99%
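The peaking phenomenon this statement describes is easy to reproduce on synthetic Gaussian data. The sketch below is illustrative only (the decaying class-mean separation and the sample sizes are assumptions, not taken from the cited paper): a linear discriminant is trained on a fixed small sample while the feature count grows, and the test error typically falls and then rises again.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
d_max = 40
# Assumption: feature i shifts the class-1 mean by 1/i, so later features
# carry progressively less discriminative information.
delta = 1.0 / np.arange(1, d_max + 1)

def sample(n_per_class):
    X0 = rng.standard_normal((n_per_class, d_max))          # class 0
    X1 = rng.standard_normal((n_per_class, d_max)) + delta  # class 1, shifted
    return np.vstack([X0, X1]), np.repeat([0, 1], n_per_class)

X_tr, y_tr = sample(10)      # deliberately small training sample
X_te, y_te = sample(2000)    # large test sample for a stable error estimate

for d in (1, 2, 3, 5, 8, 12, 20, 30, 40):
    clf = LinearDiscriminantAnalysis().fit(X_tr[:, :d], y_tr)
    err = 1.0 - clf.score(X_te[:, :d], y_te)
    print(f"{d:2d} features: test error = {err:.3f}")
```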
“…The value t (see section 2.2.2) is determined from the literature [16,12,13,11], in which the optimal dimension size depends on the experimental setup in combination with the LVQ algorithms. These algorithms typically operate so as to preserve neighbourhoods on a network of nodes that encode the feature vector.…”
Section: Feature Vector Extraction (mentioning)
confidence: 99%
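As background for the neighbourhood-preserving behaviour the statement attributes to LVQ, here is a minimal LVQ1 sketch (an assumption about the variant; the quoted work does not specify one): each training sample attracts its nearest prototype when their class labels agree and repels it otherwise, so prototypes settle near class-typical regions of feature space.

```python
import numpy as np

def lvq1(X, y, protos_per_class=2, lr=0.05, epochs=30, seed=0):
    """Train LVQ1 prototypes; returns prototype vectors P and their labels L."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):  # initialise prototypes on random samples of each class
        idx = rng.choice(np.flatnonzero(y == c), protos_per_class, replace=False)
        protos.append(X[idx]); labels += [c] * protos_per_class
    P, L = np.vstack(protos).astype(float), np.array(labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(((P - X[i]) ** 2).sum(axis=1))   # nearest prototype
            step = lr * (X[i] - P[j])
            P[j] += step if L[j] == y[i] else -step        # attract or repel
    return P, L
```

A new point is then classified by the label of its nearest prototype, which is why the dimension size t matters: it fixes the space in which these distances are computed.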
“…The use of feature selection algorithms is motivated by the need for highly precise results, by computational cost, and by a peaking phenomenon often observed when classifiers are trained on a limited set of training samples: as the number of features increases, the classification rate of the classifiers decreases after a peak (10,11).…”
Section: Definition of Features for Detection of Malignant Melanoma (mentioning)
confidence: 99%
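Since this statement motivates feature selection as the remedy for peaking, a greedy forward-selection sketch may be useful (illustrative only; the cited works do not prescribe this exact procedure): features are added one at a time, keeping whichever most improves cross-validated accuracy, and the search stops when no candidate helps.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def forward_select(X, y, cv=5):
    """Greedy forward selection with a cross-validated stopping rule."""
    remaining = list(range(X.shape[1]))
    chosen, best_acc = [], 0.0
    while remaining:
        # score every one-feature extension of the current subset
        scores = [(cross_val_score(LinearDiscriminantAnalysis(),
                                   X[:, chosen + [f]], y, cv=cv).mean(), f)
                  for f in remaining]
        acc, f = max(scores)
        if acc <= best_acc:          # stop once adding features stops helping
            break
        chosen.append(f); remaining.remove(f); best_acc = acc
    return chosen, best_acc
```

The stopping rule is the peaking logic in reverse: the subset stops growing at the empirical peak rather than past it.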