1977
DOI: 10.1109/tc.1977.1674939

A Branch and Bound Algorithm for Feature Subset Selection

Cited by 1,068 publications (506 citation statements)
References 6 publications

Citation statements (ordered by relevance):
“…Four groups of experiments were conducted on the SIMPLIcity dataset: one with all features and no selection, the second with features selected by filter methods such as the Fisher filter [23] and principal component analysis (PCA) [26], the third with features selected by a wrapper method such as SFS, and the last with the best features selected by ESFS. Five types of one-step global classifiers are tested: Multi-Layer Perceptron (neural network, marked MP in the following text), Decision Tree (C4.5), Linear Discriminant Analysis (LDA), K-Nearest Neighbors (K-NN), and multiclass SVM (C-SVC).…”
Section: Results (mentioning)
confidence: 99%
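
The wrapper method (SFS, sequential forward selection) named in this excerpt greedily grows a feature set: at each step it adds the single feature that most improves the wrapped classifier's cross-validated score. Below is a minimal sketch of that idea, assuming scikit-learn is available and using K-NN as the wrapped classifier; the function name select_sfs and all parameter choices are illustrative, not from the cited work.

# Minimal sequential forward selection (SFS) wrapper sketch.
# Assumes scikit-learn; K-NN stands in for any wrapped estimator.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def select_sfs(X, y, n_features):
    """Greedily add the feature that most improves CV accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_features:
        scores = {
            f: cross_val_score(KNeighborsClassifier(n_neighbors=3),
                               X[:, selected + [f]], y, cv=5).mean()
            for f in remaining
        }
        best = max(scores, key=scores.get)  # feature with highest CV score
        selected.append(best)
        remaining.remove(best)
    return selected

Unlike a filter method (e.g., a Fisher score computed once per feature), every candidate here is scored by retraining the classifier, which is what makes wrappers expensive but classifier-aware.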
“…However, several other feature selection or extraction algorithms leading to dimensionality reduction could not be evaluated, owing to the long computation times these methods require on the texture feature vector datasets used in this study. In the future, such algorithms, e.g., exhaustive selection (Jain and Zongker, 1997) and the branch and bound method (Narendra and Fukunaga, 1977), can be applied to the existing feature vector datasets provided they remain within the limits of computational feasibility.…”
Section: Conclusion and Discussion (mentioning)
confidence: 99%
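
The branch and bound method cited here (Narendra and Fukunaga, 1977) searches for the best m-feature subset under a criterion J that is monotone: removing a feature never increases J. Whenever a partial subset already scores no better than the best complete subset found so far, the entire branch below it can be pruned. The sketch below illustrates that idea only; the scatter-based criterion and the names criterion/select_bb are assumptions for illustration, not the paper's exact formulation.

# Branch-and-bound feature subset selection sketch (idea after
# Narendra and Fukunaga, 1977). Requires a monotone criterion:
# J(subset of S) <= J(S), so low-scoring branches can be pruned.
import numpy as np

def criterion(X, y, features):
    # Illustrative monotone criterion: summed squared deviation of the
    # class means from their overall mean, over the chosen features.
    feats = sorted(features)
    means = np.array([X[y == c][:, feats].mean(axis=0) for c in np.unique(y)])
    return float(((means - means.mean(axis=0)) ** 2).sum())

def select_bb(X, y, m):
    """Return the m-feature subset maximizing `criterion`."""
    n = X.shape[1]
    best = {"value": -np.inf, "subset": None}

    def recurse(subset, first_removable):
        if len(subset) == m:                 # leaf: a candidate subset
            v = criterion(X, y, subset)
            if v > best["value"]:
                best["value"], best["subset"] = v, sorted(subset)
            return
        # Remove features in increasing index order so that each
        # subset is visited at most once.
        for f in range(first_removable, n):
            child = subset - {f}
            # Monotonicity: every subset of `child` scores <= J(child),
            # so prune the branch if it cannot beat the current bound.
            if criterion(X, y, child) <= best["value"]:
                continue
            recurse(child, f + 1)

    recurse(set(range(n)), 0)
    return best["subset"], best["value"]

Unlike greedy methods such as SFS, this search returns the J-optimal subset; the pruning is what makes it cheaper than exhaustive enumeration, though, as the excerpt notes, the worst-case cost can still be prohibitive.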
“…These problems include approximate nearest neighbor search [2], Gaussian summation [39], particle smoothing [33], Gaussian process regression [59], clustering [34], feature subset selection [52], and mixture model training [48]. More recently, Gray and Moore proposed using a second tree for problems with large query sets [25], such as all-nearest-neighbors and density estimation [26].…”
Section: Speedups Via Trees (mentioning)
confidence: 99%
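
As one concrete instance of the tree-based speedups this excerpt surveys, a k-d tree answers nearest-neighbor queries by pruning whole regions of space instead of scanning every point. A minimal sketch, assuming SciPy is installed; the data here is synthetic.

# Nearest-neighbor queries via a k-d tree (SciPy), illustrating the
# tree-based speedup over a linear scan of all points.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((10_000, 3))      # reference set
queries = rng.random((5, 3))          # query set

tree = cKDTree(points)                # O(n log n) build
dist, idx = tree.query(queries, k=1)  # prunes subtrees via bounding boxes
print(idx, dist)

The dual-tree variant attributed to Gray and Moore builds a second tree over the query set, so groups of nearby queries can be pruned together in problems like all-nearest-neighbors and density estimation.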