Machine Learning Proceedings 1994
DOI: 10.1016/b978-1-55860-335-6.50043-x
Prototype and Feature Selection by Sampling and Random Mutation Hill Climbing Algorithms

Abstract: With the goal of reducing computational costs without sacrificing accuracy, we describe two algorithms to find sets of prototypes for nearest neighbor classification. Here, the term "prototypes" refers to the reference instances used in a nearest neighbor computation: the instances with respect to which similarity is assessed in order to assign a class to a new data item. Both algorithms rely on stochastic techniques to search the space of sets of prototypes and are simple to implement. The first is a Monte Car…
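The abstract describes the first algorithm as a Monte Carlo method. A minimal sketch of that idea, under assumptions not stated in the truncated abstract: prototype sets are drawn uniformly at random, distance is Euclidean, and accuracy is measured on the same training data. The function names (`mc1`, `one_nn_accuracy`) are illustrative, not from the paper.

```python
import random

def one_nn_accuracy(prototypes, data):
    """Fraction of items whose nearest prototype (squared Euclidean
    distance) carries the same class label."""
    correct = 0
    for x, y in data:
        nearest = min(prototypes,
                      key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
        correct += nearest[1] == y
    return correct / len(data)

def mc1(train, k=3, samples=100, seed=0):
    """Monte Carlo prototype selection (sketch): repeatedly draw a random
    size-k subset of the training set and keep the subset with the best
    1-NN accuracy seen so far."""
    rng = random.Random(seed)
    best, best_acc = None, -1.0
    for _ in range(samples):
        cand = rng.sample(train, k)
        acc = one_nn_accuracy(cand, train)
        if acc > best_acc:
            best, best_acc = cand, acc
    return best, best_acc
```

On well-separated data this typically finds one prototype per class within a handful of samples; the paper's actual evaluation protocol and sample budget are not reproduced here.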

Cited by 347 publications (195 citation statements). References 10 publications.
“…These two methods described in [11] by Skalak are based on stochastic behavior. In each iteration, MC1 uses Monte Carlo sampling to draw a new set of instances and to test its accuracy.…”
Section: Monte Carlo 1 (MC1) and Random Mutation Hill Climbing (RMHC)
confidence: 99%
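The section title above names the paper's second algorithm, random mutation hill climbing over prototype sets. A minimal sketch under the same assumptions as before (Euclidean 1-NN, accuracy measured on the training data, one random swap per step); the mutation scheme here — replace one prototype slot with a random training instance — is an illustrative choice, not necessarily the paper's exact encoding.

```python
import random

def rmhc_prototypes(train, k=3, iters=200, seed=0):
    """Random mutation hill climbing (sketch): start from a random size-k
    prototype set; each iteration swaps one slot for a random training
    instance and keeps the change only if 1-NN accuracy does not drop."""
    rng = random.Random(seed)

    def acc(protos):
        correct = 0
        for x, y in train:
            nearest = min(protos,
                          key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
            correct += nearest[1] == y
        return correct / len(train)

    current = rng.sample(train, k)
    current_acc = acc(current)
    for _ in range(iters):
        cand = list(current)
        cand[rng.randrange(k)] = rng.choice(train)  # mutate one slot
        cand_acc = acc(cand)
        if cand_acc >= current_acc:  # accept ties to keep moving on plateaus
            current, current_acc = cand, cand_acc
    return current, current_acc
```

Unlike MC1, each candidate differs from the current set in only one position, so the search exploits local structure rather than sampling the whole space independently.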
“…A group of three algorithms was inspired by the encoding length principle [8]. Other algorithms were derived from graph theory [9], set theory [10], or Monte Carlo sampling [11].…”
Section: Introductionmentioning
confidence: 99%
“…GA-based feature selection methods [5,6] are usually found to perform better than other heuristic search methods for large data sets; however, they also require considerable computation time on such data. Other attempts to decrease the computational time of feature selection include probabilistic search methods such as random hill climbing [7], SCHEMATA+ [8], and the Las Vegas Filter approach [9]. Artificial neural networks (ANNs) have grown in popularity and are used extensively to perform various classification tasks.…”
Section: Introductionmentioning
confidence: 99%
“…Thus, it is often preferable for many high-dimensional problems to employ heuristic methods that compromise subset optimality for better computational efficiency. A few examples of such search strategies are sequential search [5,6], floating search [7][8][9], random mutation hill climbing [10], and evolutionary approaches [11][12][13][14].…”
Section: Introductionmentioning
confidence: 99%
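The citation statement above applies random mutation hill climbing to feature subset selection rather than prototype selection. The same hill-climbing loop carries over with a different representation: a bit mask over features, mutated by flipping one bit. This is a generic sketch with an illustrative name (`rmhc_features`) and a caller-supplied scoring function, not the cited papers' exact formulation.

```python
import random

def rmhc_features(n_features, score_fn, iters=100, seed=0):
    """Random mutation hill climbing over feature subsets (sketch):
    represent a subset as a boolean mask, flip one random bit per
    iteration, and keep the flip only if the score does not drop."""
    rng = random.Random(seed)
    mask = [rng.random() < 0.5 for _ in range(n_features)]
    best = score_fn(mask)
    for _ in range(iters):
        i = rng.randrange(n_features)
        mask[i] = not mask[i]          # mutate: toggle one feature in/out
        s = score_fn(mask)
        if s >= best:
            best = s                   # keep the flip
        else:
            mask[i] = not mask[i]      # revert the flip
    return mask, best
```

In practice `score_fn` would be a classifier's validation accuracy on the masked features; any monotone proxy works for the search itself.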