We compare eleven methods for finding prototypes upon which to base the nearest prototype classifier. Four methods for prototype selection are discussed: Wilson + Hart (a condensation + error-editing method), and three types of combinatorial search (random search, genetic algorithm, and tabu search). Seven methods for prototype extraction are discussed: unsupervised vector quantization, supervised learning vector quantization (with and without training counters), decision surface mapping, a fuzzy version of vector quantization, c-means clustering, and bootstrap editing. These eleven methods can be usefully divided in two other ways: by whether they employ pre- or postsupervision, and by whether the number of prototypes found is user-defined or "automatic." Generalization error rates of the eleven methods are estimated on two synthetic and two real data sets. Offering the usual disclaimer that these are just a limited set of experiments, we feel confident in asserting that presupervised extraction methods offer a better chance of success to the casual user than postsupervised selection schemes. Finally, our calculations do not suggest that methods which find the "best" number of prototypes "automatically" are superior to methods for which the user simply specifies the number of prototypes.
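To fix the basic operation all eleven methods feed into, the following is a minimal sketch of the nearest prototype classifier itself: a test point is assigned the class label of the closest prototype under Euclidean distance. The prototype coordinates and labels here are hypothetical toy values, not taken from the paper's data sets.

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def nearest_prototype(x, prototypes, labels):
    """Return the label of the prototype nearest to x (1-NP rule)."""
    best = min(range(len(prototypes)), key=lambda i: dist(x, prototypes[i]))
    return labels[best]

# Hypothetical prototype set: two prototypes per class on a toy 2-D problem.
prototypes = [(0.0, 0.0), (1.0, 1.0), (4.0, 4.0), (5.0, 5.0)]
labels = ["A", "A", "B", "B"]

print(nearest_prototype((0.2, 0.3), prototypes, labels))  # nearest prototype is (0, 0) -> "A"
print(nearest_prototype((4.6, 4.4), prototypes, labels))
```

Selection methods would choose `prototypes` as a subset of the training data, while extraction methods would compute them as new points (e.g., by vector quantization); the classification rule above is the same in either case.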