1991
DOI: 10.1109/72.80344

Adaptive nearest neighbor pattern classification

Abstract: A variant of nearest-neighbor (NN) pattern classification and supervised learning by learning vector quantization (LVQ) is described. The decision surface mapping method (DSM) is a fast supervised learning algorithm and is a member of the LVQ family of algorithms. A relatively small number of prototypes are selected from a training set of correctly classified samples. The training set is then used to adapt these prototypes to map the decision surface separating the classes. This algorithm is compared with NN p…
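The prototype adaptation summarized in the abstract can be sketched as follows. This is a minimal interpretation, not a verbatim transcription of Geva and Sitte's algorithm: the function name, the learning rate, and the exact push/pull rule (update only on misclassification, repelling the winning wrong-class prototype and attracting the nearest correct-class prototype) are assumptions based on the abstract's summary of DSM.

```python
import numpy as np

def dsm_epoch(prototypes, proto_labels, X, y, lr=0.1):
    """One epoch of a DSM-style update (sketch). Prototypes are adapted
    only when a training sample is misclassified by its nearest prototype:
    the offending prototype is pushed away from the sample, and the nearest
    prototype of the sample's true class is pulled toward it."""
    for x, c in zip(X, y):
        d = np.linalg.norm(prototypes - x, axis=1)
        nearest = np.argmin(d)
        if proto_labels[nearest] == c:
            continue  # correctly classified: no update in this sketch
        # push the wrong-class winner away from the sample
        prototypes[nearest] -= lr * (x - prototypes[nearest])
        # pull the nearest prototype of the true class toward the sample
        same = np.where(proto_labels == c)[0]
        winner = same[np.argmin(d[same])]
        prototypes[winner] += lr * (x - prototypes[winner])
    return prototypes
```

Because updates fire only near classification errors, the prototypes concentrate along the decision surface rather than modeling class densities, which is what makes a small prototype set sufficient.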

Cited by 118 publications (61 citation statements)
References 3 publications
“…13 Akin to the VQ family is the clustering-and-relabeling approach, which itself has many variations, the most popular of which are perhaps the c-means families shown in Figure 3. 1 Cluster centroids are used as the replacement prototypes V, so any clustering method that produces centroids can be used for this purpose.…”
Section: (unlabeled)
confidence: 99%
“…In the learning phase the values of the prototypes are updated according to the training samples used for learning. Two different adaptive learning approaches can be adopted: the first is based on the use of learning vector quantization (LVQ) methods proposed by Kohonen (1995) and the second is the decision surface mapping (DSM) algorithm proposed by Geva and Sitte (1991).…”
Section: Non-parametric Classifiers
confidence: 99%
“…We desire non ad hoc methods and a good initial N2 choice (else training time is excessive, optimization is not necessarily obtained, and comparisons are not easily possible). Techniques [20] that select the number of N2 neurons per class based upon the a priori probability of each class occurring did not perform well (we attribute this to the fact that the number of N2 neurons per class should be based on how disjoint a class is and how similar two classes are). Methods which use linear algebra [21] and covariance [22] techniques to calculate subspaces etc.…”
Section: Approaches
confidence: 99%