The 2011 International Joint Conference on Neural Networks (IJCNN 2011)
DOI: 10.1109/ijcnn.2011.6033525

A SOM combined with KNN for classification task

Abstract: Classification is a common task that humans perform when making decisions. Techniques from Artificial Neural Networks (ANNs) and statistics are used to automate classification. This work presents a method based on the Self-Organizing Map (SOM) neural network and the K-Nearest Neighbor (KNN) statistical classifier, called SOM-KNN, applied to digit recognition in car license plates. While being much faster than more traditional methods, the proposed SOM-KNN maintains competitive classification rates with respect to them. The exper…
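
The abstract leaves the exact coupling of the two models unstated, but a common SOM-KNN arrangement is to let the SOM quantize the training set into a small grid of labeled prototypes and then run KNN over those prototypes only, which would account for the speed advantage over plain KNN. Below is a minimal sketch under that assumption; MiniSom and scikit-learn stand in for the authors' implementation, and the grid size, iteration count, and k are illustrative, not the paper's settings.

import numpy as np
from minisom import MiniSom
from sklearn.neighbors import KNeighborsClassifier

def train_som_knn(X_train, y_train, grid=(10, 10), k=3):
    # Quantize the training set: each SOM unit's weight vector becomes a
    # prototype summarizing the training samples mapped to it.
    som = MiniSom(grid[0], grid[1], X_train.shape[1],
                  sigma=1.0, learning_rate=0.5, random_seed=42)
    som.train_random(X_train, 1000)

    # Label each winning unit by majority vote of the samples it attracts.
    votes = {}
    for x, y in zip(X_train, y_train):
        votes.setdefault(som.winner(x), []).append(y)
    weights = som.get_weights()
    protos = np.array([weights[i, j] for (i, j) in votes])
    labels = np.array([max(set(v), key=v.count) for v in votes.values()])

    # KNN now searches only the prototypes, not the full training set --
    # this is where the speed-up over classical KNN comes from.
    knn = KNeighborsClassifier(n_neighbors=min(k, len(labels)))
    knn.fit(protos, labels)
    return knn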

Cited by 20 publications (15 citation statements)
References 13 publications
“…K-NN [1], FK-NN [2], EK-NN [11], SOM-KNN [34] and BK-NN [16]), and ENN classifier [12]. The different methods have been programmed and tested with MATLAB™ software.…”
Section: Methods (mentioning)
Confidence: 99%
“…K-NN [1], FK-NN [2], SOM-KNN [34], EK-NN [11] and BK-NN [16]). Basic information about the data sets used is given in Table IV.…”
Section: Methods (mentioning)
Confidence: 99%
“…kNN classifies an object based on its minimal distance to training examples, by a majority vote of its neighbors [9]. Specifically, the object is assigned to the most common class among its k nearest neighbors.…”
Section: K-Nearest Neighbor (mentioning)
Confidence: 99%
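
The majority-vote rule quoted above is easy to demonstrate. The snippet below is an illustration using scikit-learn, not code from the cited paper; the toy data and k=3 are arbitrary.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Two well-separated toy clusters, labeled 0 and 1.
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y = np.array([0, 0, 0, 1, 1, 1])

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)
# Each query point takes the most common class among its 3 nearest neighbors.
print(knn.predict([[0.5, 0.5], [5.5, 5.5]]))  # -> [0 1]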
“…The classification strategy comprises three operations: (i) an unlabeled sample is compared to the training dataset through a similarity measure; (ii) the labeled objects are sorted in order of similarity to the unlabeled sample; and finally, (iii) the unlabeled sample is assigned the majority class of its nearest neighbor objects. Because of its simple algorithm (three basic operations) and reduced number of parameters (the similarity measure and the number k of nearest neighbors), this instance-based learning algorithm is widely used in the data mining community as a benchmark [15].…”
Section: Introduction (mentioning)
Confidence: 99%
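
The three operations listed in the statement above map directly onto a from-scratch implementation. A minimal sketch, assuming Euclidean distance as the similarity measure (the measure and k being, as the quote notes, the algorithm's only parameters):

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    X_train, y_train = np.asarray(X_train), np.asarray(y_train)
    # (i) compare the unlabeled sample to every training example
    dists = np.linalg.norm(X_train - x, axis=1)
    # (ii) sort the labeled objects by similarity (ascending distance)
    nearest = np.argsort(dists)[:k]
    # (iii) assign the majority class of the k nearest neighbors
    return Counter(y_train[nearest]).most_common(1)[0][0]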