The 2010 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2010.5596323

Unsupervised and adaptive category classification for a vision-based mobile robot

Abstract: This paper presents an unsupervised category classification method for time-series images that combines incremental learning of Adaptive Resonance Theory-2 (ART-2) with the self-mapping characteristic of Counter Propagation Networks (CPNs). Our method comprises the following procedures: 1) generating visual words using Self-Organizing Maps (SOMs) from the 128-dimensional descriptors at each feature point of the Scale-Invariant Feature Transform (SIFT), 2) forming labels using unsupervised learning of ART-2, and…
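Step 1 of the pipeline above (visual words via a SOM over SIFT descriptors) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the grid size, learning schedule, and descriptors are assumptions, and real SIFT extraction is replaced by synthetic 128-dimensional vectors.

```python
import numpy as np

def train_som(descriptors, grid=(2, 2), epochs=20, lr0=0.5, seed=0):
    """Train a small Self-Organizing Map; each node becomes one visual word."""
    rng = np.random.default_rng(seed)
    n_nodes = grid[0] * grid[1]
    # Node positions on the 2-D grid, used by the neighborhood function.
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    weights = rng.random((n_nodes, descriptors.shape[1]))
    sigma0 = max(grid) / 2.0
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1.0 - t / epochs) + 1e-3    # shrinking neighborhood
        for x in descriptors:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian neighborhood
            weights += lr * h[:, None] * (x - weights)
    return weights

def quantize(descriptors, weights):
    """Assign each descriptor to its nearest SOM node (visual-word index)."""
    dists = np.linalg.norm(descriptors[:, None, :] - weights[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Toy stand-in for SIFT output: 128-D descriptors from two synthetic clusters.
rng = np.random.default_rng(1)
descs = np.vstack([rng.normal(0.2, 0.05, (30, 128)),
                   rng.normal(0.8, 0.05, (30, 128))])
som = train_som(descs)
words = quantize(descs, som)
```

After training, each node's weight vector acts as a codebook entry, and `quantize` maps any descriptor to a visual-word index, which is the input representation the later ART-2 stage would consume.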

Cited by 8 publications (2 citation statements)
References 8 publications (9 reference statements)
“…Furthermore, the combination of ART-2 and CPNs enables unsupervised category formation that labels a large quantity of images in each category automatically. Table 4 shows parameters of OC-SVMs, ART-2, and CPNs based on our former study (Tsukada et al., 2010, 2011; Madokoro et al., 2011b). Herein, we compared our method (Tsukada et al., 2010) with the method proposed by Chen et al. (2009) using the Caltech-256 object category dataset (Griffin et al., 2007).…”
Section: Creating a Category Map Using CPNs
confidence: 99%
“…Table 4 shows parameters of OC-SVMs, ART-2, and CPNs based on our former study (Tsukada et al., 2010, 2011; Madokoro et al., 2011b). Herein, we compared our method (Tsukada et al., 2010) with the method proposed by Chen et al. (2009) using the Caltech-256 object category dataset (Griffin et al., 2007). The results showed that our method outperformed theirs, although the target dataset was aimed at generic object recognition.…”
Section: Creating a Category Map Using CPNs
confidence: 99%
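The category map that the cited section discusses can be sketched as a minimal counter-propagation network: a Kohonen layer that self-organizes features onto a 2-D grid, plus a Grossberg layer that learns a label distribution per node. The grid size, learning rates, and toy labeled data below are illustrative assumptions (the labels stand in for ART-2 output), not the authors' configuration.

```python
import numpy as np

def cpn_category_map(features, labels, grid=(3, 3), epochs=30, lr=0.3, seed=0):
    """Sketch of a counter-propagation network (CPN) category map.

    The Kohonen layer self-organizes the features onto a 2-D grid; the
    Grossberg layer then learns, per node, a one-hot label distribution,
    so each grid node can be read off as one category.
    """
    rng = np.random.default_rng(seed)
    n_nodes = grid[0] * grid[1]
    n_cls = int(labels.max()) + 1
    kohonen = rng.random((n_nodes, features.shape[1]))
    grossberg = np.zeros((n_nodes, n_cls))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    sigma0 = max(grid) / 2.0
    for t in range(epochs):
        a = lr * (1.0 - t / epochs)                   # decaying learning rate
        sigma = sigma0 * (1.0 - t / epochs) + 1e-3    # shrinking neighborhood
        for x, y in zip(features, labels):
            bmu = np.argmin(np.linalg.norm(kohonen - x, axis=1))
            h = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=1) / (2 * sigma ** 2))
            kohonen += a * h[:, None] * (x - kohonen)        # Kohonen update
            target = np.eye(n_cls)[int(y)]
            grossberg[bmu] += a * (target - grossberg[bmu])  # Grossberg update
    # Category map: winning label per grid node.
    return grossberg.argmax(axis=1).reshape(grid)

# Toy labeled data standing in for ART-2 output: two clusters, two labels.
rng = np.random.default_rng(2)
feats = np.vstack([rng.normal(0.1, 0.05, (20, 8)),
                   rng.normal(0.9, 0.05, (20, 8))])
labs = np.array([0] * 20 + [1] * 20)
cmap = cpn_category_map(feats, labs)
```

Reading the returned grid gives one label per node, which is the sense in which the combined ART-2/CPN pipeline "labels a large quantity of images in each category automatically": every image mapped to a node inherits that node's category.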