2002
DOI: 10.1016/s0893-6080(02)00079-5
Generalized relevance learning vector quantization

Cited by 333 publications (224 citation statements) · References 15 publications
“…The only modification is the incremental adding and testing of new prototype nodes, but compared to cLVQ no feature weighting and selection is performed. In contrast to this, the cGRLVQ additionally applies a feature weighting based on the GRLVQ method proposed by Hammer & Villmann (2002). The GRLVQ weighting is based on the distance d_c^corr to the nearest correctly labeled prototype w_{k^corr(c)} and the distance d_c^incorr to the nearest prototype w_{k^incorr(c)} with an incorrect label:…”
Section: Results (mentioning)
confidence: 99%
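The quoted statement breaks off before the actual formula, but the GRLVQ scheme it describes is standard: a relevance-weighted squared distance d_λ(x, w) = Σ_i λ_i (x_i − w_i)², with prototypes and relevances updated from the nearest correct and nearest incorrect prototype via the ratio μ(x) = (d^corr − d^incorr)/(d^corr + d^incorr). The sketch below is an illustrative single-sample update in that spirit, not the authors' code; the learning rates and the omission of the sigmoid wrapper f(μ) are simplifying assumptions.

```python
import numpy as np

def grlvq_distance(x, w, lam):
    # relevance-weighted squared Euclidean distance d_lambda(x, w)
    return np.sum(lam * (x - w) ** 2)

def grlvq_step(x, y, protos, labels, lam, lr_w=0.05, lr_l=0.01):
    """One online GRLVQ-style update for sample (x, y).

    protos: (P, D) prototype array, labels: (P,) prototype labels,
    lam: (D,) nonnegative relevance weights summing to 1.
    """
    d = np.array([grlvq_distance(x, w, lam) for w in protos])
    corr = np.where(labels == y)[0]
    incorr = np.where(labels != y)[0]
    j = corr[np.argmin(d[corr])]      # nearest prototype with correct label
    k = incorr[np.argmin(d[incorr])]  # nearest prototype with incorrect label
    d_j, d_k = d[j], d[k]
    denom = (d_j + d_k) ** 2
    # attract the correct prototype, repel the incorrect one,
    # scaled by the gradient factors of mu = (d_j - d_k)/(d_j + d_k)
    protos[j] += lr_w * (4 * d_k / denom) * lam * (x - protos[j])
    protos[k] -= lr_w * (4 * d_j / denom) * lam * (x - protos[k])
    # relevance update: shrink weights of dimensions that hurt the margin
    lam = lam - lr_l * ((2 * d_k / denom) * (x - protos[j]) ** 2
                        - (2 * d_j / denom) * (x - protos[k]) ** 2)
    lam = np.clip(lam, 0.0, None)
    lam /= lam.sum()                  # keep relevances normalized
    return protos, lam
```

After many such steps the relevance vector λ concentrates on discriminative input dimensions, which is exactly the feature-weighting behavior the citation attributes to cGRLVQ.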
“…The feature selection method used enables the cLVQ to separate co-occurring categories and allows a resource-efficient representation of categories, which is beneficial for fast interactive and incremental learning of categories. Recently, a variant of an embedded feature selection method for LVQ networks, called iGRLVQ, was proposed by Kietzmann et al. (2008) based on the GRLVQ method (Hammer & Villmann, 2002). This method iteratively removes features with small weighting values λ.…”
Section: Discussion (mentioning)
confidence: 99%
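The iterative pruning step attributed here to iGRLVQ (drop the feature with the smallest relevance λ, then continue training on the remaining features) can be sketched as follows; the function name and the renormalization choice are illustrative assumptions, not the published algorithm verbatim.

```python
import numpy as np

def prune_smallest_relevance(lam, active):
    """Deactivate the active feature with the smallest relevance weight.

    lam: (D,) relevance vector, active: list of still-active feature indices.
    Returns the updated (renormalized) relevance vector and active list.
    """
    idx = min(active, key=lambda i: lam[i])   # least relevant active feature
    active = [i for i in active if i != idx]
    lam = lam.copy()
    lam[idx] = 0.0
    if lam.sum() > 0:
        lam /= lam.sum()                      # renormalize over survivors
    return lam, active
```

In an iGRLVQ-style loop this would be called repeatedly between retraining phases, stopping once classification performance on held-out data degrades.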