Learning vector quantization: The dynamics of winner-takes-all algorithms
Neurocomputing (2006). DOI: 10.1016/j.neucom.2005.12.007

Cited by 30 publications (17 citation statements), published between 2008 and 2024; references 7 publications.
“…offset of the clusters ℓσ, variance within the clusters vσ, learning rate η, and, for NG, the rank function parameter λ. As shown in [5], this method of analysis is in good agreement with large-scale Monte Carlo simulations of the same learning systems for dimensionality as low as N = 200.…”
Section: Analysis of Learning Dynamics (supporting; confidence: 70%)
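The Monte Carlo comparison mentioned in this quote is straightforward to reproduce in outline. The following is a minimal, illustrative sketch, not code from the cited papers: it assumes the standard model of this literature (two spherical Gaussian clusters at ±ℓB with within-cluster variance v, two prototypes trained by winner-takes-all updates with step size η/N) and records the order parameters R and Q in which the theory's differential equations are written; all variable names are my own.

```python
import numpy as np

# Illustrative Monte Carlo sketch of WTA vector quantization in the
# two-cluster model referenced above (assumed setup, not the cited code):
# data xi = s*l*B + noise, with s in {-1,+1} and noise variance v.
rng = np.random.default_rng(0)

N = 200          # input dimension; theory reportedly matches down to N = 200
l = 1.0          # cluster offset along the unit vector B
v = 1.0          # within-cluster variance
eta = 0.5        # learning rate (applied as eta/N per example)
steps = 50 * N   # number of examples; rescaled time alpha = steps / N

B = np.zeros(N)
B[0] = 1.0                               # symmetry-breaking direction
w = rng.normal(scale=1e-3, size=(2, N))  # two prototypes near the origin

for _ in range(steps):
    s = rng.choice((-1, 1))                        # cluster membership
    xi = s * l * B + rng.normal(scale=np.sqrt(v), size=N)
    d = np.sum((xi - w) ** 2, axis=1)              # squared distances
    k = np.argmin(d)                               # winner takes all
    w[k] += (eta / N) * (xi - w[k])                # move winner toward xi

# Order parameters the ODE analysis of [5] is written in:
R = w @ B        # overlaps R_s = w_s . B
Q = w @ w.T      # overlaps Q_st = w_s . w_t
print("alpha =", steps / N, "R =", R, "Q =", Q)
```

Averaging R and Q over independent runs at fixed α is what gets compared against the theoretical curves.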
“…This successful approach has also been reviewed in [8,16], among others. The extension of the theoretical analysis of simple (WTA-based) vector quantization with two prototypes and two clusters introduced in an earlier work [5] is not straightforward. Additional prototypes and clusters introduce more complex interactions in the system that can result in radically different behaviors.…”
Section: Introduction (mentioning; confidence: 98%)
“…We only mention that unsupervised prototype based learning has been treated in complete analogy to the above [43,44]. Technically, it reduces to the consideration of modulation functions which do not depend on the cluster or class label.…”
Section: Dynamics of Prototype Based Learning (mentioning; confidence: 99%)
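For orientation, the prescriptions these quotes discuss all share one update form; the following is a sketch in the standard notation of this literature (the 1/N scaling is the convention used there), not a verbatim formula from the paper:

w_s ← w_s + (η/N) · f_s · (ξ − w_s)

Supervised schemes let the modulation function f_s depend on the class label σ of the example ξ; unsupervised schemes, as the quote notes, use an f_s that depends on the distances only.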
“…The basic competitive WTA Vector Quantization training would be represented, for instance, by the modulation function f_s = Θ(d_{−s} − d_{+s}), which always moves the winning prototype closer to the data. The training prescription can be interpreted as a stochastic gradient descent of a cost function, the so-called quantization error [44]. The exchange and permutation symmetry of prototypes in unsupervised training results in interesting effects which resemble the plateaus discussed in multilayered neural networks, cf.…”
Section: Dynamics of Prototype Based Learning (mentioning; confidence: 99%)
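A compact way to see both claims in this quote is a hedged sketch, assuming two prototypes labeled ±1 as in the quoted notation (function and variable names are mine): the modulation function f_s = Θ(d_{−s} − d_{+s}) picks the winner, and the resulting update is a stochastic gradient descent step on the quantization error e(ξ) = ½ min_s (ξ − w_s)², so the error at the presented example can only decrease.

```python
import numpy as np

def theta(x):
    """Heaviside step: 1 if x > 0, else 0."""
    return 1.0 if x > 0 else 0.0

def wta_update(w, xi, eta):
    """One WTA step; w maps prototype label s in {+1, -1} to its vector."""
    d = {s: float(np.sum((xi - w[s]) ** 2)) for s in (+1, -1)}
    for s in (+1, -1):
        f_s = theta(d[-s] - d[s])              # 1 iff prototype s wins
        w[s] = w[s] + eta * f_s * (xi - w[s])  # winner moves toward xi
    return w

def quantization_error(w, xi):
    """e(xi) = 0.5 * min_s |xi - w_s|^2, the cost WTA descends."""
    return 0.5 * min(np.sum((xi - w[s]) ** 2) for s in (+1, -1))

rng = np.random.default_rng(1)
w = {s: rng.normal(size=3) for s in (+1, -1)}
xi = rng.normal(size=3)

before = quantization_error(w, xi)
w = wta_update(w, xi, eta=0.1)
after = quantization_error(w, xi)
print(before, "->", after)  # decreases: only the winner moved, toward xi
```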
“…This problem relates to unexpected behavior and instabilities of training. It has been shown that even slight variations of the basic LVQ learning scheme yield quite different results [2,3]. Variants of LVQ which can be derived from an explicit cost function are particularly interesting.…”
Section: Introduction (mentioning; confidence: 99%)
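To make "slight variations yield quite different results" concrete, here is a hedged sketch of one such variation (names are mine; the sign convention follows the quoted f_s notation): LVQ1 keeps the same winner selection as the plain WTA rule above but multiplies the update by σ·s, so the winner is attracted by data of its own class and repelled by data of the other class.

```python
import numpy as np

def theta(x):
    return 1.0 if x > 0 else 0.0

def lvq1_update(w, xi, sigma, eta):
    """LVQ1 sketch: w maps prototype label s in {+1, -1} to its vector,
    sigma is the class label of xi. Differs from plain WTA only by the
    sign factor sigma * s."""
    d = {s: float(np.sum((xi - w[s]) ** 2)) for s in (+1, -1)}
    for s in (+1, -1):
        f_s = theta(d[-s] - d[s]) * sigma * s  # attraction or repulsion
        w[s] = w[s] + eta * f_s * (xi - w[s])
    return w

w = {+1: np.array([1.0, 0.0]), -1: np.array([-1.0, 0.0])}
xi, sigma = np.array([1.5, 0.2]), -1   # example from the "wrong" class
w = lvq1_update(w, xi, sigma, eta=0.1) # winner w[+1] is pushed away
```

The repulsive term is exactly the kind of small modification that can change the dynamics qualitatively; for some variants such repulsion is known to destabilize training, which is the instability the quote alludes to.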