[Proceedings 1992] IJCNN International Joint Conference on Neural Networks
DOI: 10.1109/ijcnn.1992.287101
LVQPAK: A software package for the correct application of Learning Vector Quantization algorithms

Cited by 105 publications (70 citation statements)
References 2 publications
“…However, little has been published about LVQ algorithmic settings beyond 1) the general hierarchy of 0 ≤ ( ) ≤ ( ) ≤ 1 for relevance-based LVQ methods [54], 2) specific guidelines for specific applications, e.g. [55], [56], and 3) learning rate convergence methods, e.g. [55].…”
Section: GRLVQI Algorithmic Settings
confidence: 99%
“…Prototype-based supervised algorithms. Learning Vector Quantization (LVQ) is one of the well-known nearest prototype learning algorithms [25]. LVQ can be considered to be a supervised clustering algorithm, in which each weight vector can be interpreted as a cluster center.…”
Section: Background and Related Work
confidence: 99%
“…Learning Vector Quantization (LVQ) was developed by Kohonen et al [6]. The idea of this algorithm is to find a natural grouping in a set of data.…”
Section: Learning Vector Quantization
confidence: 99%
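The excerpts above describe LVQ as a nearest-prototype supervised learner in which each weight vector acts as a labeled cluster center: the winning prototype is pulled toward a training sample when their labels agree and pushed away when they disagree. The following is a minimal LVQ1 sketch of that idea, not LVQPAK's actual implementation; the function names and the linearly decaying learning-rate schedule are illustrative assumptions.

```python
import numpy as np

def lvq1_fit(X, y, n_prototypes_per_class=1, lr=0.3, epochs=30, seed=0):
    """Train LVQ1 prototypes: pull the nearest prototype toward a sample
    when labels match, push it away when they differ.
    (Sketch only; schedule and initialisation are illustrative choices.)"""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    # Initialise prototypes from randomly chosen samples of each class.
    protos, proto_labels = [], []
    for c in classes:
        idx = rng.choice(np.flatnonzero(y == c),
                         size=n_prototypes_per_class, replace=False)
        protos.append(X[idx])
        proto_labels += [c] * n_prototypes_per_class
    protos = np.vstack(protos).astype(float)
    proto_labels = np.array(proto_labels)

    for epoch in range(epochs):
        alpha = lr * (1 - epoch / epochs)  # linearly decaying learning rate
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(protos - X[i], axis=1)
            w = np.argmin(d)  # winner: nearest prototype
            sign = 1.0 if proto_labels[w] == y[i] else -1.0
            protos[w] += sign * alpha * (X[i] - protos[w])
    return protos, proto_labels

def lvq1_predict(X, protos, proto_labels):
    """Classify each row of X by the label of its nearest prototype."""
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]
```

On two well-separated Gaussian blobs this sketch recovers the class structure with a handful of prototypes, which is the "supervised clustering" reading given in the excerpt above.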
“…When the primary sequence is used as input, a binary encoding must be applied as pre-processing; otherwise the large spread of raw input values (1-20) would dominate the result. We therefore used a 95-node input layer in which each sliding-window index is represented by a 5-node binary signal.…”
Section: Artificial Neural Network
confidence: 99%
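The arithmetic behind the excerpt above: 19 window positions times 5 binary nodes per position gives the 95-node input layer. A sketch of one such scheme, assuming a plain 5-bit binary code of the amino-acid index 1-20 (the cited paper may use a different 5-node code; `encode_window` is a hypothetical helper):

```python
def encode_window(indices, bits=5):
    """Encode each amino-acid index (1-20) in a sliding window as a
    `bits`-bit binary vector, most significant bit first.
    A 19-residue window then yields 19 * 5 = 95 input nodes."""
    out = []
    for idx in indices:
        if not 1 <= idx <= 20:
            raise ValueError(f"amino-acid index out of range: {idx}")
        out.extend((idx >> b) & 1 for b in reversed(range(bits)))
    return out
```

This keeps every input in {0, 1}, so no single position can outweigh the others the way the raw 1-20 values would.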