2017 IEEE National Aerospace and Electronics Conference (NAECON)
DOI: 10.1109/naecon.2017.8268804
Stochastic approximation for learning rate optimization for generalized relevance learning vector quantization

Cited by 2 publications (5 citation statements). References 31 publications.
“…General Relevance Learning Vector Quantization Improved (GRLVQI) is an extension of Kohonen's Learning Vector Quantization (LVQ) [33], which is in the family of self-organizing Neural Network (NN) approaches using nearest Prototype Vector (PV) optimization. As a classifier, LVQ associates a PV with a given class (typically multiple PVs per class).…”
Section: Post-Classification GRLVQI (mentioning)
confidence: 99%
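To make the nearest-prototype idea in this citation concrete, the following is a minimal sketch of classic LVQ1 with class-labeled prototype vectors. It assumes a Euclidean distance and per-class prototype initialization from training samples; the function names and the learning rate value are illustrative, not taken from the cited paper.

```python
import numpy as np

def init_prototypes(X, y, prototypes_per_class=2, rng=None):
    """Initialize prototype vectors (PVs) by sampling training points from each class."""
    rng = np.random.default_rng(rng)
    protos, labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.flatnonzero(y == c), size=prototypes_per_class, replace=False)
        protos.append(X[idx])
        labels.append(np.full(prototypes_per_class, c))
    return np.vstack(protos), np.concatenate(labels)

def lvq1_step(x, y_true, protos, proto_labels, lr=0.05):
    """One LVQ1 update: the closest ('firing') PV is moved toward the sample if its
    class label matches, and away from it otherwise. Updates protos in place."""
    d = np.linalg.norm(protos - x, axis=1)
    w = int(np.argmin(d))                       # index of the winning / firing prototype
    sign = 1.0 if proto_labels[w] == y_true else -1.0
    protos[w] += sign * lr * (x - protos[w])
    return w
```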
“…When an observation is input to the network, the PV closest to the observation "fires" and the prediction accuracy is based on whether or not the firing PV(s) are associated with the correct class for the observation. GRLVQI extends LVQ by incorporating cost functions, learning methods, and logic and operation improvements [33].…”
Section: Post-Classification GRLVQI (mentioning)
confidence: 99%
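The firing-prototype prediction rule described above can be sketched as follows. GRLVQ-family methods use a relevance-weighted squared Euclidean distance, where a nonnegative relevance weight per input feature scales that feature's contribution; this sketch shows only the prediction side (the learned relevance vector `lam` and all names are assumptions for illustration, not the cited paper's code).

```python
import numpy as np

def relevance_distance(x, protos, lam):
    """Relevance-weighted squared Euclidean distance used by GRLVQ-family methods:
    d(x, w) = sum_j lam_j * (x_j - w_j)^2, with lam_j >= 0 and sum(lam) = 1."""
    return ((protos - x) ** 2 * lam).sum(axis=1)

def predict(X, protos, proto_labels, lam):
    """Let the nearest (firing) prototype's class label be the prediction."""
    return np.array([proto_labels[np.argmin(relevance_distance(x, protos, lam))]
                     for x in X])

def accuracy(X, y, protos, proto_labels, lam):
    """Fraction of observations whose firing PV carries the correct class."""
    return float(np.mean(predict(X, protos, proto_labels, lam) == y))
```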
“…In doing so, this extends the initial work of [18] by allowing GRLVQI to find optimal settings in an iterative fashion for its four continuous hyperparameters: the gradient descent learning rate, the relevance learning rate, conscience rate 1, and conscience rate 2. Additionally, the work of [13] is extended by a more efficient approach to optimizing GRLVQI settings and a more robust one that averages replication results.…”
Section: Introduction (mentioning)
confidence: 90%
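The iterative tuning of continuous hyperparameters referenced here is commonly done with simultaneous perturbation stochastic approximation (SPSA). The following is a generic SPSA-style sketch under stated assumptions: `train_and_score` is a hypothetical placeholder that trains GRLVQI with a given hyperparameter vector and returns validation accuracy, and the gain constants are illustrative defaults, not values from the cited papers.

```python
import numpy as np

def spsa_tune(train_and_score, theta0, iters=50, a=0.05, c=0.05,
              alpha=0.602, gamma=0.101, seed=0):
    """SPSA-style stochastic approximation over continuous hyperparameters
    (e.g., two learning rates and two conscience rates).

    train_and_score(theta) -> validation accuracy (higher is better); it is a
    user-supplied placeholder, not part of the cited work."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, iters + 1):
        ak = a / k ** alpha                                   # gain for the update step
        ck = c / k ** gamma                                   # gain for the perturbation size
        delta = rng.choice([-1.0, 1.0], size=theta.shape)     # Rademacher perturbation
        y_plus = train_and_score(np.clip(theta + ck * delta, 1e-6, None))
        y_minus = train_and_score(np.clip(theta - ck * delta, 1e-6, None))
        ghat = (y_plus - y_minus) / (2.0 * ck * delta)        # simultaneous-perturbation gradient estimate
        theta = np.clip(theta + ak * ghat, 1e-6, None)        # ascend, since accuracy is maximized
    return theta
```

A call might look like `spsa_tune(score_fn, theta0=[0.1, 0.01, 0.1, 0.01])`, where the starting values are again only illustrative; only two noisy objective evaluations are needed per iteration regardless of how many hyperparameters are tuned.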
“…[16], [17], and 2) stochastic estimation, e.g. [18]. Although various experimental design approaches have been used to find optimal hyperparameter settings, see [19], these do not avoid the limitation of having to specify the high and low settings to examine.…”
Section: Introduction (mentioning)
confidence: 99%
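For contrast with the stochastic estimation approach above, a two-level full-factorial experimental design over the four GRLVQI hyperparameters looks like the sketch below. The low/high bounds are illustrative placeholders only; the point is that such a design must commit to them up front, which is the limitation the citation notes.

```python
from itertools import product

# Illustrative low/high bounds; a 2^4 full-factorial design evaluates only
# combinations of these pre-chosen extremes.
bounds = {
    "learning_rate":     (0.01, 0.3),
    "relevance_rate":    (0.001, 0.1),
    "conscience_rate_1": (0.01, 0.5),
    "conscience_rate_2": (0.001, 0.1),
}
design = [dict(zip(bounds, combo)) for combo in product(*bounds.values())]
print(len(design))  # 16 candidate settings, all at the specified extremes
```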