Border-Sensitive Learning in Kernelized Learning Vector Quantization
2013
DOI: 10.1007/978-3-642-38679-4_35

Cited by 7 publications (4 citation statements)
References 25 publications
“…However, gradient descent learning is only preserved for non-vanishing values θ. For 0 < θ ≪ 1 good approximations can be achieved, which realizes border-sensitive classification learning (BSCL) for GLVQ (BS-GLVQ) (Kästner et al 2013). In this setting, only those data samples v that are located near the class borders contribute significantly to the prototype learning.…”
Section: Classification By Learning Vector Quantization
confidence: 99%
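The mechanism this excerpt describes is easy to make concrete. The sketch below is a minimal illustration, not the cited authors' code: the function names, the squared Euclidean distance and the concrete θ value are assumptions. It shows the GLVQ classifier value μ(v) and the sigmoid gradient weight that enforces border sensitivity for small θ:

```python
import numpy as np

def glvq_mu(v, w_plus, w_minus):
    """GLVQ classifier value mu(v) = (d+ - d-) / (d+ + d-), where d+ is the
    squared distance to the closest prototype of the correct class and d-
    the squared distance to the closest prototype of any other class."""
    d_plus = float(np.sum((v - w_plus) ** 2))
    d_minus = float(np.sum((v - w_minus) ** 2))
    return (d_plus - d_minus) / (d_plus + d_minus)

def border_weight(mu, theta):
    """Gradient weight f'(mu) of the sigmoid f(mu) = 1 / (1 + exp(-mu / theta)).
    For 0 < theta << 1 the weight is sharply peaked around mu = 0, so only
    samples near the class border contribute significantly to learning."""
    f = 1.0 / (1.0 + np.exp(-mu / theta))
    return f * (1.0 - f) / theta
```

For example, with theta = 0.05 a sample exactly on the border (mu = 0) receives weight 5.0, while a sample with mu = 0.5 receives roughly 9e-4, i.e. it barely influences the prototype update.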
“…Thus, the active set determines the border sensitivity of the GLVQ model. In consequence, small Θ-values realize border-sensitive learning for GLVQ, and prototypes are forced to move toward the class borders [48].…”
Section: Robustness, Classification Certainty and Border Sensitivity
confidence: 99%
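To illustrate the active-set idea from this excerpt, here is a hedged sketch; the tolerance-based cutoff is an assumption made for illustration, not the analytical definition used in the cited work. It shows how the set of effectively contributing samples shrinks as Θ decreases:

```python
import numpy as np

def active_set(mu_values, theta, tol=1e-3):
    """Indices of samples whose sigmoid gradient weight is non-negligible.
    mu_values holds the GLVQ classifier values mu(v) of the training data;
    for small theta the surviving indices form a narrow band around the
    class borders (mu close to 0)."""
    mu_values = np.asarray(mu_values, dtype=float)
    f = 1.0 / (1.0 + np.exp(-mu_values / theta))
    weights = f * (1.0 - f) / theta
    return np.where(weights >= tol * weights.max())[0]
```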
“…In a similar way, all quantities of a confusion matrix (see Fig. 4) and combinations thereof can be obtained as cost functions for a GLVQ-like classifier, keeping the idea of prototype learning [48]. In particular, many statistical quantities used in medicine, bioinformatics and the social sciences for classification assessment, like the precision π and the recall ρ defined by π = TP/(TP + FP) and ρ = TP/(TP + FN), can be explicitly optimized by a GLVQ-like classifier.…”
Section: Generative Versus Discriminative Models, Asymmetric Error As
confidence: 99%
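The excerpt's claim that confusion-matrix statistics can serve as GLVQ cost functions rests on replacing hard counts with differentiable soft counts. The following sketch shows one plausible construction under a sigmoid relaxation; it is not the exact cost function of [48]:

```python
import numpy as np

def soft_precision(mu_pos, mu_neg, theta=0.1):
    """Differentiable surrogate of the precision pi = TP / (TP + FP).
    mu_pos / mu_neg are GLVQ classifier values of the positive / negative
    training samples; mu(v) < 0 means v is classified correctly. The
    sigmoid f approximates the 0/1 misclassification indicator, giving
    soft confusion-matrix counts that admit gradient descent."""
    f = lambda mu: 1.0 / (1.0 + np.exp(-np.asarray(mu, dtype=float) / theta))
    tp = float(np.sum(1.0 - f(mu_pos)))  # positives classified as positive
    fp = float(np.sum(f(mu_neg)))        # negatives classified as positive
    return tp / (tp + fp)
```

A recall surrogate follows the same pattern, with the soft count of false negatives (the sum of f over mu_pos) replacing the false positives in the denominator.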
“…For comparative purposes, they also evaluated the performance of an OVA SVM and a k-NN model, which exhibited poorer accuracies (93.7% and 90.6%, respectively). In the same way, a sparse kernelized matrix Learning Vector Quantization (LVQ) model was employed in [Kästner et al, 2013] for classification of the HAR dataset, achieving 96.23% test accuracy and differing by only 0.17% from the first approach. Their method was a variant of LVQ in which metric adaptation with only one prototype vector per class was proposed.…”
Section: Dataset Publication
confidence: 99%
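For context, the metric adaptation mentioned here can be sketched with the generic (non-sparse, non-kernelized) matrix-LVQ dissimilarity. The variable names and the plain Euclidean feature space below are assumptions for illustration; the cited model instead operates in a kernel feature space:

```python
import numpy as np

def matrix_lvq_distance(v, w, omega):
    """Adaptive dissimilarity d(v, w) = (v - w)^T Omega^T Omega (v - w).
    The rectangular projection matrix Omega is learned together with the
    prototypes and realizes the metric adaptation."""
    diff = omega @ (np.asarray(v, dtype=float) - np.asarray(w, dtype=float))
    return float(diff @ diff)

def predict(v, prototypes, labels, omega):
    """Nearest-prototype classification with one prototype per class."""
    dists = [matrix_lvq_distance(v, w, omega) for w in prototypes]
    return labels[int(np.argmin(dists))]
```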