2015
DOI: 10.1016/j.neucom.2013.11.048

Kernelized vector quantization in gradient-descent learning

Cited by 33 publications (15 citation statements)
References 66 publications
“…A more elegant way is to replace the Euclidean distance directly by the kernel distance d_κ generated from the kernel κ. If the kernel distance d_κ(v, w_k) is differentiable, like the RBF kernel (23), we can immediately plug this into GLVQ, obtaining a kernel variant which works exactly in the same Hilbert space [57]. Obviously, this trick could also be applied to RSLVQ.…”
Section: Beyond the Euclidean World - GLVQ With Non-standard Dissimilarities
confidence: 99%
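The trick the citing authors quote is concrete enough to sketch. Below is a minimal, hypothetical NumPy illustration of one stochastic GLVQ gradient step in which the squared Euclidean distance is replaced by the differentiable RBF kernel distance d_κ, so the prototypes remain in the original data space; the function names, the bandwidth sigma, and the learning rate eta are assumptions for illustration, not the paper's reference implementation.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # RBF kernel kappa(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def kernel_distance(x, y, sigma=1.0):
    # Kernel distance d_kappa^2 = kappa(x,x) - 2 kappa(x,y) + kappa(y,y);
    # for the RBF kernel kappa(x,x) = 1, so this is 2 (1 - kappa(x,y)).
    return 2.0 * (1.0 - rbf_kernel(x, y, sigma))

def kernel_glvq_step(v, c, prototypes, labels, eta=0.05, sigma=1.0):
    """One stochastic gradient step of GLVQ with the RBF kernel distance.

    Because d_kappa is differentiable in the prototype w, the usual
    GLVQ update carries over unchanged (hypothetical sketch).
    """
    d = np.array([kernel_distance(v, w, sigma) for w in prototypes])
    same, diff = labels == c, labels != c
    j_p = np.where(same)[0][np.argmin(d[same])]   # closest correct prototype
    j_m = np.where(diff)[0][np.argmin(d[diff])]   # closest incorrect prototype
    d_p, d_m = d[j_p], d[j_m]

    # Classifier function mu = (d+ - d-) / (d+ + d-) and its derivatives.
    denom = (d_p + d_m) ** 2 + 1e-12
    dmu_dp = 2.0 * d_m / denom
    dmu_dm = -2.0 * d_p / denom

    # Derivative of the RBF kernel distance w.r.t. the prototype:
    # d/dw d_kappa(v, w) = -2 kappa(v, w) (v - w) / sigma^2.
    def grad_d(w):
        return -2.0 * rbf_kernel(v, w, sigma) * (v - w) / sigma ** 2

    prototypes[j_p] -= eta * dmu_dp * grad_d(prototypes[j_p])
    prototypes[j_m] -= eta * dmu_dm * grad_d(prototypes[j_m])
    return prototypes
```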
“…First integration attempts of kernel distances into GLVQ were suggested in [72] and [80], using various approximation techniques to determine the gradient learning in the kernel-associated Hilbert space H. An elementary alternative is the utilization of differentiable universal kernels [103], based on the theory of universal kernels [61,88,94]. This approach allows the adaptation of the prototypes in the original data space but equipped with the kernel distance generated by the differentiable kernel, i.e.…”
Section: Appropriate Metrics and Metric Adaptation For Vector Data
confidence: 99%
“…This approach allows the adaptation of the prototypes in the original data space but equipped with the kernel distance generated by the differentiable kernel, i.e. the metric space (V, d_κΦ) [104,103]. Hence, such a distance is also differentiable according to (36). For example, exponential kernels are universal, and can be used together with the above-mentioned Minkowski-p-norms and the linear data mapping (35). The natural extension of vector quantization is matrix quantization.…”
Section: Appropriate Metrics and Metric Adaptation For Vector Data
confidence: 99%
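To make the composed metric (V, d_κΦ) concrete, here is a small hypothetical sketch of the construction the quote describes: a linear data mapping Φ(x) = Ωx followed by the kernel distance of an exponential kernel built on a Minkowski-p norm; Omega, p, and sigma are illustrative assumptions, not values from the paper.

```python
import numpy as np

def minkowski_p(x, y, p=3):
    # Minkowski-p distance ||x - y||_p
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

def composed_kernel_distance(v, w, Omega, p=3, sigma=1.0):
    # Composed metric d_kappaPhi(v, w): first apply the linear data
    # mapping Phi(x) = Omega x, then the kernel distance of an
    # exponential kernel built on the Minkowski-p norm.
    kappa = np.exp(-minkowski_p(Omega @ v, Omega @ w, p) / sigma)
    return 2.0 * (1.0 - kappa)   # kappa(x, x) = 1 for exponential kernels

# Usage: a 2x3 mapping projects 3-d data to 2-d before the kernel distance.
Omega = np.array([[1.0, 0.0, 0.5], [0.0, 1.0, -0.5]])
print(composed_kernel_distance(np.ones(3), np.zeros(3), Omega))
```

Because both the mapping and the kernel are differentiable, gradients with respect to prototypes (and, in matrix quantization, with respect to Omega itself) remain available.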
“…There are several methods that have been used to optimize the kernel and select the regression parameters, such as cross-validation learning [5,6], gradient-descent learning [7,8], evolutionary learning [9,10], and positive semidefinite programming learning [11,12]. Studies on the support vector regression model and the selection of its kernel parameters are relatively few, and they primarily use grid-search cross validation and evolutionary methods.…”
Section: Introduction
confidence: 99%
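As an illustration of the first strategy in that list, here is a minimal grid-search cross-validation sketch for selecting the RBF kernel width and the regression parameters of support vector regression with scikit-learn; the toy data and the grid values are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

# Toy regression data (any (X, y) pair would do here).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

# Grid-search cross validation over the RBF kernel width (gamma) and
# the regression parameters C and epsilon; grid values are illustrative.
search = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={
        "gamma": [0.01, 0.1, 1.0],
        "C": [0.1, 1.0, 10.0],
        "epsilon": [0.01, 0.1],
    },
    cv=5,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```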