2021
DOI: 10.3390/math9090957
Measure of Similarity between GMMs by Embedding of the Parameter Space That Preserves KL Divergence

Abstract: In this work, we deliver a novel measure of similarity between Gaussian mixture models (GMMs) based on a neighborhood preserving embedding (NPE) of the parameter space, which projects the components of the GMMs, assumed to lie close to a lower-dimensional manifold. By doing so, we obtain a transformation from the original high-dimensional parameter space into a much lower-dimensional parameter space. Resolving the distance between two GMMs is therefore reduced to (taking into account the corresponding …
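The pipeline the abstract outlines can be sketched as follows: each Gaussian component is mapped to an SPD matrix (a Lovric-type embedding, mentioned in the citing papers below), flattened, projected by a learned NPE matrix, and the GMM distance is then computed between the projected components. In the Python sketch below, the projection matrix A, the combination rule in gmm_similarity, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def lovric_embedding(mu, sigma):
    """Map a d-dimensional Gaussian N(mu, sigma) to an SPD matrix in
    Sym++(d+1) (a common form of the Lovric et al. embedding)."""
    d = mu.shape[0]
    P = np.empty((d + 1, d + 1))
    P[:d, :d] = sigma + np.outer(mu, mu)
    P[:d, d] = mu
    P[d, :d] = mu
    P[d, d] = 1.0
    return np.linalg.det(sigma) ** (-1.0 / (d + 1)) * P

def embed_components(gmm, A):
    """Flatten each component's SPD representative and project it with a
    (here assumed precomputed) NPE projection matrix A."""
    return np.stack([A @ lovric_embedding(mu, s).ravel()
                     for mu, s in zip(gmm["means"], gmm["covs"])])

def gmm_similarity(gmm1, gmm2, A):
    """Toy weighted nearest-component distance between projected components;
    the paper combines the component distances differently."""
    Y1, Y2 = embed_components(gmm1, A), embed_components(gmm2, A)
    D = np.linalg.norm(Y1[:, None, :] - Y2[None, :, :], axis=-1)
    w1, w2 = gmm1["weights"], gmm2["weights"]
    return 0.5 * (w1 @ D.min(axis=1) + w2 @ D.min(axis=0))

# Illustrative usage with random parameters and a random stand-in projection
rng = np.random.default_rng(0)
d, k = 2, 4                                  # Gaussian dim, target dim
A = rng.standard_normal((k, (d + 1) ** 2))   # stand-in for a learned NPE matrix
make = lambda n: {"weights": np.full(n, 1.0 / n),
                  "means": rng.standard_normal((n, d)),
                  "covs": np.stack([np.eye(d)] * n)}
print(gmm_similarity(make(3), make(2), A))
```

The point of the construction is that once the components live in a low-dimensional Euclidean space, the expensive per-pair KL computations are replaced by cheap vector distances.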

Cited by 2 publications (14 citation statements)
Citation types: 0 supporting, 14 mentioning, 0 contrasting
References 35 publications
“…The reduction in computational complexity is crucial in such systems, so various solutions that significantly reduce the computational complexity without significantly decreasing the recognition accuracy, thereby addressing the mentioned issue, have been proposed. Some of these EMD-based measures, as can be seen in [25][26][27], use graph Laplacian-based procedures, modified to operate on the Sym++(d) cone instead of R^d, for example the LPP [28] and NPE [29] dimensionality reduction, i.e., manifold learning techniques. Nevertheless, all the mentioned embedding-based GMM similarity measures are based on shallow ML models, as well as linear projection operators, which transform features from the original high-dimensional parameter space, in which the SPD representatives lie as elements of the Sym++(d) cone, to a low-dimensional Euclidean space of transformed parameters, as used in the calculation formulas of the GMM measures.…”
Section: Introduction (mentioning)
confidence: 99%
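As a concrete reference point for the excerpt above, the following is a minimal, purely Euclidean sketch of the textbook LPP recipe it refers to, with SPD representatives flattened by a log-Euclidean map (one common choice). The adaptation to the Sym++(d) cone used in [25][26][27][28] is not reproduced here; spd_to_vector and lpp_projection are illustrative names.

```python
import numpy as np
from scipy.linalg import eigh, logm

def spd_to_vector(P):
    """Log-Euclidean flattening: matrix logarithm then vectorization, one
    common way to move an SPD representative into a Euclidean space."""
    return logm(P).real.ravel()

def lpp_projection(X, n_neighbors=5, t=1.0, out_dim=2):
    """Textbook LPP: heat-kernel weights on a kNN graph, then the generalized
    eigenproblem  X^T L X a = lam X^T D X a  solved for the smallest lam."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:n_neighbors + 1]      # skip the point itself
        W[i, nbrs] = np.exp(-d2[i, nbrs] / t)
    W = np.maximum(W, W.T)                               # symmetrize the graph
    Dg = np.diag(W.sum(1))
    L = Dg - W                                           # graph Laplacian
    vals, vecs = eigh(X.T @ L @ X,
                      X.T @ Dg @ X + 1e-9 * np.eye(X.shape[1]))
    return vecs[:, :out_dim]                             # projection operator

# Usage: flatten SPD matrices, learn the projection, embed the representatives
spd = [np.eye(3) + 0.1 * i * np.ones((3, 3)) for i in range(1, 20)]
X = np.stack([spd_to_vector(P) for P in spd])
low_dim = X @ lpp_projection(X)                          # low-dim vectors
```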
“…Thus, it is an extension of LPP to the manifold Sym++(d), utilized to compute the low-dimensional Euclidean representatives (vectors). Similarly, in [26], an NPE-like procedure, again involving Lovric's positive definite embeddings, leads to a MAXDET optimization problem [30] that computes the neighborhood graph weights. Finally, the dimensionality reduction projection operator proposed in [26] is obtained in the same manner as in the LPP case.…”
Section: Introduction (mentioning)
confidence: 99%
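The excerpt says [26] derives the neighborhood graph weights from a MAXDET problem and then builds the projection as in LPP. For orientation, here is the standard Euclidean NPE of He et al., in which the weights instead come from regularized least-squares reconstruction of each point from its neighbors; treat it as a hedged stand-in for the SPD-cone construction, not the procedure of [26].

```python
import numpy as np
from scipy.linalg import eigh

def npe_projection(X, n_neighbors=5, out_dim=2, reg=1e-3):
    """Textbook NPE: least-squares reconstruction weights on a kNN graph,
    then the eigenproblem  X^T M X a = lam X^T X a  with
    M = (I - W)^T (I - W), solved for the smallest lam."""
    n, D = X.shape
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:n_neighbors + 1]
        Z = X[nbrs] - X[i]                      # neighbors centered at x_i
        G = Z @ Z.T + reg * np.eye(len(nbrs))   # regularized local Gram matrix
        w = np.linalg.solve(G, np.ones(len(nbrs)))
        W[i, nbrs] = w / w.sum()                # weights sum to one
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = eigh(X.T @ M @ X, X.T @ X + reg * np.eye(D))
    return vecs[:, :out_dim]                    # columns span the projection
```

In both LPP and NPE the final step is the same kind of generalized eigenproblem, which is why the excerpt can say the projection operator of [26] is obtained "in the same manner as in the LPP case"; only the graph-weight construction differs.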