2016
DOI: 10.1007/978-3-319-46466-4_27

A Distance for HMMs Based on Aggregated Wasserstein Metric and State Registration

Abstract: We propose a framework, named Aggregated Wasserstein, for computing a dissimilarity measure or distance between two Hidden Markov Models with state-conditional distributions being Gaussian. For such HMMs, the marginal distribution at any time spot follows a Gaussian mixture distribution, a fact exploited to softly match, i.e., register, the states in the two HMMs. We refer to such HMMs as Gaussian mixture model HMMs (GMM-HMMs). The registration of states is inspired by the intrinsic relationship of optimal transport a…
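For orientation, the following math block is a sketch in assumed notation (weight vectors π, π′ and Gaussian components φ_i, φ′_j are not quoted from the paper): the state-conditional Gaussians admit a closed-form 2-Wasserstein distance, and the soft registration described above amounts to an optimal transport between the two sets of mixture weights.

```latex
% Closed-form W_2 between Gaussians, and the aggregated (discrete) optimal
% transport over mixture weights; notation assumed here for illustration.
\[
W_2^2\big(\mathcal{N}(m_0,\Sigma_0),\,\mathcal{N}(m_1,\Sigma_1)\big)
  = \lVert m_0 - m_1\rVert^2
  + \operatorname{tr}\!\Big(\Sigma_0+\Sigma_1
    - 2\big(\Sigma_0^{1/2}\Sigma_1\Sigma_0^{1/2}\big)^{1/2}\Big)
\]
\[
\widetilde{W}_2^{\,2}(\mu,\nu)
  = \min_{w\in\Pi(\pi,\pi')}\ \sum_{i,j} w_{ij}\,W_2^2(\phi_i,\phi'_j)
\]
```

Here Π(π, π′) denotes the set of nonnegative coupling matrices whose row sums equal π and whose column sums equal π′.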

Cited by 9 publications (7 citation statements) · References 16 publications · Citation statements below are ordered by relevance.
“…Observe that the point of view followed here in our paper is quite different from these works, since MW_2 is defined in a completely continuous setting as an optimal transport between GMMs with a restriction on couplings, following the same kind of approach as in [3]. The fact that this restriction leads to an explicit discrete formula, the same as the one proposed independently in [8,9] and [6,7], is quite striking. Observe also that thanks to the "identifiability property" of GMMs, this continuous formulation (4.1) is obviously non-ambiguous, in the sense that the value of the minimum is the same whatever the parametrization of the Gaussian mixtures µ0 and µ1.…”
Section: Definition of
Citation type: mentioning (confidence: 84%)
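The explicit discrete formula mentioned in this statement can be computed directly. The following is a minimal sketch, not the authors' reference implementation: it assumes SciPy is available, solves the restricted coupling as a small transportation LP, and uses illustrative function names (`gaussian_w2_sq`, `aggregated_w2`).

```python
import numpy as np
from scipy.linalg import sqrtm
from scipy.optimize import linprog


def gaussian_w2_sq(m0, S0, m1, S1):
    """Squared 2-Wasserstein distance between N(m0, S0) and N(m1, S1) (Bures form)."""
    S0_half = np.real(sqrtm(S0))
    cross = np.real(sqrtm(S0_half @ S1 @ S0_half))
    bures = np.trace(S0 + S1 - 2.0 * cross)
    return float(np.sum((m0 - m1) ** 2) + bures)


def aggregated_w2(weights0, means0, covs0, weights1, means1, covs1):
    """min over couplings w of sum_ij w_ij * W_2^2(component_i, component_j),
    with the coupling's marginals fixed to the two mixture weight vectors."""
    K0, K1 = len(weights0), len(weights1)
    cost = np.array([[gaussian_w2_sq(means0[i], covs0[i], means1[j], covs1[j])
                      for j in range(K1)] for i in range(K0)])
    # Transportation LP over the K0*K1 entries of the coupling matrix (row-major).
    A_eq = np.zeros((K0 + K1, K0 * K1))
    for i in range(K0):
        A_eq[i, i * K1:(i + 1) * K1] = 1.0   # row sums equal weights0
    for j in range(K1):
        A_eq[K0 + j, j::K1] = 1.0            # column sums equal weights1
    b_eq = np.concatenate([weights0, weights1])
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return float(np.sqrt(res.fun))
```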
“…It happens that the discrete form (4.4), which can be seen as an aggregation of simple Wasserstein distances between Gaussians, has been recently proposed as an ingenious alternative to W_2 in the machine learning literature, both in [8,9] and [6,7]. Observe that the point of view followed here in our paper is quite different from these works, since MW_2 is defined in a completely continuous setting as an optimal transport between GMMs with a restriction on couplings, following the same kind of approach as in [3].…”
Section: Definition of
Citation type: mentioning (confidence: 99%)
“…In this article, we follow the terminology of Chen et al. (2016) and call $\widetilde{W}_2$ the MAW distance. The notation $GW$ was used by Delon and Desolneux (2020), which we do not adopt here to avoid confusion with the Gromov-Wasserstein distance (Peyré et al.…”
Section: Integrating Clustering Results by the MAW Barycenter of GMMs
Citation type: mentioning (confidence: 99%)
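A hedged sketch of the barycenter objective behind that section title, under the assumption that it follows the usual Wasserstein-barycenter template with weights λ_k and input GMMs μ_1, …, μ_m (not quoted from the cited article):

```latex
% Assumed barycenter objective: \lambda_k \ge 0, \sum_k \lambda_k = 1, and the
% minimization runs over GMMs \nu with a prescribed number of components.
\[
\bar{\mu} \in \operatorname*{arg\,min}_{\nu}\ \sum_{k=1}^{m}
  \lambda_k\,\widetilde{W}_2^{\,2}(\mu_k,\nu)
\]
```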
“…Constraining the class of joint distributions is a relaxation that has been done before (Bion-Nadal et al., 2019) due to the difficulty of considering arbitrary joint distributions. This metric, MW_2, appears in a few different sources in the literature (Chen et al., 2016; 2018; 2019) and has been studied theoretically (Delon & Desolneux, 2020); recently, implementations of this quantity have emerged.…”
Section: MW
Citation type: mentioning (confidence: 99%)
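To make the restricted-coupling idea concrete, here is a toy invocation of the illustrative `aggregated_w2` sketch given earlier; the two 2-component GMMs below are made up for demonstration only.

```python
import numpy as np
# Reuses the illustrative aggregated_w2 sketch from the earlier block.

w0 = np.array([0.6, 0.4])
w1 = np.array([0.5, 0.5])
m0 = [np.zeros(2), np.array([3.0, 0.0])]
m1 = [np.array([0.5, 0.0]), np.array([3.0, 1.0])]
C0 = [np.eye(2), 2.0 * np.eye(2)]
C1 = [np.eye(2), np.eye(2)]
print(aggregated_w2(w0, m0, C0, w1, m1, C1))  # coupling marginals fixed to w0, w1
```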