2020
DOI: 10.1093/imaiai/iaaa040

Clustering, factor discovery and optimal transport

Abstract: The clustering problem, and more generally latent factor discovery or latent space inference, is formulated in terms of the Wasserstein barycenter problem from optimal transport. The objective proposed is the maximization of the variability attributable to class, further characterized as the minimization of the variance of the Wasserstein barycenter. Existing theory, which constrains the transport maps to rigid translations, is extended to affine transformations. The resulting non-parametric clustering algorit…
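The abstract casts clustering as minimizing the variance of a Wasserstein barycenter, with transport maps restricted to rigid translations in earlier theory and extended to affine maps in this paper. Below is a minimal sketch of the translation-restricted case only, in which the barycenter variance reduces to the pooled within-cluster variance and its alternating minimization coincides with a k-means-type (Lloyd) iteration; the function names and the alternating scheme are illustrative assumptions, not the paper's algorithm, and the affine extension is not sketched here.

```python
import numpy as np

def barycenter_variance_translation_case(X, labels):
    """Variance of the barycenter when each cluster may only be rigidly
    translated: every cluster is shifted so its mean hits a common point,
    the barycenter is the pooled shifted sample, and its variance equals
    the pooled within-cluster variance (a k-means-type objective)."""
    total = 0.0
    for k in np.unique(labels):
        Xk = X[labels == k]
        total += np.sum((Xk - Xk.mean(axis=0)) ** 2)
    return total / len(X)

def cluster_by_translation_barycenter(X, n_clusters, n_iter=50, seed=0):
    """Toy alternating minimization of the quantity above: reassign points
    to the nearest cluster mean, then recompute the means, so that the
    translation-case barycenter variance decreases at each step."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=n_clusters, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # squared distances from every point to every current center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels, barycenter_variance_translation_case(X, labels)
```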

Cited by 1 publication (1 citation statement). References 19 publications.
“…Further, to relax the objective and stabilize the end-to-end training, we use the entropic optimal transport [51] distance defined in Eq. (8), where $\mu = \sum_{i=1}^{n} \delta_{x_i}$ is the empirical measure of the data instances, $\mu_a = \sum_{j=1}^{k} \delta_{a_j}$ is the empirical measure of the anchors, $d(\cdot,\cdot)$ is the Euclidean distance metric, $\pi$ is the transport plan, $\Pi(\cdot,\cdot)$ denotes the set of admissible transport plans (couplings) between the two measures, and the regularization coefficient is non-negative. As a metric for assigning samples to the best-fitting anchors, the optimal transport distance can yield a more reliable assignment than nearest-neighbor search [52]. Therefore, by optimizing $G_\theta(\cdot)$ to minimize $L_{ot} = OT_\theta(X_s, N_s) + OT_\theta(X_t, N_t)$ in the source and target domains independently, the extracted features in both domains become more compactly arranged around their radial structures, so the structure-faithfulness requirement is met indirectly.…”
Section: Radial Structure Enhancement
Citation type: mentioning (confidence: 99%)
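The excerpt assigns samples to anchors with an entropic optimal transport distance rather than a nearest-neighbor rule. Below is a minimal sketch of that idea using a plain Sinkhorn iteration in NumPy; the function and variable names are mine, not the cited paper's, and the uniform marginal weights and the exact form of their Eq. (8) are assumptions — the excerpt only specifies the Euclidean ground cost and an entropic regularizer.

```python
import numpy as np

def entropic_ot_assignment(X, A, eps=0.1, n_iter=200):
    """Entropic OT between the empirical measure mu on samples X (n x d)
    and the empirical measure mu_a on anchors A (k x d), via Sinkhorn.

    Returns the transport cost <pi, C> and a soft assignment of each
    sample to the anchors (each row of the plan, renormalized).
    Uniform marginal weights are an assumption, not from the excerpt."""
    n, k = len(X), len(A)
    a = np.full(n, 1.0 / n)            # weights of mu (data instances)
    b = np.full(k, 1.0 / k)            # weights of mu_a (anchors)
    # Euclidean ground-cost matrix d(x_i, a_j)
    C = np.linalg.norm(X[:, None, :] - A[None, :, :], axis=-1)
    K = np.exp(-C / eps)               # Gibbs kernel
    u, v = np.ones(n), np.ones(k)
    for _ in range(n_iter):            # Sinkhorn fixed-point updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]    # transport plan pi
    cost = np.sum(P * C)               # transport part of the OT objective
    soft_assign = P / P.sum(axis=1, keepdims=True)
    return cost, soft_assign

# Toy usage: 60 samples scattered around three anchors
rng = np.random.default_rng(0)
A = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.concatenate([a + 0.3 * rng.standard_normal((20, 2)) for a in A])
cost, assign = entropic_ot_assignment(X, A, eps=0.1)
labels = assign.argmax(axis=1)         # OT-based anchor assignment
```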