2019
DOI: 10.48550/arxiv.1911.08411
Preprint

Mixed-curvature Variational Autoencoders

Abstract: It has been shown that using geometric spaces with non-zero curvature instead of plain Euclidean spaces with zero curvature improves performance on a range of Machine Learning tasks for learning representations. Recent work has leveraged these geometries to learn latent variable models like Variational Autoencoders (VAEs) in spherical and hyperbolic spaces with constant curvature. While these approaches work well on the particular kinds of data that they were designed for, e.g. tree-like data for a hyperbolic VAE, …
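
As a concrete illustration of the idea in the abstract, here is a minimal sketch of a mixed-curvature latent space: a product of one hyperbolic, one spherical, and one Euclidean component, where each sample is drawn by pushing a tangent-space Gaussian through that component's exponential map. This is an illustrative assumption, not the authors' implementation; the component dimensions, curvatures, and the 0.3 scale are arbitrary choices.

```python
# Sketch (assumed, not the paper's code): sampling from a product of
# constant-curvature latent components via per-component exponential maps.
import numpy as np

def exp_poincare_origin(v, c):
    """Exponential map at the origin of the Poincare ball with curvature -c (c > 0)."""
    norm = np.linalg.norm(v)
    if norm == 0:
        return v
    return np.tanh(np.sqrt(c) * norm) * v / (np.sqrt(c) * norm)

def exp_sphere(mu, v, c):
    """Exponential map at mu on the sphere of curvature c > 0 (radius 1/sqrt(c)).
    v must lie in the tangent space at mu (v . mu == 0)."""
    norm = np.linalg.norm(v)
    if norm == 0:
        return mu
    r = 1.0 / np.sqrt(c)
    return np.cos(norm / r) * mu + r * np.sin(norm / r) * v / norm

rng = np.random.default_rng(0)

# Hyperbolic component: 2-d Poincare ball, curvature -1.
v_hyp = 0.3 * rng.standard_normal(2)
z_hyp = exp_poincare_origin(v_hyp, c=1.0)

# Spherical component: unit 2-sphere in R^3, base point = north pole.
mu = np.array([0.0, 0.0, 1.0])
u = 0.3 * rng.standard_normal(3)
u -= u.dot(mu) * mu                 # project into the tangent space at mu
z_sph = exp_sphere(mu, u, c=1.0)

# Euclidean component: the exponential map is the identity.
z_euc = 0.3 * rng.standard_normal(2)

# Product latent: concatenation of the per-component coordinates.
z = np.concatenate([z_hyp, z_sph, z_euc])
print(z.shape)  # (7,)
```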

Cited by 9 publications (21 citation statements)
References 20 publications

Citation statements:
“…There are many isotropy invariant potentials on the sphere. Natural choices include the uniform density (which is invariant to all rotations) and the wrapped distribution with the center at v [29,35]. For our experiments, we use the uniform density.…”
Section: Invariant Potential Parameterization
confidence: 99%
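
The two potentials named in the excerpt above are easy to sample on the unit 2-sphere. The sketch below is an assumption for illustration, not code from the cited papers: a uniform sample via a normalized isotropic Gaussian (rotation-invariant), and a wrapped sample obtained by pushing a tangent-space Gaussian at the center v through the exponential map.

```python
# Sketch: uniform and wrapped sampling on the unit 2-sphere (assumed, illustrative).
import numpy as np

rng = np.random.default_rng(0)

def sample_uniform_sphere(dim=3):
    """Uniform on S^{dim-1}: normalise an isotropic Gaussian."""
    x = rng.standard_normal(dim)
    return x / np.linalg.norm(x)

def sample_wrapped_sphere(v, scale=0.2):
    """Wrapped sample centred at unit vector v: Gaussian in the tangent
    space at v, mapped onto the sphere with the exponential map."""
    u = scale * rng.standard_normal(v.shape)
    u -= u.dot(v) * v                 # project into the tangent space at v
    t = np.linalg.norm(u)
    if t == 0:
        return v
    return np.cos(t) * v + np.sin(t) * u / t

v = np.array([0.0, 0.0, 1.0])
print(sample_uniform_sphere(), sample_wrapped_sphere(v))
```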
“…A mismatch between the latent prior distribution of a VAE and the data manifold structure can lead to over-regularization and poor data representation. Other models incorporate more complex latent structures such as hyperspheres [12], tori [13,14], and hyperboloids [15,16,17]. These models have demonstrated their suitability for certain datasets by choosing a prior that is the best match out of a predefined set of candidates.…”
Section: Table 1: Comparison of VAE Techniques
confidence: 99%
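
The three candidate latent supports named in this excerpt can each be sampled with a few lines of code. The sketch below is an assumption for illustration; in particular, the Gaussian lift used for the hyperboloid is a simple construction, not the wrapped normal of the cited works.

```python
# Sketch: base samples on a hypersphere, a flat torus, and a hyperboloid
# (assumed, illustrative constructions).
import numpy as np

rng = np.random.default_rng(0)

def sample_hypersphere(dim):
    """Uniform on S^{dim-1}: normalised isotropic Gaussian."""
    x = rng.standard_normal(dim)
    return x / np.linalg.norm(x)

def sample_torus():
    """Uniform on the flat torus S^1 x S^1, as two angles in [0, 2*pi)."""
    return rng.uniform(0.0, 2.0 * np.pi, size=2)

def lift_to_hyperboloid(x):
    """Lift x in R^n onto the hyperboloid {z : z_0^2 - ||z_1:||^2 = 1, z_0 > 0},
    so a Gaussian on R^n induces a simple distribution on the hyperboloid."""
    z0 = np.sqrt(1.0 + x.dot(x))
    return np.concatenate([[z0], x])

print(sample_hypersphere(3))
print(sample_torus())
print(lift_to_hyperboloid(rng.standard_normal(2)))
```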
“…However, it has recently been shown that, for datasets with hierarchical structure, there are clear advantages to instead using latent variables that have support on the Poincaré disk. This observation has motivated the Poincaré VAE (PVAE, Mathieu et al. (2019)), the Poincaré Wasserstein auto-encoder (Ovinnikov, 2019), the Adversarial PVAE (Dai et al., 2020), and the Mixed-curvature VAE (Skopek et al., 2019), among others.…”
Section: Poincaré Disk
confidence: 99%
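
The operation that keeps latent samples supported on the Poincaré disk is the exponential map. The sketch below uses the standard Möbius-addition form of the map for curvature -1; it is a generic textbook construction, not code from any of the cited papers.

```python
# Sketch: Mobius addition and the exponential map on the Poincare disk
# (standard formulas for curvature -1; assumed, illustrative).
import numpy as np

def mobius_add(x, y):
    """Mobius addition on the Poincare ball with curvature -1."""
    xy, xx, yy = x.dot(y), x.dot(x), y.dot(y)
    num = (1 + 2 * xy + yy) * x + (1 - xx) * y
    den = 1 + 2 * xy + xx * yy
    return num / den

def exp_map(x, v):
    """Exponential map at x: pushes a tangent vector v to a point on the
    disk along the geodesic through x."""
    norm = np.linalg.norm(v)
    if norm == 0:
        return x
    lam = 2.0 / (1.0 - x.dot(x))          # conformal factor at x
    return mobius_add(x, np.tanh(lam * norm / 2.0) * v / norm)

x = np.array([0.2, 0.1])                  # base point inside the unit disk
v = np.array([0.5, -0.3])                 # tangent vector at x
z = exp_map(x, v)
print(z, np.linalg.norm(z) < 1.0)         # the sample stays inside the disk
```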