2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00307
Bayes-Factor-VAE: Hierarchical Bayesian Deep Auto-Encoder Models for Factor Disentanglement

Abstract: We propose a family of novel hierarchical Bayesian deep auto-encoder models capable of identifying disentangled factors of variability in data. While many recent attempts at factor disentanglement have focused on sophisticated learning objectives within the VAE framework, their choice of a standard normal as the latent factor prior is both suboptimal and detrimental to performance. Our key observation is that the disentangled latent variables responsible for major sources of variability, the relevant factors, …
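As a rough illustration of the idea sketched in the abstract, the snippet below implements a hierarchical latent prior in which each Gaussian prior variance carries an inverse-Gamma hyper-prior, so relevant factors are not forced toward a standard normal. This is only a minimal sketch under assumed hyper-parameters (`alpha0`, `beta0`) and assumed class and method names, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class HierarchicalGaussianPrior(nn.Module):
    """Latent prior p(z_j | s_j) = N(0, s_j) with hyper-prior p(s_j) = Inv-Gamma(alpha0, beta0)."""

    def __init__(self, latent_dim, alpha0=3.0, beta0=3.0):
        super().__init__()
        # Variational parameters of q(s_j) (one inverse-Gamma per latent dimension),
        # stored as logs so alpha_j, beta_j stay positive during optimisation.
        self.log_alpha = nn.Parameter(torch.full((latent_dim,), alpha0).log())
        self.log_beta = nn.Parameter(torch.full((latent_dim,), beta0).log())
        self.alpha0, self.beta0 = alpha0, beta0

    def kl_terms(self, mu, logvar):
        """Return (KL(q(z|x) || p(z|s)) averaged over q(s), KL(q(s) || p(s)))."""
        alpha, beta = self.log_alpha.exp(), self.log_beta.exp()
        inv_s = alpha / beta                             # E_q(s)[1 / s_j]
        log_s = torch.log(beta) - torch.digamma(alpha)   # E_q(s)[log s_j]
        var = logvar.exp()
        # Per-example KL of the encoder posterior against the random-variance prior.
        kl_z = 0.5 * ((var + mu ** 2) * inv_s + log_s - logvar - 1.0).sum(dim=1)
        # KL between two inverse-Gamma distributions, one per latent dimension;
        # this term is added once to the overall objective, not per example.
        a0, b0 = torch.tensor(self.alpha0), torch.tensor(self.beta0)
        kl_s = ((alpha - a0) * torch.digamma(alpha) - torch.lgamma(alpha) + torch.lgamma(a0)
                + a0 * (torch.log(beta) - torch.log(b0)) + alpha * (b0 - beta) / beta)
        return kl_z, kl_s.sum()
```

Both KL terms remain in closed form, which is what keeps learning and inference as tractable as in a standard VAE.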

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
3
1
1

Citation Types

0
15
0

Year Published

2020
2020
2023
2023

Publication Types

Select...
4
2
2
1

Relationship

0
9

Authors

Journals

citations
Cited by 21 publications
(15 citation statements)
references
References 10 publications
0
15
0
Order By: Relevance
“…To this end, we apply a one-to-one dependency structure to the observed variables and the latent code, as shown in Figure 4a. Beyond a visual, qualitative validation, we quantify the disentanglement capacity of our approach by using the dSprites dataset, following the setup of [Kim and Mnih, 2017]. The disentanglement scores calculated for InfoGAN and our approach across various IRs are reported in Figure 4b.…”
Section: Methods
confidence: 99%
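For readers unfamiliar with the metric referenced in the excerpt above, the following is a rough sketch of the Kim-and-Mnih-style majority-vote disentanglement score commonly used on dSprites. The helper callables `encode` and `sample_batch_with_fixed_factor`, the batch sizes, and the single-pass evaluation are illustrative assumptions, not the citing authors' code.

```python
import numpy as np

def disentanglement_score(encode, sample_batch_with_fixed_factor,
                          num_factors, latent_dim,
                          num_votes=800, batch_size=64, rng=None):
    """Majority-vote disentanglement metric in the spirit of Kim and Mnih."""
    if rng is None:
        rng = np.random.default_rng(0)

    # Rough per-dimension scale of the latent code, estimated from batches with
    # no factor held fixed (passing None to the sampler by the convention assumed here).
    reference = encode(sample_batch_with_fixed_factor(None, 10 * batch_size, rng))
    global_std = reference.std(axis=0) + 1e-8

    votes = np.zeros((latent_dim, num_factors), dtype=int)
    for _ in range(num_votes):
        k = int(rng.integers(num_factors))        # ground-truth factor held fixed
        z = encode(sample_batch_with_fixed_factor(k, batch_size, rng))
        d = int(np.argmin(np.var(z / global_std, axis=0)))  # least-varying latent dim
        votes[d, k] += 1

    # Each latent dimension "votes" for the factor it most often identifies; the
    # score is the accuracy of this majority-vote classifier (evaluated on the
    # same votes here for brevity, rather than on a held-out split).
    return votes.max(axis=1).sum() / votes.sum()
```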
“…InfoGAN [Chen et al., 2016], where the mutual information between the observed variables and the extracted latent feature subsets is maximized to realize the disentanglement. In addition to a set of subsequent studies such as [Tran et al., 2017; Lee et al., 2020], we are aware of two recent works [Kim et al., 2019; Beyazit et al., 2020] that also respect the dependency structure among variables in the latent space with a Bayesian treatment. However, those works aim to extract various types of salient features, whereas our task is imbalanced classification.…”
Section: Related Work
confidence: 99%
“…Neural 3D Point Cloud Generation. While 2D image generation has been widely investigated using GANs (Isola et al. 2017; Zhu et al. 2017) and VAEs (Kingma and Welling 2014; Higgins et al. 2016; Kim et al. 2019b; Sohn, Lee, and Yan 2015), neural 3D point cloud generation has only been explored in recent years. Achlioptas et al. (2018) first proposed the r-GAN to generate 3D point clouds, with fully connected layers as the generator.…”
Section: Related Work
confidence: 99%
“…Therefore, we additionally use long-tail distributions to model the relevant factors, i.e., the disentangled latent variables responsible for major sources of variability. Specifically, the VAE is extended to a hierarchical Bayesian model by introducing hyper-priors on the variances of the Gaussian latent priors, while maintaining the tractable learning and inference of traditional VAEs [11]. For relevant factors, it is necessary to have p(z_j) different from N(0, 1).…”
Section: Representation With Improved Disentanglement
confidence: 99%
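To make the last point concrete, the standard Gaussian scale-mixture identity (shown here as general background, with illustrative hyper-parameters \alpha, \beta rather than the exact choice in [11]) shows how a hyper-prior on the variance turns the marginal prior p(z_j) into a heavy-tailed distribution that differs from N(0, 1):

```latex
% Placing an inverse-Gamma hyper-prior on the variance s_j of a zero-mean
% Gaussian latent prior and integrating s_j out yields a Student-t marginal,
% i.e. a long-tailed p(z_j) different from N(0,1).
\begin{aligned}
p(z_j) &= \int_{0}^{\infty} \mathcal{N}\!\left(z_j \mid 0,\, s_j\right)\,
          \operatorname{IG}\!\left(s_j \mid \alpha, \beta\right)\, \mathrm{d}s_j \\
       &= \frac{\Gamma\!\left(\alpha + \tfrac{1}{2}\right)}
               {\Gamma(\alpha)\,\sqrt{2\pi\beta}}
          \left(1 + \frac{z_j^{2}}{2\beta}\right)^{-\left(\alpha + \frac{1}{2}\right)} .
\end{aligned}
```

This is a Student-t density with 2\alpha degrees of freedom; smaller \alpha gives heavier tails, so the relevant factors are not squeezed toward a standard normal prior.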