2021
DOI: 10.48550/arxiv.2104.05915
Preprint
Revisiting Bayesian Autoencoders with MCMC

Rohitash Chandra,
Mahir Jain,
Manavendra Maharana
et al.

Abstract: Autoencoders gained popularity in the deep learning revolution given their ability to compress data and provide dimensionality reduction. Although prominent deep learning methods have been used to enhance autoencoders, the need to provide robust uncertainty quantification remains a challenge. This has so far been addressed with variational autoencoders. Bayesian inference via MCMC methods has faced limitations, but recent advances with parallel computing and advanced proposal schemes that incorporate gradients…
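The abstract's pairing of autoencoders with gradient-based MCMC proposals can be sketched in a few lines. This is a minimal illustration, not the paper's method: a linear autoencoder with a one-unit bottleneck whose weights are sampled with Metropolis-adjusted Langevin (MALA); the synthetic data, likelihood noise scale `TAU`, prior scale `SIGMA`, and step size `eps` are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 2-D points near a 1-D subspace, so a single-unit
# bottleneck linear autoencoder can reconstruct them well.
X = rng.normal(size=(200, 1)) @ np.array([[2.0, -1.0]]) \
    + 0.05 * rng.normal(size=(200, 2))

TAU, SIGMA = 0.5, 5.0  # assumed likelihood noise and prior scale

def unpack(theta):
    # theta holds encoder W1 (1x2) and decoder W2 (2x1).
    return theta[:2].reshape(1, 2), theta[2:].reshape(2, 1)

def log_post(theta):
    # Gaussian likelihood on reconstructions + Gaussian prior on weights.
    W1, W2 = unpack(theta)
    R = (X @ W1.T) @ W2.T - X
    return -0.5 * np.sum(R**2) / TAU**2 - 0.5 * np.sum(theta**2) / SIGMA**2

def grad_log_post(theta):
    W1, W2 = unpack(theta)
    Z = X @ W1.T
    R = Z @ W2.T - X
    gW2 = R.T @ Z            # d(0.5 * sum R^2) / dW2
    gW1 = (R @ W2).T @ X     # d(0.5 * sum R^2) / dW1
    g = np.concatenate([gW1.ravel(), gW2.ravel()])
    return -g / TAU**2 - theta / SIGMA**2

def mala(theta, n_samples=2000, eps=2e-3):
    # Metropolis-adjusted Langevin: drift along the posterior gradient,
    # add Gaussian noise, then accept/reject with a Metropolis-Hastings step.
    samples = []
    lp, g = log_post(theta), grad_log_post(theta)
    for _ in range(n_samples):
        mu = theta + 0.5 * eps**2 * g
        prop = mu + eps * rng.normal(size=theta.size)
        lp_p, g_p = log_post(prop), grad_log_post(prop)
        mu_p = prop + 0.5 * eps**2 * g_p
        # Correction for the asymmetric Langevin proposal density.
        log_q_fwd = -np.sum((prop - mu)**2) / (2 * eps**2)
        log_q_rev = -np.sum((theta - mu_p)**2) / (2 * eps**2)
        if np.log(rng.random()) < lp_p - lp + log_q_rev - log_q_fwd:
            theta, lp, g = prop, lp_p, g_p
        samples.append(theta.copy())
    return np.array(samples)

chain = mala(0.1 * rng.normal(size=4))
```

Averaging reconstructions over the chain's samples, rather than using a single weight vector, is what yields the uncertainty quantification the abstract refers to: the spread of the per-sample reconstructions estimates predictive uncertainty.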


Cited by 2 publications (2 citation statements)
References 69 publications (100 reference statements)
“…Tran et al [9] observed that BAE-generated data have higher uncertainty than the reconstructed data. Chandra et al [26] studied the BAE with Markov Chain Monte Carlo (MCMC) sampling for dimensionality reduction and classification tasks.…”
Section: Background and Related Work
confidence: 99%
“…A major limitation of MCMC sampling techniques is the high computational complexity of sampling from the posterior distribution [31,32]. Recently, there has been much progress in MCMC sampling via gradient-based proposals and parallel computing in Bayesian deep learning [33,34,35]. However, these advances have mostly been limited to quantifying uncertainty in model parameters (weights) rather than quantifying uncertainty in data or addressing class-imbalance problems.…”
Section: Introduction
confidence: 99%