2020
DOI: 10.48550/arxiv.2005.07186
Preprint

Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors

Cited by 4 publications (8 citation statements)
References 0 publications
“…Although a few different priors have been proposed for BNNs, these were mostly designed for specific tasks (Atanov et al., 2018; Ghosh & Doshi-Velez, 2017; Overweg et al., 2019; Nalisnick, 2018; Cui et al., 2020; Hafner et al., 2020) or relied heavily on non-standard inference methods (Sun et al., 2019; Ma et al., 2019; Karaletsos & Bui, 2020; Pearce et al., 2020). Moreover, while many interesting distributions have been proposed as variational posteriors for BNNs (Louizos & Welling, 2017; Swiatkowski et al., 2020; Dusenberry et al., 2020; Aitchison et al., 2020), these approaches have still used Gaussian priors.…”
Section: Related Work
confidence: 99%
“…However, the most common choice of the prior for BNN weights is the simplest one: the isotropic Gaussian. Isotropic Gaussians are used across almost all fields of Bayesian deep learning, ranging from variational inference (Blundell et al., 2015; Dusenberry et al., 2020), to sampling-based inference (Zhang et al., 2019), and even to infinite networks (Lee et al., 2017; Garriga-Alonso et al., 2019). This is troubling, since isotropic Gaussian priors are almost certainly not the best choice.…”
Section: Introduction
confidence: 99%
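The isotropic Gaussian prior criticized in the statement above is simple enough to state in a few lines. Below is a minimal sketch, assuming a single fully connected layer; the function name, shapes, and sigma value are illustrative, not drawn from any of the cited papers.

```python
# Minimal sketch (hypothetical names): the standard isotropic Gaussian
# prior over one BNN layer's weights. Every weight is an independent
# N(0, sigma^2) draw, with no dependence on layer width or structure.
import numpy as np

def sample_isotropic_gaussian_weights(fan_in, fan_out, sigma=1.0, rng=None):
    """Draw one weight matrix from the prior p(W) = N(0, sigma^2 I)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return rng.normal(loc=0.0, scale=sigma, size=(fan_in, fan_out))

W = sample_isotropic_gaussian_weights(784, 256)
```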
“…The strengths of these two approaches need to be combined to remedy their common problems. Dusenberry et al. [273] devised a rank-1 parameterization of BNNs and also utilized mixture approximate posteriors to capture multiple modes. Rank-1 BNNs demonstrated state-of-the-art performance on out-of-distribution variants, calibration on the test sets, accuracy, and log-likelihood.…”
Section: Other UQ Techniques
confidence: 99%
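The rank-1 parameterization summarized above admits a compact sketch: the shared weight matrix stays deterministic, and only two rank-1 vectors carry the (mixture) posterior uncertainty, rescaling the weights elementwise. The NumPy illustration below assumes diagonal Gaussian mixture components; all names and shapes are hypothetical, not Dusenberry et al.'s reference implementation.

```python
# Minimal sketch of a rank-1 BNN layer (after Dusenberry et al., 2020):
# W is a shared deterministic weight matrix; uncertainty lives only in
# the rank-1 vectors r and s, whose outer product rescales W elementwise.
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out, n_mix = 256, 128, 4           # n_mix: mixture components

W = rng.normal(size=(fan_in, fan_out))          # shared deterministic weights
# Per-component diagonal Gaussian posteriors over the rank-1 factors.
mu_r, sd_r = rng.normal(size=(n_mix, fan_in)), 0.1 * np.ones((n_mix, fan_in))
mu_s, sd_s = rng.normal(size=(n_mix, fan_out)), 0.1 * np.ones((n_mix, fan_out))

def sample_rank1_weights():
    """Pick a mixture component, draw r and s from its Gaussians,
    and form the perturbed weights W * (r s^T)."""
    k = rng.integers(n_mix)
    r = mu_r[k] + sd_r[k] * rng.standard_normal(fan_in)
    s = mu_s[k] + sd_s[k] * rng.standard_normal(fan_out)
    return W * np.outer(r, s)                   # elementwise rank-1 rescaling

W_sample = sample_rank1_weights()
```

Because only r and s are stochastic, drawing a fresh network costs one elementwise rescaling of W rather than a full weight sample, which is where the "efficient and scalable" claim in the paper's title comes from.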
“…Variational methods for BNNs (Peterson, 1987; Hinton and Van Camp, 1993; Blundell et al., 2015) differ in their choices of prior and belief distributions and inference algorithm. This includes hierarchical priors (Louizos and Welling, 2016; Ghosh and Doshi-Velez, 2017), data priors (Louizos and Welling, 2016; Hafner et al., 2019b; Sun et al., 2019), flexible posteriors (Louizos and Welling, 2016; Sun et al., 2017; Louizos and Welling, 2017; Zhang et al., 2018; Chang et al., 2019), low-rank posteriors (Izmailov et al., 2018; Dusenberry et al., 2020), and improved inference algorithms (Wen et al., 2018; Immer et al., 2020). BNNs have been leveraged for RL for robustness (Okada et al., 2020; Tran et al., 2019) and exploration (Houthooft et al., 2016; Azizzadenesheli et al., 2018).…”
Section: Variational Inference
confidence: 99%
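Whatever their choice of prior and posterior, the variational methods listed above all optimize some variant of the evidence lower bound. As a minimal sketch, assuming diagonal Gaussian posterior and prior (so the KL term has a closed form) and using a placeholder for the Monte Carlo likelihood estimate:

```python
# Minimal sketch of the negative ELBO shared by variational BNN methods:
# expected negative log-likelihood plus KL(q || p). The closed-form KL
# below assumes both q and the prior p are diagonal Gaussians.
import numpy as np

def gaussian_kl(mu_q, sd_q, mu_p=0.0, sd_p=1.0):
    """KL( N(mu_q, sd_q^2) || N(mu_p, sd_p^2) ), summed over weights."""
    return np.sum(np.log(sd_p / sd_q)
                  + (sd_q**2 + (mu_q - mu_p)**2) / (2 * sd_p**2) - 0.5)

mu_q, sd_q = np.zeros(10), 0.5 * np.ones(10)
expected_nll = 1.23   # placeholder for a Monte Carlo likelihood estimate
neg_elbo = expected_nll + gaussian_kl(mu_q, sd_q)
```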