2021
DOI: 10.48550/arxiv.2106.11905
Preprint

Dangers of Bayesian Model Averaging under Covariate Shift

Cited by 3 publications (7 citation statements)
References 0 publications
“…Finally, Izmailov et al. (2021a) show that BNNs have issues under covariate shift, due to the posterior not contracting sufficiently along some directions in the parameter space. The same issue occurs when BNNs are applied to extremely small datasets, which may affect the results on data subsampling presented in Noci et al. (2021).…”
Section: Discussion
Confidence: 97%
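To make the non-contraction point concrete, here is a minimal sketch of the underlying argument in a Bayesian linear model (the prior scale α and noise scale σ below are illustrative symbols, not values from the paper):

    y = w^\top x + \epsilon, \quad \epsilon \sim \mathcal{N}(0, \sigma^2), \quad w \sim \mathcal{N}(0, \alpha^2 I)
    \Sigma_{\mathrm{post}} = \left( \tfrac{1}{\sigma^2} X^\top X + \tfrac{1}{\alpha^2} I \right)^{-1}

If a unit direction v satisfies Xv = 0 (the training inputs never vary along v), then v^T Σ_post v = α²: the posterior along v equals the prior, no matter how much data is observed. A MAP estimate with weight decay instead drives the component of w along v to zero, so a shifted test input x* with v^T x* ≠ 0 injects prior variance into the BMA predictive while leaving the MAP prediction unchanged.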
“…However, there are practical challenges to the adoption of Bayesian deep learning. For example, Izmailov et al. (2021a) show that Bayesian neural networks can profoundly degrade in performance under a wide range of relatively minor distribution shifts - behaviour which could affect applicability on virtually any real-world problem, since training and test data rarely come from exactly the same distribution. While their EmpCov prior provides a partial remedy, there is still much work to be done.…”
Section: Discussion
Confidence: 99%
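For reference, the EmpCov prior mentioned above replaces the usual isotropic Gaussian prior on first-layer weights with one aligned to the empirical covariance of the inputs. A minimal NumPy sketch, assuming the scale parameters alpha and eps are free hyperparameters (the names and defaults here are illustrative, not the paper's settings):

    import numpy as np

    def empcov_prior_covariance(X, alpha=1.0, eps=1e-3):
        """Covariance of an EmpCov-style Gaussian prior on first-layer weights.

        X: (n, d) matrix of training inputs. The prior N(0, alpha*Sigma + eps*I)
        places little mass along directions in which the training data do not
        vary, so weights along those directions stay near zero under BMA.
        """
        sigma = np.cov(X, rowvar=False)          # (d, d) empirical input covariance
        return alpha * sigma + eps * np.eye(X.shape[1])

    # Example: sample one first-layer weight vector from the prior.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(100, 5))
    cov = empcov_prior_covariance(X_train)
    w = rng.multivariate_normal(np.zeros(5), cov)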
“…A popular way to overcome these blemishes is by quantifying (epistemic) uncertainty by aggregating multiple predictions by networks in the Bayesian Model Averaging framework (Jeffreys, 1998; Wilson & Izmailov, 2020), using variational methods (Gal & Ghahramani, 2016; Blundell et al., 2015), ensembling (Lakshminarayanan et al., 2017) or mixtures of the two (Pearce et al., 2020; Wilson & Izmailov, 2020). Nevertheless, many of these methods have been shown not to produce diverse predictions (Wilson & Izmailov, 2020; Fort et al., 2019) and to deliver subpar performance and potentially misleading uncertainty estimates under distributional shift (Ovadia et al., 2019; Masegosa, 2019; Wenzel et al., 2020; Izmailov et al., 2021a;b), raising doubts about their efficacy.…”
Section: Related Work
Confidence: 99%
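As context for the Bayesian Model Averaging framework this excerpt refers to, a minimal sketch of the Monte Carlo approximation to the BMA predictive, assuming each member exposes a scikit-learn-style predict_proba (an illustrative interface, not from the paper):

    import numpy as np

    def bma_predict(members, x, weights=None):
        """Approximate p(y | x, D) = ∫ p(y | x, w) p(w | D) dw by averaging.

        members: models whose parameters are posterior samples (or, for deep
        ensembles, independently trained networks). Equal weights recover the
        standard Monte Carlo / ensemble average.
        """
        probs = np.stack([m.predict_proba(x) for m in members])  # (M, n, C)
        if weights is None:
            weights = np.full(len(members), 1.0 / len(members))
        return np.einsum('m,mnc->nc', weights, probs)            # (n, C)

With posterior samples the average approximates the BMA integral; with independently trained networks it reduces to the ensembling of Lakshminarayanan et al. (2017) cited in the excerpt.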