2019
DOI: 10.48550/arxiv.1906.11537
Preprint
'In-Between' Uncertainty in Bayesian Neural Networks

Cited by 12 publications (32 citation statements)
References 0 publications
“…The Bayesian predictive of the model with a better marginal likelihood has increasing uncertainty away from the data, as is often desired (Foong et al., 2019). The parameters found by our online algorithm give rise to a Bayesian predictive without further tuning of parameters after training, which is required when using the Laplace approximation (Ritter et al., 2018; Kristiadi et al., 2020).…”
Section: C1 Illustrative Examples
confidence: 95%
“…We compare our method to cross-validation on eight UCI (Dua & Graff, 2017) regression datasets, following Hernández-Lobato & Adams (2015) and Foong et al. (2019). In this setup, each dataset is split into 90% training and 10% testing data, and a neural network with a single hidden layer, 50 neurons, and a ReLU activation is used.…”
Section: UCI Regression
confidence: 99%
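The experimental setup quoted above can be sketched in a few lines. This is a minimal NumPy illustration, not the cited implementation: the random data here is a stand-in for a real UCI dataset, the dataset dimensions are invented, and only the 90/10 split and the one-hidden-layer 50-neuron ReLU architecture follow the description in the statement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a UCI regression dataset (shape chosen for illustration only;
# real experiments load data from the UCI repository).
X = rng.normal(size=(506, 13))
y = rng.normal(size=506)

# 90% training / 10% testing split, as described in the cited setup.
n_train = int(0.9 * len(X))
perm = rng.permutation(len(X))
X_train, X_test = X[perm[:n_train]], X[perm[n_train:]]
y_train, y_test = y[perm[:n_train]], y[perm[n_train:]]

# Single hidden layer with 50 neurons and a ReLU activation
# (forward pass only; training procedure is not specified here).
W1 = rng.normal(scale=0.1, size=(X.shape[1], 50))
b1 = np.zeros(50)
W2 = rng.normal(scale=0.1, size=(50, 1))
b2 = np.zeros(1)

def forward(x):
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer
    return (h @ W2 + b2).squeeze(-1)  # scalar regression output

preds = forward(X_test)
```

In a Bayesian treatment of this network, the point-estimate weights above would be replaced by a (approximate) posterior over `W1, b1, W2, b2`; the sketch only fixes the architecture and data split.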
“…In this section we provide more details on each of the modelling steps along the example of a synthetic one-dimensional non-linear regression dataset. We use the setup from Foong et al. (2019) with two clusters of inputs x₁ ∼ U[−1, −0.7], x₂ ∼ U[0.5, 1] and targets y ∼ N(cos(4x + 0.8), 0.1²). The shaded area indicates up to three standard deviations from the predictive mean.…”
Section: TyXe by Example: Non-linear Regression
confidence: 99%
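The synthetic dataset described in this statement is easy to reproduce. The sketch below follows the stated distributions; the number of points per cluster and the random seed are assumptions, as the statement does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two input clusters with a gap in between, per the setup attributed
# to Foong et al. (2019). Cluster sizes (50 each) are an assumption.
n = 50
x1 = rng.uniform(-1.0, -0.7, size=n)
x2 = rng.uniform(0.5, 1.0, size=n)
x = np.concatenate([x1, x2])

# Targets: y ~ N(cos(4x + 0.8), 0.1^2), i.e. a cosine mean function
# with homoscedastic Gaussian noise of standard deviation 0.1.
y = np.cos(4.0 * x + 0.8) + 0.1 * rng.normal(size=x.shape)
```

The gap between the clusters, x ∈ (−0.7, 0.5), is precisely the "in-between" region where the cited paper examines whether a Bayesian neural network's predictive uncertainty grows away from the data.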