2021
DOI: 10.48550/arxiv.2104.03436
Preprint

Synthetic Likelihood in Misspecified Models: Consequences and Corrections

David T. Frazier, Christopher Drovandi, David J. Nott

Abstract: We analyse the behaviour of the synthetic likelihood (SL) method when the model generating the simulated data differs from the actual data generating process. One of the most common methods to obtain SL-based inferences is via the Bayesian posterior distribution, with this method often referred to as Bayesian synthetic likelihood (BSL). We demonstrate that when the model is misspecified, the BSL posterior can be poorly behaved, placing significant posterior mass on values of the model parameters that do not re…
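For orientation, the synthetic-likelihood idea summarised in the abstract can be sketched as follows: simulate from the model at a candidate parameter value, fit a Gaussian to the simulated summary statistics, and score the observed summaries under that Gaussian, plugging this estimated log-likelihood into a Metropolis sampler to obtain the BSL posterior. The sketch below is illustrative only, not the authors' implementation; the toy simulator, summary statistics, prior, and tuning constants are all hypothetical choices.

```python
# Minimal BSL sketch (illustrative toy example, not the paper's code).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

def simulate(theta, n_obs=100):
    # Hypothetical toy simulator: iid draws from N(theta, 1).
    return rng.normal(loc=theta, scale=1.0, size=n_obs)

def summaries(y):
    # Summary statistics: sample mean and log sample variance.
    return np.array([y.mean(), np.log(y.var(ddof=1))])

def synthetic_loglik(theta, s_obs, m=200):
    # Estimate the mean and covariance of the summaries by simulation,
    # then evaluate a Gaussian log-density at the observed summaries.
    sims = np.array([summaries(simulate(theta)) for _ in range(m)])
    mu = sims.mean(axis=0)
    Sigma = np.cov(sims, rowvar=False)
    return multivariate_normal.logpdf(s_obs, mean=mu, cov=Sigma)

def bsl_mcmc(s_obs, n_iter=2000, step=0.2, theta0=0.0):
    # Random-walk Metropolis targeting the BSL posterior under a flat prior.
    theta, ll = theta0, synthetic_loglik(theta0, s_obs)
    draws = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        ll_prop = synthetic_loglik(prop, s_obs)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        draws.append(theta)
    return np.array(draws)

# "Observed" data generated from the toy model at theta = 1, for illustration.
s_obs = summaries(rng.normal(loc=1.0, scale=1.0, size=100))
posterior_draws = bsl_mcmc(s_obs)
print("posterior mean:", posterior_draws.mean())
```

The paper's point is that when the simulator cannot reproduce the observed summaries at any parameter value, a posterior obtained this way can concentrate on misleading parameter values, which is the failure mode the quoted citing work also encountered.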

Cited by 1 publication (3 citation statements); references 18 publications.

Citation statements:
“…The posterior distribution allows us to quantify the uncertainty in our model parameter estimates, in contrast to previous work using genetic algorithms 27,28 . As has been observed in other contexts [29][30][31][32] , we found that SBI failed for our HH model and our data set due to a small but systematic mismatch between the data and the model, which could not be easily remedied by standard modifications to the HH model. We developed an algorithm that introduces noise to the summary statistics during training, which allowed it to perform reliable inference despite the model misspecification.…”
Section: Introduction (supporting; confidence: 72%)
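The noise-injection fix described in the statement above can be sketched, under stated assumptions, as perturbing each simulated summary vector before it is used to train the inference model. The relative noise scale and the commented-out estimator API below are hypothetical placeholders, not the cited paper's implementation.

```python
# Hedged sketch: add noise to simulated summary statistics before training
# a simulation-based-inference model (placeholder scale and training call).
import numpy as np

rng = np.random.default_rng(1)

def add_summary_noise(summary_matrix, rel_scale=0.1):
    # Perturb each summary with Gaussian noise proportional to its
    # across-simulation standard deviation, so the noise respects units.
    sd = summary_matrix.std(axis=0, keepdims=True)
    return summary_matrix + rel_scale * sd * rng.standard_normal(summary_matrix.shape)

# Usage: given (theta, s) training pairs produced by a simulator, train the
# conditional density estimator on the noised summaries instead of s itself.
# noisy_s = add_summary_noise(s)
# posterior_estimator.fit(theta, noisy_s)   # hypothetical estimator API
```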
“…1b, c) -on average 18% of posterior-sampled parameter sets produced simulations with at least one undefined summary feature, such as undefined latency due to a lack of action potentials (Table S1, standard NPE). Therefore, we further investigated the reason for this failure and found that the poor performance of the SBI pipeline was due to a systematic mismatch between the electrophysiological recordings and simulations from the model, a phenomenon recently observed also in other settings [29][30][31][32] .…”
Section: Results (mentioning; confidence: 99%)