2017
DOI: 10.1214/17-sts576rej
You Just Keep on Pushing My Love over the Borderline: A Rejoinder

Abstract: The entire reason that we wrote this paper was to provide a concrete object around which to focus a broader discussion about prior choice and we are extremely grateful to the editorial team at Statistical Science for this opportunity. David Dunson (DD), Jim Hodges (JH), Christian Robert, Judith Rousseau (RR) and James Scott (JS) have taken this discussion in diverse and challenging directions and over the next few pages, we will try to respond to the main points they have raised. "IF I COULD LOVE, I WOULD LOVE…

Cited by 2 publications (4 citation statements). References 11 publications (13 reference statements). Citing publications: 2021 and 2024.

Citation statements (ordered by relevance):
“…where we have used a Type-2 Gumbel distribution as an approximation of the penalised complexity prior for the shape parameter in the negative binomial distribution [15, 16], NegBin(x|µ, ) = …”
Section: Methods (mentioning)
confidence: 99%
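The truncated quote does not show the exact density the citing authors used, so the following is only a minimal sketch, assuming the Type-2 Gumbel form that appears in the penalised-complexity literature for precision-type parameters, π(φ) = (λ/2) φ^(−3/2) exp(−λ/√φ), with a user-chosen rate λ. The function name and the default rate are illustrative assumptions, not values from the citing paper.

```python
import numpy as np

def type2_gumbel_logpdf(phi, lam=1.0):
    """Log-density of the Type-2 Gumbel (shape 1/2) distribution,
    pi(phi) = (lam / 2) * phi**(-3/2) * exp(-lam / sqrt(phi)),  phi > 0.
    This is the form used in the PC-prior literature as an approximate
    prior for an overdispersion/shape parameter; larger `lam` shrinks
    more strongly towards the Poisson base model (phi -> infinity)."""
    phi = np.asarray(phi, dtype=float)
    return np.log(lam / 2.0) - 1.5 * np.log(phi) - lam / np.sqrt(phi)

# Example: evaluate the log-prior on a grid of shape values.
phi_grid = np.linspace(0.1, 50.0, 500)
log_prior = type2_gumbel_logpdf(phi_grid, lam=1.0)
```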
“…This gives, where p(R_{t≤0}) is the probability distribution for the effective reproduction number before our time window of interest. Finally, we assume that, and where we have used a Type-2 Gumbel distribution as an approximation of the penalised complexity prior for the shape parameter in the negative binomial distribution [15, 16] and α, β, and λ are known constants encoding our prior knowledge (for the choice of λ see Appendix C). By combining Equation 3, Equation 5 and Equation 7 into Equation 1, we arrive at the distribution, …”
Section: Methods (mentioning)
confidence: 99%
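The equations referred to in this excerpt (Equations 1, 3, 5 and 7 of the citing paper) are not reproduced in the quote, so the following is only a hedged sketch of the general pattern it describes: a negative binomial observation model with mean µ and shape φ, combined with a Type-2 Gumbel prior on φ into an unnormalised log-posterior. The parameterisation, the function names and the data are illustrative assumptions, not the citing paper's actual model.

```python
import numpy as np
from scipy.stats import nbinom

def neg_binom_loglik(y, mu, phi):
    """Negative binomial log-likelihood in the (mean, shape) parameterisation:
    E[y] = mu, Var[y] = mu + mu**2 / phi, so phi -> infinity recovers the
    Poisson. SciPy's nbinom uses (n, p) with n = phi and p = phi / (phi + mu)."""
    return nbinom.logpmf(y, phi, phi / (phi + mu)).sum()

def log_unnormalised_posterior(phi, y, mu, lam=1.0):
    """Negative binomial log-likelihood plus a Type-2 Gumbel prior on phi
    (the same approximate PC-prior form as in the sketch above)."""
    if phi <= 0:
        return -np.inf
    log_prior = np.log(lam / 2.0) - 1.5 * np.log(phi) - lam / np.sqrt(phi)
    return neg_binom_loglik(y, mu, phi) + log_prior

# Illustrative use with made-up counts and a fixed mean.
y = np.array([3, 0, 5, 2, 7, 1])
print(log_unnormalised_posterior(phi=4.0, y=y, mu=3.0))
```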
“…Instead of defining a prior on s directly we let θ = log s and give θ the prior π(θ) = 7θ⁻² |ψ′(θ⁻¹) − θ| / √(2 log(θ⁻¹) − 2ψ(θ⁻¹)) · exp(−7√(2 log(θ⁻¹) − 2ψ(θ⁻¹))), where ψ is the digamma function. This is a penalised complexity prior (Simpson et al., 2017a) described in (Simpson et al., 2017b), which essentially controls the model complexity compared to a Poisson distribution.…”
Section: Models and Methods (mentioning)
confidence: 99%
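A minimal sketch of how this prior could be evaluated numerically, assuming the standard PC-prior form π(θ) = λ exp(−λ d(θ)) |d′(θ)| with distance d(θ) = √(2 log(θ⁻¹) − 2ψ(θ⁻¹)) and the rate 7 shown in the quoted density; the function name and the SciPy-based implementation are assumptions for illustration, not code from the citing paper.

```python
import numpy as np
from scipy.special import digamma, polygamma

def pc_prior_logpdf_theta(theta, lam=7.0):
    """Log-density of the PC prior quoted above (valid for theta > 0),
    pi(theta) = lam * theta**-2 * |psi'(1/theta) - theta|
                / sqrt(2*log(1/theta) - 2*psi(1/theta))
                * exp(-lam * sqrt(2*log(1/theta) - 2*psi(1/theta))),
    i.e. the standard PC form lam * exp(-lam * d) * |d'| with
    d(theta) = sqrt(2*(log(1/theta) - psi(1/theta))); psi is the
    digamma function and psi' the trigamma function."""
    theta = np.asarray(theta, dtype=float)
    inv = 1.0 / theta
    d = np.sqrt(2.0 * (np.log(inv) - digamma(inv)))        # distance from the Poisson base
    abs_dprime = np.abs(polygamma(1, inv) - theta) / (theta**2 * d)
    return np.log(lam) - lam * d + np.log(abs_dprime)

# Example: evaluate the log-prior at a few values of theta = log(s).
print(pc_prior_logpdf_theta(np.array([0.5, 1.0, 2.0])))
```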
“…The final parameter that needs a prior is the overdispersion parameter s. Instead of defining a prior on s directly we let θ = log s and give θ the prior, where ψ is the digamma function. This is a penalised complexity prior (Simpson et al., 2017a) described in (Simpson et al., 2017b), which essentially controls the model complexity compared to a Poisson distribution.…”
Section: Hyperpriors and Estimation (mentioning)
confidence: 99%