2014
DOI: 10.1214/14-ejs921
On the stick-breaking representation of $\sigma$-stable Poisson-Kingman models

Cited by 6 publications (5 citation statements) · References 28 publications
“…It is well-known that the Dirichlet process with concentration parameter M has a stick-breaking representation of the form (32) with V_i i.i.d. Beta(1, M) [188]. However, such representations for the general classes of random measure considered here are more recent developments, and the densities of the V_i's are quite complicated [52,50].…”
Section: Marginal Sampling Methods
confidence: 97%
“…An algorithm for slice sampling the sequence W was provided therein. Favaro et al. [11] showed that, under certain assumptions on the parameter α, these sticks can be directly constructed with beta and gamma random variables.…”
Section: Stick-breaking Representations
confidence: 99%
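The constructions in Favaro et al. [11] rely on relationships between beta and gamma random variables. The snippet below is not the paper's specific construction; it only illustrates the standard beta-gamma identity that such constructions build on, namely that a ratio of independent gammas is beta-distributed:

```python
import numpy as np

def beta_from_gammas(a, b, size, seed=None):
    """Standard identity: if Ga ~ Gamma(a, 1) and Gb ~ Gamma(b, 1)
    are independent, then Ga / (Ga + Gb) ~ Beta(a, b)."""
    rng = np.random.default_rng(seed)
    ga = rng.gamma(a, size=size)
    gb = rng.gamma(b, size=size)
    return ga / (ga + gb)

samples = beta_from_gammas(0.5, 1.5, size=100_000, seed=0)
# mean of Beta(a, b) is a / (a + b) = 0.25
assert abs(samples.mean() - 0.25) < 0.01
```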
“…Fortunately, we can get an i.i.d. draw from the above due to an identity in distribution given by Favaro et al. [8] for the usual stick-breaking weights for any prior in this class such that σ = u/v, where u < v are coprime integers. Then we just reparameterize it back to obtain the new size-biased weight; see Algorithm 3 in the supplementary material for details.…”
Section: Example Of Classes Of Poisson-Kingman Priors
confidence: 99%
“…The reason for this is that in the σ = 0.5 case there are readily available random number generators which do not increase the computational cost. In contrast, in the σ = 0.3 case, a rejection sampling method is needed every time a new size-biased weight is sampled, which increases the computational cost; see Favaro et al. [8] for details. Even so, in most cases we outperform both marginal and conditional MCMC schemes in terms of running times, and in all cases in terms of ESS.…”
Section: Performance Assessment
confidence: 99%