2019
DOI: 10.1214/19-ejs1556

A preferential attachment model for the stellar initial mass function

Abstract: Accurate specification of a likelihood function is becoming increasingly difficult in many inference problems in astronomy. As sample sizes resulting from astronomical surveys continue to grow, deficiencies in the likelihood function lead to larger biases in key parameter estimates. These deficiencies result from the oversimplification of the physical processes that generated the data, and from the failure to account for observational limitations. Unfortunately, realistic models often do not yield an analytica…


Cited by 13 publications (16 citation statements) · References: 78 publications
“…First, to select the sequence ε_{1:T} we implement an adaptive approach based on the accepted distances from the previous time step. That is, ε_t is set to a particular quantile of the accepted distances from time step t−1 [32, 44–48]. Selecting too high of a quantile at which to shrink the tolerance could result in a decrease in computational efficiency because more iterations would be needed to shrink the tolerance to a small enough value.…”
Section: Methods
Classification: mentioning (confidence: 99%)
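The quantile-based tolerance update quoted above can be illustrated with a short sketch. This is a minimal illustration under assumed inputs, not the authors' implementation; the quantile level q and the example distance array are placeholders.

```python
import numpy as np

def next_tolerance(accepted_distances, q=0.5):
    """Set the next ABC-PMC tolerance eps_t to the q-th quantile of the
    distances accepted at time step t-1."""
    return np.quantile(accepted_distances, q)

# Example: distances accepted at step t-1; the median gives the new, smaller tolerance.
prev_distances = np.array([0.8, 1.2, 0.5, 0.9, 1.1])
eps_t = next_tolerance(prev_distances, q=0.5)
```

A lower quantile shrinks the tolerance faster but accepts fewer proposals per iteration; a higher quantile requires more iterations to reach a small tolerance, which is the trade-off the quoted passage describes.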
“…The decreasing tolerance sequence's impact on the achievement of the global maximum is addressed in two ways in the proposed algorithm. First, we initialize the tolerance sequence by oversampling in the first iteration of the ABC-PMC algorithm [47]. Let N be the desired number of particles used to approximate the posterior distribution; then the initial tolerance, ε_1, can be adaptively selected by sampling N_init = lN draws from the prior, for some l ∈ ℤ⁺.…”
Section: Methods
Classification: mentioning (confidence: 99%)
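The oversampling initialization of ε_1 described in this passage can be sketched as follows, assuming hypothetical prior_sampler, simulator, and distance callables and an observed data set; the choice l = 5 is only an example, not a value taken from the paper.

```python
import numpy as np

def initialize_tolerance(prior_sampler, simulator, distance, observed, N, l=5):
    """Draw N_init = l * N parameters from the prior and set eps_1 so that
    exactly the N closest simulated data sets would have been accepted."""
    n_init = l * N                                     # N_init = l * N, l a positive integer
    thetas = [prior_sampler() for _ in range(n_init)]
    dists = np.array([distance(simulator(th), observed) for th in thetas])
    return np.sort(dists)[N - 1]                       # N-th smallest distance becomes eps_1
```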
“…the distances of the accepted particles from iteration t − 1 (Cisewski-Kehe et al., 2019; Ishida et al., 2015; Lenormand et al., 2013; Simola et al., 2019; Weyant et al., 2013), or (iii) adaptively selecting ε_t based on some quantile of the effective sample size (ESS) values (Del Moral et al., 2012; Numminen et al., 2013). These approaches can lead to inefficient sampling, as discussed below and demonstrated in the simulation study in Section 3.…”
Section: Selecting the Tolerance Sequence and Stopping Rules
Classification: mentioning (confidence: 99%)
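For option (iii), the effective sample size (ESS) of a set of importance weights is the standard quantity sketched below; this is a generic formula, not the specific tolerance or stopping rule used in the cited papers.

```python
import numpy as np

def effective_sample_size(weights):
    """ESS = (sum w)^2 / sum(w^2) for (possibly unnormalized) importance weights."""
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / np.sum(w ** 2)

# Equal weights give an ESS equal to the number of particles.
print(effective_sample_size([0.25, 0.25, 0.25, 0.25]))  # 4.0
```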
“…Approximate Bayesian Computation (ABC) provides a framework for inference in situations where the relationship between the data and the parameters does not lead to a tractable likelihood function, but where forward simulation of the data-generating process is possible. ABC has been used in many areas of science, such as biology (Thornton and Andolfatto, 2006), epidemiology (McKinley et al., 2009; Numminen et al., 2013), ecology (Beaumont, 2010), population modeling (Toni et al., 2009), modeling the population effects of a vaccine (Corander et al., 2017), dark matter direct detection (Simola et al., 2019), and astronomy (Cameron and Pettitt, 2012; Cisewski-Kehe et al., 2019; Ishida et al., 2015; Schafer and Freeman, 2012; Weyant et al., 2013). The basic ABC algorithm (Pritchard et al., 1999; Rubin, 1984; Tavaré et al., 1997) can be explained in four steps.…”
Section: Introduction
Classification: mentioning (confidence: 99%)
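The four steps of the basic (rejection) ABC algorithm referenced here are commonly summarized as: (1) draw a parameter from the prior, (2) simulate data from the model at that parameter, (3) compare the simulated and observed data through a distance, often on summary statistics, and (4) keep the parameter if the distance is within a tolerance ε. The sketch below follows that outline with hypothetical prior_sampler, simulator, and distance callables; it is an illustration, not the implementation used in any of the cited papers.

```python
import numpy as np

def rejection_abc(prior_sampler, simulator, distance, observed, eps, n_samples):
    """Basic rejection ABC: retain prior draws whose simulated data
    fall within tolerance eps of the observed data."""
    accepted = []
    while len(accepted) < n_samples:
        theta = prior_sampler()               # (1) draw from the prior
        sim = simulator(theta)                # (2) simulate data at theta
        if distance(sim, observed) <= eps:    # (3) compare via a distance
            accepted.append(theta)            # (4) accept if within tolerance
    return np.array(accepted)
```

The accepted draws approximate the posterior distribution when eps is sufficiently small.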