2019
DOI: 10.3758/s13428-018-1153-1
Parallel probability density approximation

Abstract: Probability Density Approximation (PDA) is a non-parametric method of calculating probability densities. When integrated into Bayesian estimation, it allows researchers to fit psychological processes for which analytic probability functions are unavailable, significantly expanding the scope of theories that can be quantitatively tested. PDA is, however, computationally intensive, requiring large numbers of Monte Carlo simulations to attain good precision. We introduce Parallel PDA (pPDA), a highly efficient implementation…
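To make the method concrete, the core PDA recipe is: simulate from the process model, smooth the simulations with a kernel density estimate, and read the approximate likelihood off at the observed data. Below is a minimal sketch in R; the simulator rmodel is a hypothetical stand-in for whatever process model is being fitted, and the smoothing choices are illustrative, not those of pPDA itself.

    # Approximate log-likelihood of observed response times via PDA.
    # `rmodel(n, theta)` is a hypothetical simulator returning n synthetic RTs.
    pda_loglik <- function(theta, rt_obs, n_sim = 1e4) {
      rt_sim <- rmodel(n_sim, theta)                   # Monte Carlo simulations
      kde    <- density(rt_sim)                        # Gaussian kernel density estimate
      f      <- approx(kde$x, kde$y, xout = rt_obs)$y  # evaluate KDE at observed RTs
      f[is.na(f) | f <= 0] <- 1e-10                    # floor densities outside KDE support
      sum(log(f))
    }

Because the precision of the approximation grows with n_sim, this inner simulation loop dominates the cost of Bayesian estimation; that is the step pPDA accelerates.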

Cited by 15 publications (16 citation statements). References 46 publications (93 reference statements).
“…The ggdmc package is ready to assist applied researchers in exploiting the advantages of hierarchical evidence accumulation modeling. Our software is built upon an earlier suite of R functions, Dynamic Models of Choice (Heathcote et al., 2019). We have provided a convenient interface for incorporating experimental designs.…”
Section: Discussion
“…For example, we have recently fitted the piecewise LBA model (Holmes et al., 2016) with ggdmc to explore the method of using massively parallel computation in likelihood simulations. This example shows that ggdmc can accommodate cognitive models without analytic likelihood functions (Holmes, 2015; Lin, Heathcote, & Holmes, 2019; Turner & Sederberg, 2012). When this is combined with approximate Bayesian computation (Beaumont, Zhang, & Balding, 2002), it can empower researchers to estimate model parameters with only process descriptions in hand.…”
Section: Discussion
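For readers unfamiliar with the rejection flavour of approximate Bayesian computation cited above (Beaumont et al., 2002), a minimal sketch in R follows. The simulator rmodel, summary function summ, prior sampler rprior, and tolerance eps are hypothetical placeholders, not part of ggdmc's interface.

    # Rejection ABC: keep parameter draws whose simulated summary statistics
    # land within distance `eps` of the observed summaries.
    abc_reject <- function(n_draws, rt_obs, rprior, eps) {
      s_obs <- summ(rt_obs)
      kept  <- list()
      for (i in seq_len(n_draws)) {
        theta <- rprior()                             # candidate from the prior
        s_sim <- summ(rmodel(length(rt_obs), theta))
        if (sqrt(sum((s_sim - s_obs)^2)) < eps) {     # accept if close enough
          kept[[length(kept) + 1]] <- theta
        }
      }
      do.call(rbind, kept)                            # accepted draws approximate the posterior
    }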
“…Response time is then given by the distance to the threshold (θ) divided by the rate of accumulation toward the corresponding response option (δ_all · v_i), plus a fixed non-decision time (τ). The likelihood of the data for a particular trial was obtained by generating 1000 simulated trials for every real trial and calculating the predicted accuracy (% correct) and distribution of response times by passing a kernel density estimator over the RT data to perform probability density approximation (Turner & Sederberg, 2014; Holmes, 2015; Lin et al., 2019).…”
Section: Figure 13
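A trial of the race process quoted above can be simulated directly from that description. The sketch below is a hypothetical reconstruction: the excerpt does not say where trial-to-trial variability enters, so noisy accumulation rates are assumed here purely for illustration.

    # Simulate n trials of the race described above: accumulator i finishes at
    # theta / rate_i + tau, and the fastest accumulator gives the response.
    simulate_race <- function(n, v, theta, delta_all, tau, s = 0.1) {
      resp <- integer(n); rt <- numeric(n)
      for (j in seq_len(n)) {
        rates   <- pmax(rnorm(length(v), delta_all * v, s), 1e-6)  # noisy rates (assumption)
        finish  <- theta / rates + tau                             # finishing times
        resp[j] <- which.min(finish)                               # winning accumulator
        rt[j]   <- min(finish)                                     # its finishing time
      }
      data.frame(resp = resp, rt = rt)
    }

Passing 1000 such simulated trials per observed trial through a kernel density estimator, as the excerpt describes, yields the approximate likelihood.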
“…Instead, we use an approach for model fitting based on kernel density estimation to turn the simulated data into a truly continuous, two-dimensional distribution of responses and response times. This method has been effectively used to approximate the likelihoods of several types of simulation-based models (Palestro et al., 2018; Turner & Van Zandt, 2012; Turner & Sederberg, 2014), is reasonably efficient, especially with the addition of signal processing methods (Holmes, 2015; Lin et al., 2019), and can be easily adapted to a two-dimensional joint distribution like the one produced by the SCDM and GDM. For these models, we can simulate a large number of trials from the model, use the kernel density method to generate an approximate likelihood, and then impute the likelihood of each combination of response and response time in the observed data set.…”
Section: Model Likelihoods
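When both the response dimension and response time are continuous, as in the SCDM and GDM, the kernel density step generalizes to two dimensions. A minimal R sketch using MASS::kde2d is given below; it uses a nearest-grid-cell lookup rather than exact interpolation, and the grid size is an arbitrary choice, not taken from the cited papers.

    library(MASS)  # kde2d: bivariate Gaussian kernel density estimate

    # Approximate joint log-likelihood of (response, RT) pairs from simulations.
    joint_loglik <- function(resp_sim, rt_sim, resp_obs, rt_obs) {
      kde <- kde2d(resp_sim, rt_sim, n = 128)        # density on a 128 x 128 grid
      ix  <- pmin(pmax(findInterval(resp_obs, kde$x), 1), length(kde$x))
      iy  <- pmin(pmax(findInterval(rt_obs,   kde$y), 1), length(kde$y))
      f   <- kde$z[cbind(ix, iy)]                    # nearest-grid-cell density
      f[f <= 0] <- 1e-10                             # floor zero densities
      sum(log(f))
    }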