2010
DOI: 10.1198/jcgs.2010.10039

On the Utility of Graphics Cards to Perform Massively Parallel Simulation of Advanced Monte Carlo Methods

Abstract: We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the ad…
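The population-based algorithms the abstract refers to are data-parallel across particles: every particle is updated by the same operations on its own state. A minimal sketch of that pattern, using plain NumPy on the CPU as a stand-in for a GPU array library (the estimator and function name are illustrative, not from the paper):

```python
import numpy as np

def parallel_pi_estimate(n_particles=1_000_000, seed=0):
    """Monte Carlo estimate of pi. Every particle is an independent
    sample, so all of them are processed in one vectorized pass - the
    same one-thread-per-particle access pattern a GPU kernel exploits."""
    rng = np.random.default_rng(seed)
    x = rng.random(n_particles)
    y = rng.random(n_particles)
    inside = (x * x + y * y) < 1.0  # one boolean per particle
    return 4.0 * inside.mean()

print(parallel_pi_estimate())  # close to 3.1416
```

On a GPU the same computation would be expressed by replacing the NumPy arrays with device arrays; the algorithm itself is unchanged, which is why this class of method maps so directly onto graphics hardware.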

Cited by 256 publications (239 citation statements)
References 27 publications
“…More recent work combines the use of multiple chains with adaptive MCMC in an attempt to use these multiple sources of information to learn an appropriate proposal distribution (12,13). Sometimes, specific MCMC algorithms are directly amenable to parallelization, such as independent Metropolis-Hastings (14) or slice sampling (15), as indeed are some statistical models via careful reparameterization (16) or implementation on specialist hardware, such as graphics processing units (GPUs) (17,18); however, these approaches are often problem specific and not generally applicable. For problems involving large amounts of data, parallelization may in some cases also be possible by partitioning the data and analyzing each subset using standard MCMC methods simultaneously on multiple machines (19).…”
mentioning
confidence: 99%
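The simplest of the schemes mentioned above, running multiple chains at once, can be sketched by vectorizing a random-walk Metropolis sampler across chains. This is a toy illustration, not code from the paper: the standard-normal target and the step size are chosen only so the example is self-contained.

```python
import numpy as np

def metropolis_chains(n_chains=256, n_steps=5000, step=1.0, seed=1):
    """Advance n_chains random-walk Metropolis samplers in lockstep,
    all targeting a standard normal. Each array operation updates
    every chain at once, mirroring one-thread-per-chain execution."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_chains)             # current state of each chain
    log_p = -0.5 * x ** 2              # unnormalized log target
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(n_chains)
        log_p_prop = -0.5 * prop ** 2
        # vectorized accept/reject: one uniform draw per chain
        accept = np.log(rng.random(n_chains)) < log_p_prop - log_p
        x = np.where(accept, prop, x)
        log_p = np.where(accept, log_p_prop, log_p)
    return x

samples = metropolis_chains()
print(samples.mean(), samples.std())  # roughly 0 and 1
```

Because the chains never communicate, the speedup from this layout is essentially linear in the number of chains; the adaptive multiple-chain methods cited above add cross-chain communication on top of this pattern to learn the proposal.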
“…Some studies, such as Lee et al. (2010) and Aldrich et al. (2011), document massive speed gains, from 35 up to 500 times, of GPU code with respect to single-threaded CPU code. Considering these results, it can be concluded that our GPU speed performance could be increased substantially; this observation is right and wrong at the same time.…”
Section: Results
mentioning
confidence: 99%
“…The traditionally used MCMC algorithms are sequential and therefore not amenable to simple parallelization, except in a few special cases. In fMRI, this can be circumvented by running many serial MCMC algorithms in parallel (Lee, Yau, Giles, Doucet, & Holmes, 2010), e.g., one for each voxel time series.…”
Section: Bayesian Statistics
mentioning
confidence: 99%
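The per-voxel scheme described above is embarrassingly parallel because each voxel's posterior is independent of the others. A toy sketch under stated assumptions (unit Gaussian noise and a flat prior on each voxel's mean, so each chain's target is N(ybar, 1/n_t); the model and names are illustrative, not the cited fMRI setup):

```python
import numpy as np

def per_voxel_mh(data, n_steps=3000, step=0.5, seed=2):
    """One random-walk Metropolis chain per voxel, all advanced in
    lockstep. data has shape (n_voxels, n_timepoints); chain i targets
    the posterior of voxel i's mean under unit Gaussian noise and a
    flat prior. Returns the final state of every chain."""
    rng = np.random.default_rng(seed)
    n_vox, n_t = data.shape
    ybar = data.mean(axis=1)
    mu = np.zeros(n_vox)
    # -0.5 * sum_t (y_t - mu)^2 = -0.5 * n_t * (mu - ybar)^2 + const
    log_p = -0.5 * n_t * (mu - ybar) ** 2
    for _ in range(n_steps):
        prop = mu + step * rng.standard_normal(n_vox)
        log_p_prop = -0.5 * n_t * (prop - ybar) ** 2
        accept = np.log(rng.random(n_vox)) < log_p_prop - log_p
        mu = np.where(accept, prop, mu)
        log_p = np.where(accept, log_p_prop, log_p)
    return mu
```

A usage sketch: `per_voxel_mh(data)` with `data` of shape `(100, 50)` returns 100 posterior draws, one per voxel, each concentrated near that voxel's sample mean. Since the voxels never interact, the same vectorization (or a GPU grid with one thread per voxel) scales to whole-brain analyses.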