2004
DOI: 10.1214/105051604000000369

Optimal scaling of MaLa for nonlinear regression

Abstract: We address the problem of simulating efficiently from the posterior distribution over the parameters of a particular class of nonlinear regression models using a Langevin-Metropolis sampler. It is shown that as the number N of parameters increases, the proposal variance must scale as N^{-1/3} in order to converge to a diffusion.
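
The scaling claim above can be made concrete with a small sketch. The Python code below implements a generic MALA (Langevin-Metropolis) update in which the proposal variance is set to ell^2 * N^{-1/3}; this is not the paper's specific nonlinear-regression setup, and the tuning constant ell, the Gaussian example target and all function names are illustrative assumptions rather than anything taken from the paper.

```python
import numpy as np

def mala(log_pi, grad_log_pi, x0, n_iter, ell=1.0, rng=None):
    """Generic MALA sketch with proposal variance h = ell^2 * N**(-1/3).

    log_pi and grad_log_pi are the target log-density and its gradient;
    ell is a hypothetical tuning constant (an assumption, not from the paper).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    N = x.size
    h = ell**2 * N ** (-1.0 / 3.0)   # proposal variance shrinks like N^{-1/3}
    samples = np.empty((n_iter, N))
    accepted = 0
    for t in range(n_iter):
        # Langevin proposal: half-step drift plus Gaussian noise of variance h
        mean_fwd = x + 0.5 * h * grad_log_pi(x)
        prop = mean_fwd + np.sqrt(h) * rng.standard_normal(N)
        # Metropolis-Hastings correction for the asymmetric Gaussian proposal
        mean_bwd = prop + 0.5 * h * grad_log_pi(prop)
        log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (2.0 * h)
        log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (2.0 * h)
        log_alpha = log_pi(prop) + log_q_bwd - log_pi(x) - log_q_fwd
        if np.log(rng.uniform()) < log_alpha:
            x, accepted = prop, accepted + 1
        samples[t] = x
    return samples, accepted / n_iter

# Illustration only: standard Gaussian target in dimension N = 100
if __name__ == "__main__":
    draws, acc_rate = mala(lambda x: -0.5 * x @ x, lambda x: -x,
                           x0=np.zeros(100), n_iter=5000)
    print("acceptance rate:", acc_rate)
```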

Cited by 23 publications (28 citation statements); references 15 publications.
“…context involved looking at restricted classes of models, see e.g. [17,16,7]. The most recent contributions in this still-open research direction have looked at target distributions in high-dimensions defined as changes of measure from Gaussian laws ( [12,45,49]).…”
Section: Beyond IID Targets (mentioning)
confidence: 99%
“…To the best of our knowledge, the only paper to consider the optimal scaling for the MALA algorithm for nonproduct targets is [9], in the context of nonlinear regression. In [9] the target measure has a structure similar to that of the mean field models studied in statistical mechanics and hence behaves asymptotically like a product measure when the dimension goes to infinity. Thus the diffusion limit obtained in [9] is finite dimensional.…”
Section: Harvard University, Warwick University and Warwick University (mentioning)
confidence: 99%
“…The same method of proof has also been applied to derive optimal scaling results for other types of Markov chain Monte Carlo algorithms, such as the Metropolis-adjusted Langevin algorithm; see Roberts & Rosenthal (1998, 2001), Breyer, Piccioni & Scarlatti (2002), Christensen, Roberts & Rosenthal (2003), Neal & Roberts (2006). In this paper, we consider Metropolis algorithms only; we do not give an exhaustive account of the literature about the Metropolis-adjusted Langevin algorithm.…”
Section: History of Optimal Scaling (mentioning)
confidence: 99%