2019
DOI: 10.3150/18-bej1073

High-dimensional Bayesian inference via the unadjusted Langevin algorithm

Abstract: We consider in this paper the problem of sampling a high-dimensional probability distribution π having a density w.r.t. the Lebesgue measure on ℝ^d, known up to a normalization constant: x ↦ π(x) = e^{−U(x)} / ∫_{ℝ^d} e^{−U(y)} dy. Such a problem naturally occurs, for example, in Bayesian inference and machine learning. Under the assumption that U is continuously differentiable, ∇U is globally Lipschitz, and U is strongly convex, we obtain non-asymptotic bounds for the convergence to stationarity in Wasserstein distance of…
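The abstract's assumptions can be made concrete with a minimal sketch: for a quadratic potential U(x) = ½ xᵀAx with A symmetric positive definite, ∇U(x) = Ax is globally Lipschitz with constant λ_max(A) and U is strongly convex with constant λ_min(A). The matrix A below is an illustrative choice of ours, not taken from the paper.

```python
import numpy as np

# Illustrative quadratic potential U(x) = 0.5 * x^T A x with A symmetric
# positive definite. Then grad U(x) = A x is globally Lipschitz with
# constant L = lambda_max(A), and U is strongly convex with m = lambda_min(A).
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])

def U(x):
    return 0.5 * x @ A @ x

def grad_U(x):
    return A @ x

eigs = np.linalg.eigvalsh(A)
m, L = eigs[0], eigs[-1]   # strong-convexity and Lipschitz constants
print(m, L)                # both finite and positive

# Spot-check the Lipschitz bound ||grad U(x) - grad U(y)|| <= L ||x - y||
rng = np.random.default_rng(1)
for _ in range(100):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    assert np.linalg.norm(grad_U(x) - grad_U(y)) <= L * np.linalg.norm(x - y) + 1e-9
```

Here the target π ∝ e^{−U} is simply a Gaussian; the point of the sketch is only that both assumptions of the abstract can be verified from the spectrum of A.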

Cited by 171 publications (239 citation statements). References 36 publications (85 reference statements).
“…To the best of the authors' knowledge, these are the first such results which provide a higher rate of convergence in Wasserstein distance compared to the existing literature. As for the total variation distance, [11] proves that the rate of convergence is 1 for the case of a strongly convex U , whereas our result yields the same convergence rate without assuming convexity.…”
Section: Introduction (supporting; confidence: 57%)
“…The corresponding numerical scheme of the Langevin SDE obtained by using the Euler-Maruyama (Milstein) method yields the unadjusted Langevin algorithm (ULA), known also as the Langevin Monte Carlo (LMC), which has been well studied in the literature. For a globally Lipschitz ∇U , the non-asymptotic bounds in total variation and Wasserstein distance between the n-th iteration of the ULA algorithm and π have been provided in [11], [12] and [10]. As for the case of superlinear ∇U , the difficulty arises from the fact that ULA is unstable (see [23]), and its Metropolis adjusted version, MALA, loses some of its appealing properties as discussed in [7] and demonstrated numerically in [2].…”
Section: Introduction (mentioning; confidence: 99%)
“…Under mild technical conditions, the Langevin diffusion admits π as its unique invariant distribution. We consider the sampling method based on the Euler-Maruyama discretization of (13). This scheme referred to as unadjusted Langevin algorithm (ULA), defines the discrete-time Markov chain (X k ) k≥0 given by…”
Section: Application to Markov Chain Monte Carlo (mentioning; confidence: 99%)
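The Euler–Maruyama discretization described in the quoted passage, X_{k+1} = X_k − γ∇U(X_k) + √(2γ) Z_{k+1} with i.i.d. standard Gaussian Z_{k+1}, can be sketched as follows. This is a minimal illustration under our own assumptions: the Gaussian target N(mu, Sigma), step size gamma, and iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

# Unadjusted Langevin algorithm (ULA) for a strongly convex potential
# U(x) = 0.5 * (x - mu)^T Sigma_inv (x - mu), so that pi = N(mu, Sigma).
# mu, Sigma, gamma, and n_iter are illustrative choices.
rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])
Sigma_inv = np.linalg.inv(Sigma)

def grad_U(x):
    """Gradient of the potential; globally Lipschitz since U is quadratic."""
    return Sigma_inv @ (x - mu)

gamma = 0.05            # step size of the Euler-Maruyama discretization
n_iter = 20000
x = np.zeros(2)         # arbitrary starting point
samples = np.empty((n_iter, 2))
for k in range(n_iter):
    # ULA update: X_{k+1} = X_k - gamma * grad U(X_k) + sqrt(2*gamma) * Z_{k+1}
    x = x - gamma * grad_U(x) + np.sqrt(2.0 * gamma) * rng.standard_normal(2)
    samples[k] = x

burn = 2000             # discard a transient before averaging
est_mean = samples[burn:].mean(axis=0)
print(est_mean)         # close to mu, up to the O(gamma) discretization bias
```

Because no Metropolis correction is applied, the chain's invariant distribution is a biased approximation of π whose error shrinks with γ; the paper's non-asymptotic bounds quantify exactly this trade-off between step size and accuracy.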