2022
DOI: 10.48550/arxiv.2210.08448
Preprint

Resolving the Mixing Time of the Langevin Algorithm to its Stationary Distribution for Log-Concave Sampling

Abstract: Sampling from a high-dimensional distribution is a fundamental task in statistics, engineering, and the sciences. A canonical approach is the Langevin Algorithm, i.e., the Markov chain for the discretized Langevin Diffusion. This is the sampling analog of Gradient Descent. Despite being studied for several decades in multiple communities, tight mixing bounds for this algorithm remain unresolved even in the seemingly simple setting of log-concave distributions over a bounded domain. This paper completely charac…
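The update the abstract alludes to — a Gradient Descent step plus injected Gaussian noise — can be sketched as below. This is a generic illustration of the (projected) Langevin Algorithm, not code from the paper; the function names, the optional projection for the bounded-domain setting, and the Gaussian target in the example are assumptions for illustration.

```python
import numpy as np

def langevin_algorithm(grad_f, x0, step_size, n_steps, proj=None, rng=None):
    """Discretized Langevin Diffusion targeting pi(x) proportional to exp(-f(x)).

    Each iteration is a gradient-descent move plus Gaussian noise:
        x_{k+1} = x_k - step_size * grad_f(x_k) + sqrt(2 * step_size) * xi_k
    where xi_k is standard Gaussian. An optional projection keeps iterates
    inside a bounded domain (the setting the abstract highlights).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - step_size * grad_f(x) + np.sqrt(2.0 * step_size) * noise
        if proj is not None:
            x = proj(x)  # e.g. Euclidean projection onto a convex body
    return x

# Illustrative run: f(x) = ||x||^2 / 2, so grad_f(x) = x and pi is standard Gaussian.
sample = langevin_algorithm(lambda x: x, x0=np.zeros(2),
                            step_size=0.01, n_steps=1000)
```

The step size trades off discretization bias against mixing speed, which is exactly the tension the paper's mixing-time analysis addresses.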

Cited by 1 publication (2 citation statements)
References 42 publications (70 reference statements)
“…In the case where p_f is (strongly) log-concave, that is, if f is (strongly) concave, convergence rates of Markov chain Monte Carlo (MCMC) sampling algorithms have been studied extensively. For example, good convergence rates in terms of the dimension d have been established for versions of the Langevin algorithm (Chewi et al., 2021; Altschuler and Talwar, 2022) and Hamiltonian Monte Carlo (Mangoubi and Vishnoi, 2018). Chewi et al. (2022b) establish an algorithm with an optimal convergence rate for the case d = 1, while not much is known about algorithm-independent lower bounds in other cases.…”
Section: Related Work
confidence: 99%
“…To study this, we need to make some assumptions on f. While efficient sampling algorithms for suitable classes of concave f are known, at least with access to gradients of f (Dwivedi et al., 2018; Mangoubi and Vishnoi, 2018; Chewi et al., 2021; Altschuler and Talwar, 2022), we are interested in larger classes of non-concave functions, which are defined in the following:…”
Section: Introduction
confidence: 99%