2019
DOI: 10.48550/arxiv.1910.00551
Preprint

An Efficient Sampling Algorithm for Non-smooth Composite Potentials

Abstract: We consider the problem of sampling from a density of the form p(x) ∝ exp(−f(x) − g(x)), where f : ℝ^d → ℝ is a smooth and strongly convex function and g : ℝ^d → ℝ is a convex and Lipschitz function. We propose a new algorithm based on the Metropolis-Hastings framework, and prove that it mixes to within TV distance ε of the target density in at most O(d log(d/ε)) iterations. This guarantee extends previous results on sampling from distributions with smooth log densities (g = 0) to the more general composite n…
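To make the setting concrete, here is a minimal random-walk Metropolis sketch for a density of this composite form. This is a generic baseline, not the algorithm proposed in the paper: the quadratic f, the ℓ1-type g, and the step size and iteration count are all illustrative choices.

```python
import numpy as np

def metropolis_composite(f, g, x0, step, n_iters, rng):
    """Random-walk Metropolis targeting p(x) ∝ exp(-f(x) - g(x)).

    A generic baseline, NOT the paper's algorithm: the proposal is a
    symmetric Gaussian, so the acceptance ratio only needs the
    unnormalized log-density -f(x) - g(x).
    """
    x = np.asarray(x0, dtype=float)
    log_p = -f(x) - g(x)
    samples = []
    for _ in range(n_iters):
        y = x + step * rng.standard_normal(x.shape)   # symmetric proposal
        log_p_y = -f(y) - g(y)
        if np.log(rng.uniform()) < log_p_y - log_p:   # Metropolis accept step
            x, log_p = y, log_p_y
        samples.append(x.copy())
    return np.array(samples)

# Illustrative composite potential: f smooth and strongly convex,
# g convex and Lipschitz (the value lam = 0.5 is arbitrary).
d = 10
f = lambda x: 0.5 * np.dot(x, x)        # strongly convex quadratic
g = lambda x: 0.5 * np.sum(np.abs(x))   # lam * ||x||_1
rng = np.random.default_rng(0)
draws = metropolis_composite(f, g, np.zeros(d), step=0.3, n_iters=5000, rng=rng)
```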

Cited by 7 publications (12 citation statements). References 59 publications.
“…(in Theorem 3.4), which improves the dependence on accuracy ε. In Mou et al (2019), the Metropolis-adjusted Langevin algorithm is leveraged with a proximal sampling oracle to remove the polynomial dependence on the accuracy ε (in total variation distance) and achieve a Ω(d log(1/ε)) convergence time for a related composite posterior distribution. Unfortunately, an additional dimension-dependent factor is always introduced into the overall convergence rate.…”
Section: Related Work
confidence: 99%
“…RGO is a key algorithmic ingredient used in [17] together with the alternating sampling framework to improve the iteration-complexity bounds for various sampling algorithms. Examples of a convex function g that admits a computationally efficient RGO have been presented in [26,32], including coordinate-separable regularizers, the ℓ1-norm, and group Lasso.…”
Section: Problem Formulation and Alternating Sampling Framework
confidence: 99%
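As a concrete illustration of the coordinate-separable case mentioned in this statement, the following sketch is an exact restricted Gaussian oracle (RGO) for g(x) = λ‖x‖₁: the regularized density ∝ exp(−λ‖x‖₁ − ‖x − y‖²/(2η)) factorizes over coordinates, and on each side of zero each factor completes the square into a truncated Gaussian. The function name and the parameter η are our illustrative choices, not notation from the cited works.

```python
import numpy as np
from scipy.special import expit
from scipy.stats import norm, truncnorm

def rgo_l1(y, lam, eta, rng):
    """Exact draw from pi(x) ∝ exp(-lam*||x||_1 - ||x - y||^2 / (2*eta)).

    The density factorizes over coordinates. Each 1-d factor splits at 0
    into two truncated Gaussians (completing the square on each side), so
    we pick a side by its log-mass and then draw a truncated normal.
    """
    y = np.asarray(y, dtype=float)
    s = np.sqrt(eta)
    mu_pos = y - lam * eta   # Gaussian mean on the branch x_i >= 0
    mu_neg = y + lam * eta   # Gaussian mean on the branch x_i < 0
    # Branch log-masses; common factors exp(lam^2*eta/2)*sqrt(2*pi*eta) cancel.
    log_w_pos = -lam * y + norm.logcdf(mu_pos / s)
    log_w_neg = lam * y + norm.logcdf(-mu_neg / s)
    take_pos = rng.uniform(size=y.shape) < expit(log_w_pos - log_w_neg)
    x = np.empty_like(y)
    # truncnorm takes its truncation bounds in standard-normal units.
    x[take_pos] = truncnorm.rvs(
        (0.0 - mu_pos[take_pos]) / s, np.inf,
        loc=mu_pos[take_pos], scale=s, random_state=rng)
    x[~take_pos] = truncnorm.rvs(
        -np.inf, (0.0 - mu_neg[~take_pos]) / s,
        loc=mu_neg[~take_pos], scale=s, random_state=rng)
    return x
```

Each call returns one exact draw; inside an alternating sampling scheme of the kind referenced above, such an oracle would presumably be invoked once per iteration with y supplied by the smooth step.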
“…Over the last few years, several new algorithms and theoretical results in sampling with non-smooth potentials have been established. In [26], sampling for non-smooth composite potentials is considered. The algorithm needs a proximal sampling oracle that samples from the target potential regularized by a large isotropic quadratic term, as well as computes the corresponding partition function.…”
Section: Introduction
confidence: 99%
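Written out in the notation of the abstract above (with η > 0 an assumed regularization parameter; this is our reading of the quoted description, not notation taken from [26]), such an oracle must draw from the regularized density and evaluate its normalizing constant:

π_y(x) ∝ exp(−f(x) − g(x) − ‖x − y‖² / (2η)),   Z(y) = ∫ exp(−f(x) − g(x) − ‖x − y‖² / (2η)) dx.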
“…It is possible to remove the bias by applying the Metropolis filter (accept-reject step) in each iteration; this has a geometric interpretation as a projection in total variation (TV) distance [5]. With the Metropolis filter, it is possible to prove that the algorithm still converges exponentially fast in discrete time, and to obtain an iteration complexity of O(log(1/δ)) to reach error δ in TV distance with a warm start and under various conditions such as strong log-concavity, isoperimetry, or distant dissipativity [7,21,37,41]. However, if we want convergence in KL divergence, which is stronger, then the Metropolis filter does not work because it makes the distributions singular (have point masses).…”
Section: Introduction
confidence: 99%
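For reference, this is a minimal sketch of the Metropolis filter the passage refers to, applied to a Langevin proposal (i.e., generic MALA, not any specific algorithm from the cited works); the potential U, stepsize h, and iteration count are illustrative. The accept-reject step is what removes the discretization bias, at the price described above for KL-divergence guarantees.

```python
import numpy as np

def mala(U, grad_U, x0, h, n_iters, rng):
    """Metropolis-adjusted Langevin: Langevin proposal + Metropolis filter.

    The filter accepts or rejects each proposal so that the chain is
    exactly invariant for p(x) ∝ exp(-U(x)), removing the bias of the
    unadjusted Langevin discretization.
    """
    def log_q(x_to, x_from):
        # Log-density (up to a constant that cancels in the ratio) of the
        # Langevin proposal N(x_from - h * grad_U(x_from), 2h * I).
        mean = x_from - h * grad_U(x_from)
        return -np.sum((x_to - mean) ** 2) / (4.0 * h)

    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        prop = x - h * grad_U(x) + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)
        # MH log acceptance ratio; the proposal is asymmetric, so the
        # forward/backward proposal densities do not cancel.
        log_alpha = (U(x) - U(prop)) + (log_q(x, prop) - log_q(prop, x))
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        samples.append(x.copy())
    return np.array(samples)
```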