1996
DOI: 10.1214/aop/1039639365

Bounding $\bar{d}$-distance by informational divergence: a method to prove measure concentration

Cited by 251 publications (225 citation statements)
References 7 publications
“…Given a metric space (E, d) equipped with its Borel σ-field, and 1 ≤ p < +∞, the $L^p$ Wasserstein distance between two probability measures µ and ν on E is defined as
\[
W_p(\mu,\nu) = \Big( \inf_{\pi} \int_{E\times E} d(x,y)^p \, d\pi(x,y) \Big)^{1/p},
\]
where the infimum runs over all couplings π of (µ, ν); see Villani [31] for an extensive study of such quantities. A probability measure µ is then said to satisfy the transportation-entropy inequality $W_pH(C)$, where C > 0 is some constant, if for all probability measures ν,
\[
W_p(\mu,\nu) \le \sqrt{2C\, H(\nu \mid \mu)}.
\]
Marton [25] first showed how the $W_1H$ inequality implies Gaussian concentration of measure, and Talagrand, via a tensorization argument, established that the standard Gaussian measure, in any dimension, satisfies $W_2H(C)$ with the sharp constant C = 1. However, while $W_1H$ is completely characterized via a practical Gaussian integrability criterion (see [14,9]), $W_2H$ is much more difficult to describe.…”
Section: Introduction and Main Results (mentioning; confidence: 99%)
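The excerpt above recalls that Marton [25] first showed how a $W_1H(C)$ inequality implies Gaussian concentration. The following is a minimal sketch of that standard argument; the notation ($A_r$ for the $r$-enlargement of a set $A$, $\mu_A$ for the conditional measure) is introduced here for illustration and is not quoted from the cited paper.

Assume $W_1(\mu,\nu) \le \sqrt{2C\,H(\nu \mid \mu)}$ for all ν. Fix a Borel set $A$ with $\mu(A) > 0$, let $A_r = \{x : d(x,A) < r\}$, and suppose $B = E \setminus A_r$ has $\mu(B) > 0$. Writing $\mu_A = \mu(\cdot \mid A)$ and $\mu_B = \mu(\cdot \mid B)$, every coupling of $(\mu_A,\mu_B)$ moves mass across distance at least $r$, so
\[
r \le W_1(\mu_A,\mu_B) \le W_1(\mu_A,\mu) + W_1(\mu,\mu_B)
  \le \sqrt{2C\log\tfrac{1}{\mu(A)}} + \sqrt{2C\log\tfrac{1}{\mu(B)}},
\]
using $H(\mu_A \mid \mu) = \log\frac{1}{\mu(A)}$. Rearranging, for $r \ge \sqrt{2C\log\frac{1}{\mu(A)}}$,
\[
\mu(E \setminus A_r) \le \exp\!\Big(-\tfrac{1}{2C}\Big(r - \sqrt{2C\log\tfrac{1}{\mu(A)}}\Big)^{2}\Big),
\]
which is the Gaussian concentration of measure asserted in the excerpt.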
“…While it is uncertain whether our approach could recover these abstract principles, the deviation inequalities themselves follow rather easily from it. On the abstract inequalities themselves, let us mention here the recent alternate approach by K. Marton (1995a), (1995b) and Dembo (1995) (see also Dembo and Zeitouni (1995)), based on information inequalities and coupling, in which the concept of entropy also plays a crucial role. Let us also observe that hypercontraction methods were used in Kwapień and Szulga (1991) to study integrability of norms of sums of independent vector-valued random variables.…”
Section: Introduction Deviation Inequalities For Convex Functions (mentioning; confidence: 99%)
“…There is a large body of literature on other sources of concentration-of-measure inequalities: these include logarithmic Sobolev inequalities and the Herbst argument [3,13,15], the entropy method [5,6,18], and information-theoretic methods [9,24]. Of particular interest are those concentration results that apply to infinite-dimensional settings [20].…”
Section: Other Concentration Inequalities (mentioning; confidence: 99%)
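The "Herbst argument" named in this excerpt can be summarized in a few lines. The following is a minimal sketch under the standard normalization of the logarithmic Sobolev inequality, $\mathrm{Ent}_\mu(g^2) \le 2C \int |\nabla g|^2 \, d\mu$; it is a generic illustration, not taken from the cited references.

For a 1-Lipschitz function $f$ (so $|\nabla f| \le 1$), apply the inequality to $g = e^{\lambda f/2}$:
\[
\mathrm{Ent}_\mu\big(e^{\lambda f}\big) \le 2C \int \tfrac{\lambda^2}{4}\,|\nabla f|^2 e^{\lambda f}\,d\mu
  \le \tfrac{C\lambda^2}{2}\, H(\lambda), \qquad H(\lambda) := \int e^{\lambda f}\,d\mu .
\]
Since $\mathrm{Ent}_\mu(e^{\lambda f}) = \lambda H'(\lambda) - H(\lambda)\log H(\lambda)$, this says $\frac{d}{d\lambda}\big(\tfrac{1}{\lambda}\log H(\lambda)\big) \le \tfrac{C}{2}$, and integrating from $0$ (where $\tfrac{1}{\lambda}\log H(\lambda) \to \int f\,d\mu$) gives
\[
\int e^{\lambda f}\,d\mu \le \exp\!\Big(\lambda \int f\,d\mu + \tfrac{C\lambda^2}{2}\Big),
\]
so by Chebyshev's inequality, taking the optimal $\lambda = r/C$,
\[
\mu\Big(f \ge \int f\,d\mu + r\Big) \le e^{-r^2/(2C)} .
\]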