2014 IEEE International Symposium on Information Theory
DOI: 10.1109/isit.2014.6874815

Exact common information

Abstract: This paper introduces the notion of exact common information, which is the minimum description length of the common randomness needed for the exact distributed generation of two correlated random variables (X, Y). We introduce the quantity G(X; Y) = min_{X→W→Y} H(W) as a natural bound on the exact common information and study its properties and computation. We then introduce the exact common information rate, which is the minimum description rate of the common randomness for the exact generation of a 2-DMS (X…
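
To make the defining optimization concrete, here is a small numerical sketch (illustrative only, with assumed numbers; it is not an algorithm from the paper). It builds the joint distribution p(x, y) induced by one candidate W satisfying the Markov chain X → W → Y and reports H(W), which upper-bounds G(X; Y) for that induced distribution.

```python
# Illustrative sketch (assumed numbers, not an algorithm from the paper):
# G(X; Y) minimizes H(W) over all W with X -> W -> Y whose mixture reproduces p(x, y).
# Here we evaluate a single candidate decomposition and report its H(W).
import numpy as np

def entropy_bits(p):
    """Shannon entropy in bits of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def joint_from_decomposition(p_w, p_x_given_w, p_y_given_w):
    """p(x, y) = sum_w p(w) p(x|w) p(y|w), i.e. the joint induced by X -> W -> Y."""
    return np.einsum('w,wx,wy->xy', p_w, p_x_given_w, p_y_given_w)

# Candidate: W = X (so p(x|w) is the identity) and Y is X passed through a
# binary symmetric channel with crossover probability 0.1 (assumed numbers).
p_w = np.array([0.5, 0.5])
p_x_given_w = np.eye(2)
p_y_given_w = np.array([[0.9, 0.1],
                        [0.1, 0.9]])

p_xy = joint_from_decomposition(p_w, p_x_given_w, p_y_given_w)
print("induced p(x, y):\n", p_xy)
print("H(W) =", entropy_bits(p_w), "bits  (an upper bound on G(X; Y) for this p(x, y))")
```

Any such candidate yields only an upper bound; G(X; Y) itself is the minimum of H(W) over all decompositions that reproduce the target p(x, y).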

Cited by 62 publications (86 citation statements), published 2015–2024.
References 8 publications (13 reference statements).
“…, X_n) follows a prescribed distribution exactly. The distributed randomness simulation problem is to find the common randomness W* with the minimum average description length R*, referred to in [1] as the exact common information between X_1, …, X_n, and the scheme that achieves this exact common information. Since W can be represented by an optimal prefix-free code, e.g., a Huffman code or the code in [2] if the alphabet is infinite, the average description length R* can be upper bounded as H(W) ≤ R* < H(W) + 1, where W minimizes H(W).…”
Section: Introduction
mentioning
confidence: 99%
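
The prefix-free-code bound quoted above, H(W) ≤ R* < H(W) + 1, is easy to check numerically. The following sketch builds a standard heapq-based Huffman code for an assumed pmf of W (it is not code from [1] or [2]) and compares the expected codeword length with H(W).

```python
# Minimal Huffman construction illustrating H(W) <= E[length] < H(W) + 1
# for an optimal prefix-free code (assumed example pmf, not from the paper).
import heapq
import math

def huffman_lengths(pmf):
    """Return the codeword lengths of a binary Huffman code for the given probabilities."""
    # Each heap entry: (subtree probability, tie-breaking counter, symbol indices in the subtree)
    heap = [(p, i, [i]) for i, p in enumerate(pmf)]
    heapq.heapify(heap)
    lengths = [0] * len(pmf)
    counter = len(pmf)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:          # every merge adds one bit to all symbols below it
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, counter, s1 + s2))
        counter += 1
    return lengths

pmf = [0.5, 0.25, 0.125, 0.125]             # assumed distribution of W
lengths = huffman_lengths(pmf)
avg_len = sum(p * l for p, l in zip(pmf, lengths))
H = -sum(p * math.log2(p) for p in pmf)
print("H(W) =", H, "  E[length] =", avg_len)
assert H <= avg_len < H + 1                  # the bound quoted in the statement above
```

For the dyadic pmf chosen here the Huffman code meets the entropy lower bound with equality; for non-dyadic pmfs the average length falls strictly inside the interval [H(W), H(W) + 1).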
“…Hence in this paper we will focus on investigating W that minimizes H(W) instead of R*. The above setting was introduced in [1] for two discrete random variables and the minimum entropy of W, referred to as the common entropy, is given by G(X_1; X_2) = min…”
Section: Introduction
mentioning
confidence: 99%
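
As a rough numerical companion to the quoted statement, the sketch below searches for a low-entropy W for a small assumed p(x_1, x_2) by parametrizing q(w | x_1, x_2) and penalizing the conditional mutual information I(X_1; X_2 | W), which vanishes exactly when X_1 → W → X_2 holds. This is a generic penalty heuristic, not the computation procedure of [1]; the Markov constraint is only enforced approximately, so the printed value is an estimate of the common entropy rather than a certified bound.

```python
# Heuristic penalty search for a low-entropy W with X1 -> W -> X2 (not the method of [1]).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

p_xy = np.array([[0.45, 0.05],   # assumed joint pmf of (X1, X2)
                 [0.05, 0.45]])
K = 3                            # assumed cardinality bound on W

def h(p):
    """Shannon entropy in bits, ignoring (near-)zero atoms."""
    p = p[p > 1e-12]
    return -np.sum(p * np.log2(p))

def objective(logits, lam=50.0):
    # q(w | x1, x2) via a softmax over w (shifted for numerical stability)
    z = logits.reshape(2, 2, K)
    z = z - z.max(axis=2, keepdims=True)
    q = np.exp(z)
    q /= q.sum(axis=2, keepdims=True)
    p_xyw = p_xy[:, :, None] * q
    p_w = p_xyw.sum(axis=(0, 1))
    p_xw = p_xyw.sum(axis=1)     # marginal of (X1, W)
    p_yw = p_xyw.sum(axis=0)     # marginal of (X2, W)
    # I(X1; X2 | W) = sum p(x1,x2,w) log2[ p(x1,x2,w) p(w) / (p(x1,w) p(x2,w)) ]
    mask = p_xyw > 1e-12
    num = p_xyw * p_w[None, None, :]
    den = p_xw[:, None, :] * p_yw[None, :, :]
    cmi = np.sum(p_xyw[mask] * np.log2(num[mask] / den[mask]))
    return h(p_w) + lam * max(cmi, 0.0)

# A few random restarts of a derivative-free local search.
best = min(
    (minimize(objective, rng.normal(size=2 * 2 * K), method="Nelder-Mead",
              options={"maxiter": 20000, "fatol": 1e-10})
     for _ in range(5)),
    key=lambda r: r.fun,
)
print("heuristic estimate of the common entropy G(X1; X2):", best.fun, "bits")
```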
“…In these settings, we are concerned with the statistics of how two (or more) random variables X_1, X_2, called predictors, jointly or separately specify/predict another random variable Y, called a target random variable. This focus on a target random variable is in contrast to Shannon's mutual information which quantifies statistical dependence between two random variables, and various notions of common information, e.g., [6][7][8].…”
Section: Introduction
mentioning
confidence: 99%
“…dit implements the vast majority of information measures defined in the literature, including entropies (Shannon (Cover and Thomas 2006), Rényi, Tsallis), multivariate mutual informations (co-information (Bell 2003) (McGill 1954), total correlation (Watanabe 1960), dual total correlation (Te Sun 1980) (Han 1975) (Abdallah and Plumbley 2012), CAEKL mutual information (Chan et al 2015)), common informations (Gács-Körner (Gács and Körner 1973) (Tyagi, Narayan, and Gupta 2011), Wyner (Wyner 1975) (W. Liu, Xu, and Chen 2010), exact (Kumar, Li, and El Gamal 2014), functional, minimal sufficient statistic), and channel capacity (Cover and Thomas 2006). It includes methods of studying joint distributions including information diagrams, connected informations (Schneidman et al 2003) (Amari 2001), marginal utility of information (Allen, Stacey, and Bar-Yam 2017), and the complexity profile (Y.
mentioning
confidence: 99%
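
Since the statement above points to the dit package as an implementation of the exact common information of (Kumar, Li, and El Gamal 2014), here is a minimal usage sketch. The module path and the function name exact_common_information are assumptions based on dit's documented set of common-information measures; check the installed version's documentation for the exact API.

```python
# Usage sketch for the dit package described above (call names are assumptions).
import dit
from dit.multivariate import exact_common_information

# A symmetric binary pair (assumed example distribution, not taken from the text above).
d = dit.Distribution(['00', '01', '10', '11'], [0.45, 0.05, 0.05, 0.45])

# Numerically estimates G(X; Y), the exact common information measure
# of (Kumar, Li, and El Gamal 2014) that the quoted statement refers to.
print(exact_common_information(d))
```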