2019 · DOI: 10.1109/TIT.2019.2894519
Minimum-Entropy Couplings and Their Applications

Abstract: Given two discrete random variables X and Y, with probability distributions p = (p_1, …, p_n) and q = (q_1, …, q_m), respectively, denote by C(p, q) the set of all couplings of p and q, that is, the set of all bivariate probability distributions that have p and q as marginals. In this paper, we study the problem of finding a joint probability distribution in C(p, q) of minimum entropy (equivalently, a coupling that maximizes the mutual information between X and Y), and we discuss several situations where…
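To make the optimization concrete, here is a minimal Python sketch of a simple greedy heuristic for constructing a low-entropy coupling: repeatedly match the largest remaining masses of the two marginals. This is an illustration of the problem, not necessarily the algorithm analyzed in the paper (exact minimization is known to be computationally hard), and the function names are ours:

```python
import numpy as np

def greedy_coupling(p, q):
    # Repeatedly place min(p[i], q[j]) at the cell pairing the largest
    # remaining masses of the two marginals; each step zeroes at least
    # one residual entry, so the loop terminates.
    p, q = np.asarray(p, float).copy(), np.asarray(q, float).copy()
    M = np.zeros((len(p), len(q)))
    while p.max() > 1e-12 and q.max() > 1e-12:
        i, j = p.argmax(), q.argmax()
        m = min(p[i], q[j])
        M[i, j] += m
        p[i] -= m
        q[j] -= m
    return M

def entropy_bits(M):
    # Shannon entropy of the joint distribution, in bits.
    v = M[M > 0]
    return float(-(v * np.log2(v)).sum())

# Example: rows sum to (0.5, 0.5) and columns to (0.6, 0.4).
M = greedy_coupling([0.5, 0.5], [0.6, 0.4])
print(M)                # [[0.5 0. ] [0.1 0.4]]
print(entropy_bits(M))  # ~1.36 bits
```

The greedy choice concentrates mass on few cells, which is exactly what low joint entropy (high mutual information) requires; greedy schemes of this type are known to come within a small additive gap of the optimum.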

Cited by 19 publications (30 citation statements) · References 56 publications
“…Then, for any Q_X that is not a point mass, (15) holds, i.e., the contraction coefficient for the χ²-divergence is the minimal contraction coefficient among all f-divergences with f satisfying the above conditions. Remark 1: A weaker version of (15) was presented in [18, Proposition II.6.15] in the general alphabet setting, and the result in (15) was obtained in [54, Theorem 3.3] for finite alphabets.…”
Section: A. Preliminaries and Related Work
confidence: 99%
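For orientation, the contraction coefficient discussed in this excerpt is standardly defined as below; the channel symbol W and the notation are ours, since the excerpt does not display the equation itself:

```latex
\eta_f(Q_X, W) \;=\; \sup_{P_X:\; 0 < D_f(P_X \| Q_X) < \infty}
    \frac{D_f(P_X W \,\|\, Q_X W)}{D_f(P_X \,\|\, Q_X)},
\qquad
\eta_{\chi^2}(Q_X, W) \;\le\; \eta_f(Q_X, W),
```

where the second inequality is the minimality statement the excerpt refers to as (15).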
“…Proof: See Appendix F. Tsallis entropy was introduced in [68] as a generalization of the Shannon entropy (similarly to the Rényi entropy [56]), and it was applied to statistical physics in [68].…”
Section: Remark
confidence: 99%
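For reference, the two generalizations named in this excerpt have the standard definitions below (our notation); both recover the Shannon entropy in the limits q → 1 and α → 1:

```latex
S_q(P) = \frac{1}{q-1}\Big(1 - \sum_i p_i^{\,q}\Big),
\qquad
H_\alpha(P) = \frac{1}{1-\alpha}\log \sum_i p_i^{\,\alpha}.
```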
“…where the vector t⃗ is called the aggregation of the s⃗'s [17]. Relation (18) represents the quantum prediction of relation (16), but without quantum memory. A violation of the relation s⃗^(+) ≺ t⃗ implies a violation of relation (17), which in turn implies quantum steering in the bipartite state [15]. The correlation matrix of a bipartite state AB is defined as … We consider a typical mixed and entangled state [19]…”
Section: Uncertainty Relation With Infinite Number Of Observables
confidence: 99%
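For readers unfamiliar with the ≺ symbol used above, majorization has the standard definition below (our addition, not part of the excerpt): x⃗ ≺ y⃗ holds when, with components sorted in decreasing order,

```latex
\sum_{i=1}^{k} x_i^{\downarrow} \;\le\; \sum_{i=1}^{k} y_i^{\downarrow}
\quad \text{for all } k,
\qquad
\sum_{i} x_i = \sum_{i} y_i.
```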
“…For example, Bell non-locality, quantum steering [16], and non-separability may be distinguished by the stronger correlations reached through optimal joint measurements in a bipartite system. In statistical inference, a typical problem is to identify the extremal joint distribution that maximizes the correlation for given marginals [17], which is closely related to inferring a maximal correlation for given joint measurements.…”
Section: Introduction
confidence: 99%
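As a concrete companion to the maximal-correlation remark, here is a short Python sketch of one standard way to compute the Hirschfeld-Gebelein-Rényi maximal correlation of a given joint distribution, via the singular values of the normalized joint matrix; the function name is ours, and the method assumes strictly positive marginals with at least two symbols per side:

```python
import numpy as np

def hgr_maximal_correlation(P):
    # P is a joint pmf matrix; assumes positive marginals and min(P.shape) >= 2.
    p = P.sum(axis=1)            # marginal of X (rows)
    q = P.sum(axis=0)            # marginal of Y (columns)
    B = P / np.sqrt(np.outer(p, q))
    s = np.linalg.svd(B, compute_uv=False)  # sorted in decreasing order
    return s[1]                  # top singular value is always 1

# Example: a correlated binary pair.
P = np.array([[0.4, 0.1],
              [0.1, 0.4]])
print(hgr_maximal_correlation(P))  # 0.6, the correlation coefficient here
```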
“…Shannon entropy [27] and minimum entropy [28] are effective tools for characterizing the statistical properties of random numbers. Shannon entropy quantitatively evaluates the effective information in a random sequence, as well as the independence and uncertainty of each bit.…”
Section: A. Select the Least Significant Bits
confidence: 99%
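Since this excerpt contrasts the two entropy measures, here is a minimal Python sketch of both, computed from an observed symbol sequence (illustrative code, not from the cited work):

```python
import math
from collections import Counter

def shannon_entropy(seq):
    # H = -sum f_i * log2(f_i) over empirical symbol frequencies f_i.
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

def min_entropy(seq):
    # H_min = -log2(max_i f_i): driven solely by the most probable symbol.
    n = len(seq)
    return -math.log2(max(Counter(seq).values()) / n)

bits = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1]   # six ones, four zeros
print(shannon_entropy(bits))  # ~0.971 bits per symbol
print(min_entropy(bits))      # ~0.737 bits per symbol
```

Min-entropy never exceeds Shannon entropy, which is why it is the conservative choice when certifying extractable randomness.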