2016
DOI: 10.1109/tit.2016.2549043
A Simple Proof for the Optimality of Randomized Posterior Matching

Abstract: Posterior matching (PM) is a sequential horizon-free feedback communication scheme introduced by the authors, who also provided a rather involved optimality proof showing it achieves capacity for a large class of memoryless channels. Naghshvar et al. considered a non-sequential variation of PM with a fixed number of messages and a random decision time, and gave a simpler proof establishing its optimality via a novel Shannon–Jensen divergence argument. Another simpler optimality proof was given by Li and El Gama…
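Since the abstract only names the scheme, a minimal simulation sketch may help illustrate the posterior matching principle the cited works refer to. This is a hypothetical illustration over a binary symmetric channel (the Horstein-style special case), not the paper's construction; the grid size, crossover probability, and variable names here are all invented for the sketch:

```python
import numpy as np

# Hedged sketch (not the paper's construction): posterior matching over a
# BSC(p) with noiseless feedback. The message is a point theta in [0, 1);
# each step the encoder sends the bit indicating whether theta lies above
# the decoder's current posterior median, so the channel input is matched
# to the capacity-achieving uniform distribution on {0, 1}.

rng = np.random.default_rng(0)
p = 0.1                          # BSC crossover probability (assumed)
N = 2000                         # grid size for the discretized posterior
grid = (np.arange(N) + 0.5) / N  # midpoints of N equal cells in [0, 1)
post = np.full(N, 1.0 / N)       # uniform prior over the message point

theta = 0.3141592                # true message point
for _ in range(200):
    cdf = np.cumsum(post)
    # Encoder (knows theta and, via feedback, the decoder's posterior):
    # send 1 iff theta sits in the upper posterior half.
    x = 1 if cdf[np.searchsorted(grid, theta)] > 0.5 else 0
    flip = int(rng.random() < p)
    y = x ^ flip                 # channel output after a possible bit flip
    # Decoder: Bayes update -- each candidate grid point would have produced
    # the input given by the same median rule, so its likelihood is known.
    x_grid = (cdf > 0.5).astype(int)
    lik = np.where(x_grid == y, 1.0 - p, p)
    post *= lik
    post /= post.sum()

estimate = grid[np.argmax(post)]  # posterior mode concentrates near theta
```

The randomized variants discussed in the citation statements below replace this deterministic quantile map with one perturbed by a dither shared between encoder and decoder, which is what enables the simpler optimality proofs.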

Cited by 17 publications (12 citation statements)
References 13 publications
“…Convex functionals on the probability simplex such as entropy, KL divergence, etc. have been shown to be useful for analyzing adaptive systems [2], [29], [30], [31] with the Bayes' rule dynamics on the posterior distribution. Particularly, the functional average log-likelihood [30] has shown its usefulness in analyzing the behaviour of the posterior in feedback coding systems [28], dynamic spectrum sensing [10], hypothesis testing [32], active learning [32], etc.…”
Section: Appendix B, Average Log-Likelihood and the Extrinsic Jensen–S
confidence: 99%
“…Li and El Gamal [25] considered a non-sequential, fixed-rate, fixed-block-length feedback coding scheme for discrete memoryless channels (DMCs) when W = (0, 1) and introduced a random dither known to the encoder and decoder to provide a simple proof of achieving capacity. Also, Shayevitz and Feder [26] examined a randomized variant of the original PM scheme with a random dither and were able to provide a much simpler proof of optimality over general memoryless channels when W ∈ (0, 1). That is, as compared to the proofs above which are restricted to non-sequential variants with a fixed number of messages and only apply to DMCs, Shayevitz and Feder [26] used a random dither to provide a proof of optimality of the sequential, horizon-free, randomized posterior matching scheme over general memoryless channels.…”
Section: A. Previous and Related Work
confidence: 99%
“…Also, Shayevitz and Feder [26] examined a randomized variant of the original PM scheme with a random dither and were able to provide a much simpler proof of optimality over general memoryless channels when W ∈ (0, 1). That is, as compared to the proofs above which are restricted to non-sequential variants with a fixed number of messages and only apply to DMCs, Shayevitz and Feder [26] used a random dither to provide a proof of optimality of the sequential, horizon-free, randomized posterior matching scheme over general memoryless channels. These settings where the encoder and decoder share a common source of randomness by way of a dither, however, may be undesirable in some situations (e.g.…”
Section: A. Previous and Related Work
confidence: 99%
“…This upper bound is trivially an upper bound also for the feedback model, since non causal knowledge might increase the capacity only. Then, we construct a simple coding scheme for the feedback setting, inspired by the posterior matching principle [15]- [18]. The coding scheme enables both the encoder and the decoder to systematically reduce the size of the set of possible messages to a single message, which is then declared by the decoder as the correct message.…”
Section: Introduction
confidence: 99%