2015 | DOI: 10.1109/tit.2015.2405537

Optimum Tradeoffs Between the Error Exponent and the Excess-Rate Exponent of Variable-Rate Slepian–Wolf Coding

Abstract: We analyze the optimal tradeoff between the error exponent and the excess-rate exponent for variable-rate Slepian-Wolf codes. In particular, we first derive upper (converse) bounds on the optimal error and excess-rate exponents, and then lower (achievable) bounds, via a simple class of variable-rate codes which assign the same rate to all source blocks of the same type class. Then, using the exponent bounds, we derive bounds on the optimal rate functions, namely, the minimal rate assigned to each type class, n…
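
To make the coding structure described in the abstract concrete, the following is a minimal Python sketch of a type-dependent rate assignment: every source block with the same empirical distribution (type) receives the same rate. This illustrates only the structural idea; the entropy-plus-margin rule and the margin parameter are assumptions for the sketch, not the paper's optimal rate function.

    import math
    from collections import Counter

    def empirical_entropy(block):
        """Per-symbol empirical entropy (in bits) of a source block;
        all blocks in the same type class yield the same value."""
        n = len(block)
        return -sum((c / n) * math.log2(c / n) for c in Counter(block).values())

    def assign_rate(block, margin=0.1):
        """Toy type-dependent rate function: the rate depends on the block
        only through its type. The margin is a hypothetical slack term,
        not the paper's optimal rate function."""
        return empirical_entropy(block) + margin

For example, assign_rate("aabb") and assign_rate("abab") return the same rate, since both blocks share the type (1/2, 1/2).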

Cited by 24 publications (24 citation statements)
References 34 publications
“…It is shown that variable-rate Slepian-Wolf coding can outperform fixed-rate Slepian-Wolf coding in terms of rate-error tradeoff. Finally, we would like to mention that Theorem 1 has been generalized by Weinberger and Merhav in their recent paper on the optimal tradeoff between the error exponent and the excess-rate exponent of variable-rate Slepian-Wolf coding [19]. …”
Section: Discussion
confidence: 99%
“…Obviously, if V′ is degenerate (e.g., equal to a fixed v ∈ V with probability one), then we are back to (11). For another extreme case, if the channel is clean, Q is uniform, and the channel alphabet is very large, then Î(X; Y′) = Ĥ(X) = log |X| is large as well, and then R ∧ Î(X′; Y) is dominated by R. In this case, we recover the SW random binning error exponent (see, e.g., [32] and references therein).…”
Section: Both Achieve E(R, Q)
confidence: 84%
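
For reference, the SW random-binning error exponent that the excerpt above invokes has a standard (Csiszár-style) form; the notation below is assumed here, with P_{XY} the true joint source distribution, the minimum taken over auxiliary joint distributions Q_{XY}, and [t]^+ = max(t, 0):

    \[
      E_{\mathrm{SW}}(R) \;=\; \min_{Q_{XY}}
        \Big\{ D\big(Q_{XY} \,\big\|\, P_{XY}\big)
        \;+\; \big[\, R - H_{Q}(X \mid Y) \,\big]^{+} \Big\}
    \]

This exponent is achievable by random binning at rate R combined with universal (minimum conditional entropy) decoding, which is why a large effective rate makes the exponent grow.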
“…At the same time, considering the analogy between channel coding and Slepian-Wolf (SW) source coding, it is not surprising that universal schemes for SW decoding, like the minimum entropy (ME) decoder, have also been derived, first, by Csiszár and Körner [4, Exercise 3.1.6], and later further developed by others in various directions, see, e.g., [1], [6], [12], [27], [28], [32].…”
Section: Introduction
confidence: 99%
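
As a companion to the excerpt above, here is a minimal Python sketch of the minimum entropy (ME) decoding rule it mentions, under the assumption of finite-alphabet sequences and a bin given as an explicit candidate list (the function names are illustrative, not taken from the cited papers): the decoder outputs the candidate source block with the smallest empirical conditional entropy given the side information.

    import math
    from collections import Counter

    def empirical_cond_entropy(x, y):
        """Empirical conditional entropy H(x|y) in bits, computed from the
        joint and marginal empirical counts of the paired sequences x, y."""
        n = len(x)
        joint = Counter(zip(x, y))
        marginal_y = Counter(y)
        return -sum((c / n) * math.log2(c / marginal_y[yi])
                    for (xi, yi), c in joint.items())

    def me_decode(bin_candidates, y):
        """ME decoder: among the source blocks mapped to the received bin,
        return the one minimizing the empirical conditional entropy
        given the side-information sequence y."""
        return min(bin_candidates, key=lambda x: empirical_cond_entropy(x, y))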
“…It appears then that correct estimation of S is essentially equivalent to correct estimation of X, as in ordinary Slepian-Wolf decoding [6] (see also [15] and references therein), where there is no secret key at all (or alternatively, R_s → ∞). Indeed, the Slepian-Wolf coding component of the joint source-channel coding system, analyzed in [10, Section IV] under the generalized likelihood decoder, contributes the very same error exponent as asserted in Theorem 1.…”
Section: False-reject Error Analysis
confidence: 99%