1981
DOI: 10.1109/tit.1981.1056281
Graph decomposition: A new key to coding theorems

Cited by 183 publications (293 citation statements). References 10 publications.
“…Equation (16) resembles the mismatched decoding error exponent of [20] for maximum-metric decoding, and (17) resembles the corresponding LM rate of Csiszár-Körner-Hui [20], [21]. More precisely, the latter are written as in (16)-(17) with the terms E[log q(X, Y)] − E_P[log q(X, Y)]…”
Section: Tightness Via Primal-domain Analysis (mentioning, confidence: 99%)
“…Combining these observations with (20), recalling that P_XY denotes the joint type of (x, y), and using the standard property of types [10, Ch. 2]…”
Section: Tightness Via Primal-domain Analysis (mentioning, confidence: 99%)
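The "standard property of types" invoked here is typically the bound that a type class of length-n sequences has at most 2^{nH(P)} members, where H(P) is the entropy of the type. A minimal numerical check for the binary alphabet (the variable names and the choice of n are illustrative, not from the cited paper):

```python
from math import comb, log2

def binary_entropy(p):
    """Shannon entropy (in bits) of a Bernoulli(p) distribution."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# The type class of length-n binary sequences with exactly k ones has
# size C(n, k); the type-counting bound says this never exceeds
# 2^{n * H(k/n)}.
n = 10
for k in range(n + 1):
    assert comb(n, k) <= 2 ** (n * binary_entropy(k / n))
```

For instance, with n = 10 and k = 3 the type class has C(10, 3) = 120 sequences, while the bound evaluates to roughly 450, so the inequality holds with room to spare.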
“…[8], which employed the method of types [10, 11, 22, 33]. In the present case, second-order (Markov) types rather than the usual types are used.…”
Section: B. The Methods Of Types (mentioning, confidence: 99%)
“…From each of the d^(n−k) cosets of L^⊥ in F^(2n), select a vector that minimizes H_c(M_x), i.e., a vector x satisfying H_c(M_x) ≤ H_c(M_y) for any y in the coset. This selection uses the idea of the minimum entropy decoder known in the classical information theory literature [33].…”
Section: Proof Of Theorem (mentioning, confidence: 99%)
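The minimum entropy decoder mentioned in this excerpt picks, from each coset, the representative whose empirical distribution has the smallest entropy. A brute-force sketch over a small binary linear code can illustrate the idea; the parity-check matrix and function names below are hypothetical, chosen for the example rather than taken from the cited paper:

```python
from itertools import product
from math import log2

def empirical_entropy(bits):
    """Empirical (per-symbol) entropy of a binary vector, in bits."""
    n = len(bits)
    p = sum(bits) / n
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def min_entropy_decode(H, syndrome):
    """Brute-force minimum entropy decoder: among all binary vectors x
    with H x^T = syndrome (i.e., one coset of the code defined by H),
    return one whose empirical entropy is minimal."""
    n = len(H[0])  # block length
    best, best_h = None, float("inf")
    for x in product([0, 1], repeat=n):
        s = tuple(sum(h * xi for h, xi in zip(row, x)) % 2 for row in H)
        if s == tuple(syndrome):
            h = empirical_entropy(x)
            if h < best_h:
                best, best_h = x, h
    return best

# Toy example: a length-3 code with two parity checks; decode the
# coset selected by syndrome (1, 0).
decoded = min_entropy_decode([[1, 0, 1], [0, 1, 1]], (1, 0))
```

This exhaustive search is exponential in the block length and serves only to make the selection rule concrete; the universality of the decoder lies in the fact that it depends on the coset structure and the empirical entropy alone, not on the channel statistics.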
“…The material in this paper was presented in part at the Information Theory and Applications conference, San Diego, and also at the Conference on Information Sciences and Systems, the Johns Hopkins University, Baltimore, 2009. with α-decoding. We present our results in the format given in [25]. This technique also gives upper bounds on the ensemble averages.…”
Section: Introduction (mentioning, confidence: 99%)