Proceedings. 1991 IEEE International Symposium on Information Theory
DOI: 10.1109/isit.1991.695195

Construction Of Asymptotically Good Low-rate Error-correcting Codes Through Pseudo-random Graphs

Abstract: A new technique, based on the pseudo-random properties of certain graphs, known as expanders, is used to obtain new simple explicit constructions of asymptotically good codes. In one of the constructions (construction C1 below), the expanders are used to enhance Justesen codes by replicating, shuffling and then regrouping the code coordinates. For a …
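
The abstract sketches construction C1 only in outline. As a rough illustration of the "replicating, shuffling and then regrouping" step it mentions, the Python sketch below routes the coordinates of a codeword along the edges of a d-regular bipartite graph and regroups the copies that arrive at each right-hand vertex into a single new symbol. A random graph (and the helper name regroup_with_bipartite_graph) stands in for the explicit expander of the paper, so this is a toy sketch under those assumptions, not the construction itself.

    import random

    def regroup_with_bipartite_graph(codeword, d, seed=0):
        """Toy version of the replicate/shuffle/regroup step.

        Each of the n coordinates of `codeword` is replicated d times, the
        copies are shuffled along the edges of a d-regular bipartite graph
        (a random one stands in for an explicit expander), and the d copies
        arriving at each right vertex are regrouped into one new symbol.
        """
        n = len(codeword)
        rng = random.Random(seed)
        # A d-regular bipartite graph on n + n vertices, described by d
        # perfect matchings, i.e. d random permutations of {0, ..., n-1}.
        matchings = [rng.sample(range(n), n) for _ in range(d)]
        new_symbols = []
        for right in range(n):
            # Collect the coordinates routed to this right vertex, one copy
            # along each of the d matchings.
            new_symbols.append(tuple(codeword[m[right]] for m in matchings))
        return new_symbols

    # Example: regroup an 8-symbol binary word with degree 3.  The new word
    # has 8 symbols over {0,1}^3; with a genuine expander this regrouping
    # amplifies the relative distance at the cost of a larger alphabet.
    print(regroup_with_bipartite_graph([0, 1, 1, 0, 1, 0, 0, 1], d=3))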

Cited by 60 publications (150 citation statements). References: 0 publications.

“…The area fell dormant for quite a few years, and coding theorists concentrated on algebraic techniques, until graph codes stormed again into consciousness in the 80's and 90's, with important works of Tanner [Tan81], Alon et al [ABN92], Sipser and Spielman [SS96], and more. This particular direction culminated in the paper of Spielman [Spi96], which achieved linear time encoding and decoding for asymptotically good codes.…”
Section: Error Correcting Codes (mentioning)
confidence: 99%
“…A very similar function is studied in [6], and most of the techniques applied there can be used in our case as well, as we briefly describe below. Besides these techniques, we need a new result, stated in proposition 4 below.…”
Section: The Smallest Possible ε-Bias Spaces (mentioning)
confidence: 99%
“…A lower bound for m(n, ε) can be derived (as is also mentioned in [6] for the case of fixed ε) from the McEliece-Rodemich-Rumsey-Welch bound (see [23], page 559). Although the proof of this bound, as described, e.g., in [23], is given only for the case of a fixed ε (when the length of the code tends to infinity), the same proof can be extended to a more general case by studying the asymptotic behavior of the smallest roots of the corresponding Krawtchouk polynomials.…”
Section: The Smallest Possible ε-Bias Spaces (mentioning)
confidence: 99%
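
The lower bound that this Krawtchouk-root argument is usually quoted as giving (my paraphrase of the standard statement, not a formula taken from the excerpt) is

    m(n, \varepsilon) \;=\; \Omega\!\left( \frac{n}{\varepsilon^{2} \log(1/\varepsilon)} \right),

where m(n, ε) is the smallest size of an ε-biased sample space over {0,1}^n. The derivation views such a space of size m as a binary code of length m all of whose nonzero codewords have weight in [(1 − ε)m/2, (1 + ε)m/2] and applies the McEliece-Rodemich-Rumsey-Welch linear-programming bound to that code.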
“…Owing to the way Hamming spheres pack in high-dimensional space, even for p much larger than (1 − R)/2 (and in fact for p ≈ 1 − R) there exist codes of rate R (over a larger alphabet Σ) for which the following holds: for most error patterns e that corrupt fewer than a fraction p of the symbols, when a codeword c gets distorted into z by the error pattern e, there will be no codeword besides c within Hamming distance pn of z. Thus, for typical noise patterns one can hope to correct many more errors than the above limit faced by the worst-case error pattern. However, since we assume a worst-case noise model, we do have to deal with bad received words such as r. List decoding provides an elegant formulation for dealing with worst-case errors without compromising the performance for typical noise patterns: the idea is that, in the worst case, the decoder may output multiple answers.…”
Section: List Decoding: Context and Motivation (mentioning)
confidence: 99%
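
The excerpt's remark that p can approach 1 − R is usually made precise by the list-decoding capacity theorem for large alphabets; the following statement is the standard one, supplied here for context rather than quoted from the cited text: for every ε > 0 there is an alphabet size q = q(ε) such that, for every rate R ∈ (0, 1), there exist codes of rate R over an alphabet of size q that are list-decodable from a fraction

    p \;=\; 1 - R - \varepsilon

of worst-case errors with lists of size O(1/ε), while no code of rate R can be list-decoded with polynomially bounded lists beyond a 1 − R fraction of errors.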