2003 IEEE 58th Vehicular Technology Conference. VTC 2003-Fall (IEEE Cat. No.03CH37484) 2003
DOI: 10.1109/vetecf.2003.1285102
Design of VLSI implementation-oriented LDPC codes

Abstract: Recently, low-density parity-check (LDPC) codes have attracted much attention because of their excellent error-correcting performance and highly parallelizable decoding scheme. However, effective VLSI implementation of an LDPC decoder remains a major challenge and is a crucial issue in determining how well the benefits of LDPC codes can be exploited in real applications. In this paper, following a joint code and decoder design philosophy, we propose a semi-random design scheme to construct the…
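The abstract is truncated, so the paper's own semi-random construction is not spelled out here. Purely as a generic illustration of what an "implementation-oriented" (structured) parity-check matrix can look like, the following hypothetical Python sketch assembles H from shifted-identity (circulant) blocks; the function names and block layout are assumptions for illustration and are not claimed to be the scheme proposed in the paper.

```python
import numpy as np

# Generic illustration only: build a small structured parity-check matrix H
# from circulant permutation blocks. A regular block structure keeps the
# decoder interconnect simple, which is the kind of property an
# implementation-oriented construction targets. This is NOT the paper's
# semi-random scheme, just a common structured-LDPC pattern.

def circulant(size, shift):
    """size x size identity matrix with its columns cyclically shifted."""
    return np.roll(np.eye(size, dtype=np.uint8), shift, axis=1)

def structured_H(block_rows, block_cols, block_size, shifts):
    """Assemble H from a block_rows x block_cols grid of circulant blocks."""
    rows = []
    for r in range(block_rows):
        rows.append(np.hstack([circulant(block_size, shifts[r][c])
                               for c in range(block_cols)]))
    return np.vstack(rows)

# Tiny example: a 2x4 grid of 5x5 circulants gives a 10x20 parity-check matrix
shifts = [[0, 1, 2, 3],
          [1, 3, 0, 2]]
H = structured_H(2, 4, 5, shifts)
print(H.shape)  # (10, 20)
```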

Cited by 35 publications (24 citation statements)
References 14 publications
“…Additional problems arise from the fact that LDPC codes of random structure also require large block sizes for good error correction performance, leading to prohibitively large chip sizes. Despite these bottlenecks, there were several attempts to come up with high throughput implementations [2] and implementation-oriented code constructions [50,51]. The drawbacks of most of these proposed techniques are that the code-design and VLSI implementation issues are considered in a somewhat decoupled manner, resulting in increased chip dimension and reduced data throughput.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
“…The drawbacks of most of these proposed techniques are that the code-design and VLSI implementation issues are considered in a somewhat decoupled manner, resulting in increased chip dimension and reduced data throughput. As an example, the standard-cell based approach adopted in [2] has a die area of 7.5×7 mm for a rate one-half code; the design strategy followed in that and other reports is based on choosing some known random or structured coding scheme, and developing a good parallel, serial, or partly-parallel implementation for it [2,25,50,51]. Some of these strategies rely on utilizing complicated optimization techniques that fail to be efficient for code lengths beyond several thousands.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
“…The staggered schedule and a posteriori probability decoding proposed in [9] also lead to early saturation of the decoding performance within a few iterations, which may be undesirable in some applications. Another approach, the pipeline-parallel implementation discussed in [2][10][11][12][13], is based on grouping bit nodes into n1 groups, where n1 is the parallelization factor. Each group is then assigned to a bit processor.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
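The excerpt above describes a partly-parallel organization in which the bit (variable) nodes are split into n1 groups, each served by one bit processor. The following minimal Python sketch shows only that grouping step; the round-robin assignment and the names used (partition_bit_nodes, n1) are illustrative assumptions, not the mapping used in any of the cited papers.

```python
# Minimal sketch: partition the variable (bit) nodes of an LDPC code into
# n1 groups, one group per bit processor, as in a partly-parallel decoder.
# The round-robin mapping below is an illustrative assumption only.

def partition_bit_nodes(num_bit_nodes, n1):
    """Return n1 groups; group g holds the indices of the bit nodes that
    would be handled by bit processor g."""
    groups = [[] for _ in range(n1)]
    for v in range(num_bit_nodes):
        groups[v % n1].append(v)          # round-robin assignment
    return groups

# Example: a length-1024 code with a parallelization factor of 8
groups = partition_bit_nodes(num_bit_nodes=1024, n1=8)
print(len(groups), len(groups[0]))        # -> 8 128
```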
“…However, an effective hardware realisation is still an open issue, since problems of gate complexity and especially routing congestion arise in fully-parallel solutions [3,5], while throughput and memory collisions are the bottleneck of mixed serial-parallel or semi-parallel solutions. In order to overcome these problems, the current trend is the construction of "implementation-oriented" codes [4,13,21,22]; more recently, growing interest has been devoted to the study of different schedules for the belief-propagation (BP) algorithm, so as to speed up decoder convergence and use a smaller number of iterations for increased throughput.…”
Section: Introduction (citation type: mentioning)
confidence: 99%