2018 IEEE 10th International Symposium on Turbo Codes & Iterative Information Processing (ISTC)
DOI: 10.1109/istc.2018.8625377
25 Years of Turbo Codes: From Mb/s to beyond 100 Gb/s

Abstract: In this paper, we demonstrate how the development of parallel hardware architectures for turbo decoding can be continued to achieve a throughput of more than 100 Gb/s. A new, fully pipelined architecture shows better error-correcting performance for high code rates than the fully parallel approaches known from the literature. This is demonstrated by comparing both architectures for an LTE turbo code with frame size K = 128 and a turbo code with frame size K = 128 that uses parity puncture constrained interleaving. To the best…


Cited by 25 publications (39 citation statements)
References 33 publications (36 reference statements)
“…Fully Parallel MAP (FPMAP): This decoder architecture is the extreme case of the PMAP with a sub-block size reduced to 1 trellis stage in combination with a shuffled decoding schedule [6], [25]. It has been shown to achieve a throughput of 15 Gb/s, an order of magnitude more than previously published PMAP implementations [5], but suffers from a reduced BER performance for high code rates [13].…”
Section: The Step To 100 Gb/s Via Iteration Unrolling
confidence: 99%
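The fully parallel MAP idea quoted above can be illustrated with a small software model. The following is a minimal, hedged sketch assuming a max-log-MAP metric update: each trellis stage forms its own sub-block of size 1, and all stages update concurrently from the alpha/beta values of the previous iteration (shuffled schedule). The array shapes, names, and the sequential Python loop are illustrative assumptions, not the cited hardware.

    import numpy as np

    def fpmap_iteration(alpha, beta, gamma):
        # One shuffled-schedule update over all K trellis stages "at once":
        # every stage reads the neighbour metrics produced in the PREVIOUS
        # iteration, so all K updates are independent and can run in parallel.
        # alpha, beta: (K+1, S) forward/backward state metrics
        # gamma:       (K, S, S) branch metrics, gamma[k][s, s'] for s -> s'
        K = gamma.shape[0]
        new_alpha, new_beta = alpha.copy(), beta.copy()
        for k in range(K):  # conceptually parallel; sequential here for clarity
            new_alpha[k + 1] = np.max(alpha[k][:, None] + gamma[k], axis=0)
            new_beta[k] = np.max(gamma[k] + beta[k + 1][None, :], axis=1)
        return new_alpha, new_beta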
“…In addition, iterative processing required for decoding LDPC and Turbo codes negatively impacts achievable throughput. Bridging the gap between the performance metrics of current state-of-the-art decoders and the requirements identified by the EPIC project can be achieved by fully unrolling the (iterative) decoding onto a single pipeline [9], [11]-[13].…”
Section: Introduction
confidence: 99%
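Iteration unrolling, as referenced above, replaces the time-shared iteration loop with one dedicated hardware stage per (half-)iteration, so a new frame can enter the decoder every clock cycle and the iteration count only adds latency. A minimal behavioural sketch, with a placeholder stage function rather than any of the decoders cited in [9], [11]-[13]:

    def unrolled_pipeline(frames, num_stages, stage_fn):
        # Model: one physical copy of stage_fn per iteration. Each cycle the
        # frame in stage i moves to stage i+1; a new frame enters stage 0.
        pipeline = [None] * num_stages
        decoded, cycles, stream = [], 0, iter(frames)
        while True:
            if pipeline[-1] is not None:            # frame leaving the last stage
                decoded.append(pipeline[-1])
            for i in range(num_stages - 1, 0, -1):  # shift, refining each frame
                pipeline[i] = stage_fn(pipeline[i - 1]) if pipeline[i - 1] is not None else None
            nxt = next(stream, None)
            pipeline[0] = stage_fn(nxt) if nxt is not None else None
            cycles += 1
            if nxt is None and all(s is None for s in pipeline):
                break
        return decoded, cycles  # ~1 decoded frame per cycle once the pipe is full

With num_stages equal to the number of decoding iterations, the steady-state output rate is one decoded frame per cycle, which is the principle behind the fully pipelined decoders discussed in this paper.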
“…The same simulation was exploited to estimate the relation Ω_D[k] for the matrix D. Such a trend is difficult to derive by simply exploiting Equations (11) and (16). Nevertheless, it is possible to realize an estimator Ω_D[k] through a machine learning approach.…”
Section: Impact Of the Parallelism Degree On The Data Rate: Case Study
confidence: 99%
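The cited work only states that an estimator for Ω_D[k] can be trained from the same simulation data; the concrete model is not given. A purely illustrative sketch using an ordinary least-squares polynomial fit (the feature choice, degree, and sample values are assumptions):

    import numpy as np

    def fit_omega_estimator(k_samples, omega_samples, degree=3):
        # Fit Omega_D(k) as a polynomial in the parallelism degree k from
        # simulated (k, Omega) pairs; returns a callable estimator.
        coeffs = np.polyfit(k_samples, omega_samples, degree)
        return np.poly1d(coeffs)

    # example use with hypothetical simulation samples:
    # omega_hat = fit_omega_estimator([1, 2, 4, 8, 16], [0.9, 1.7, 3.1, 5.4, 8.8])
    # print(omega_hat(32))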
“…In the last decades, CTCs have attracted interest thanks to their high efficiency, which allows transmission at a data rate close to the channel capacity limit [9,10]. In particular, to meet the demand for higher transmission data rates [11] in modern telecommunication systems, research has focused on increasing the efficiency of CTCs [1,12,13] or on improving their architecture at the implementation level [12,14,15].…”
Section: Introduction
confidence: 99%
“…unrolling and pipelining) at the algorithmic and the architectural level to parallelize the decoding at the MAP and Turbo code decoder level. For a detailed overview and discussion of such a highly parallel decoder we refer to [24]. Figure 1 shows the corresponding layout of the decoder, which achieves 102 Gbit/s information bit throughput for a block length of 128 bits on a low-Vt 28 nm FD-SOI technology under worst-case PVT conditions, running at 800 MHz and performing 4 decoding iterations.…”
Section: Turbo Code Decoding
confidence: 99%
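The quoted 102 Gbit/s figure follows directly from the fully pipelined operation: the decoder emits one block of K = 128 information bits per clock cycle at 800 MHz, with the 4 iterations contributing latency rather than reducing the rate. A quick check:

    K = 128                  # information block length in bits
    f_clk = 800e6            # clock frequency in Hz
    throughput = K * f_clk   # one block per cycle in a fully pipelined decoder
    print(throughput / 1e9)  # 102.4 (Gbit/s), matching the reported ~102 Gbit/s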