2014 14th International Symposium on Communications and Information Technologies (ISCIT)
DOI: 10.1109/iscit.2014.7011900
The acceleration of turbo decoder on the newest GPGPU of Kepler architecture

Cited by 6 publications (8 citation statements); References 8 publications.
“…Furthermore, our design overcomes a range of significant challenges related to topological mapping, data rearrangement and memory allocation. 3) We implement a PIVI Log-BCJR LTE turbo decoder on a GPGPU as a benchmarker, achieving a throughput of up to 8.2 Mbps, while facilitating the same BER as our SIMD FPTD having a window size of N/P = 32; this is comparable to the throughputs of 6.8 Mbps and 4 Mbps demonstrated by the pair of state-of-the-art benchmarkers of [13] and [17], respectively. 4) We show that when used for implementing the LTE turbo decoder, the proposed SIMD FPTD achieves a degree of parallelism that is between 4 and 24 times higher, representing a processing throughput improvement of between 2.3 and 9.2 times, as well as a latency reduction of between 2 and 8.2 times.…”
Section: Introduction (mentioning)
confidence: 89%
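The quoted improvement factors can be sanity-checked with a short arithmetic sketch. This is illustrative only: it assumes the 2.3–9.2x throughput gain is measured relative to the 8.2 Mbps GPGPU benchmarker, which the excerpt does not state explicitly.

```python
# Quoted figures from the citation statement above.
benchmark_mbps = 8.2            # PIVI Log-BCJR GPGPU benchmarker throughput
parallelism_gain = (4, 24)      # quoted degree-of-parallelism range
throughput_gain = (2.3, 9.2)    # quoted throughput-improvement range
latency_reduction = (2, 8.2)    # quoted latency-reduction range

# Implied SIMD FPTD throughput range, IF the gain is relative to the
# 8.2 Mbps benchmarker (an assumption, not stated in the excerpt).
fptd_mbps = tuple(round(benchmark_mbps * g, 1) for g in throughput_gain)
print(fptd_mbps)  # (18.9, 75.4)
```

Note that the parallelism gain (4–24x) exceeds the throughput gain (2.3–9.2x), which is the usual pattern: added parallelism rarely translates one-to-one into throughput because of memory-access and synchronization overheads.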
“…However, [12] and [42] demonstrated that turbo decoding is the most processor-intensive operation of basestation processing, requiring at least 64% of the processing resources used for receiving a message frame, where the remaining 36% includes the FFT, demapping, demodulation and other operations. Motivated by this, a number of previous research efforts [13], [14], [17], [18], [21], [38], [39], [43]–[45] have proposed GPGPU implementations dedicated to turbo decoding, as shown in Figure 3. Additionally, the authors of [27]–[30], [36], [40], [46], and [47] have proposed GPGPU implementations of LDPC decoders.…”
Section: GPU Computing and Implementations (mentioning)
confidence: 99%