2019
DOI: 10.1007/s12243-019-00727-5

Low-latency and high-throughput software turbo decoders on multi-core architectures

Cited by 6 publications (3 citation statements)
References 39 publications
“…Since the emulation of FF operations requires a large number of instructions [21] (thus slowing down the processor), some researchers decided to use extremely powerful GPPs and/or graphics processing units (GPUs). However, even this very expensive approach has not proven applicable [22][23][24][25] in future communication networks (Table 3). Unlike FF-based codes, IECCs are perfectly suited for implementation on 64-bit processors.…”
Section: Discussion (mentioning)
confidence: 99%
“…Each one of the twin decoders runs in the opposite direction to deliver the LLRs faster to the other decoders. Parallel decodable turbo codes are presented in [13]–[17]. The information bits here are divided into multiple groups, each of which is encoded separately and then multiplexed.…”
Section: Deploying Parallel Decoding (mentioning)
confidence: 99%
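
The group-wise encoding described in this excerpt can be sketched as follows. This is a minimal illustration, not the scheme of [13]–[17]: the per-group encoder is a stand-in (a rate-1 accumulator), and the multiplexer simply interleaves the per-group outputs round-robin to show the data flow that makes the groups decodable in parallel.

// parallel_groups.cpp -- build with: g++ -O2 -std=c++17 parallel_groups.cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Stand-in component encoder: a rate-1 accumulator (running XOR).
// A real parallel decodable turbo code would use an RSC encoder per group;
// this placeholder only shows how the groups are formed and recombined.
static std::vector<uint8_t> encode_group(const std::vector<uint8_t>& bits) {
    std::vector<uint8_t> out(bits.size());
    uint8_t state = 0;
    for (size_t i = 0; i < bits.size(); ++i) {
        state ^= bits[i];
        out[i] = state;
    }
    return out;
}

int main() {
    // Information bits are split into n_groups equal-sized groups,
    // each group is encoded independently, and the per-group outputs
    // are multiplexed into a single stream.
    const std::vector<uint8_t> info = {1,0,1,1,0,0,1,0,1,1,0,1};
    const size_t n_groups = 3;
    const size_t group_len = info.size() / n_groups;

    std::vector<std::vector<uint8_t>> coded;
    for (size_t g = 0; g < n_groups; ++g) {
        std::vector<uint8_t> group(info.begin() + g * group_len,
                                   info.begin() + (g + 1) * group_len);
        coded.push_back(encode_group(group));
    }

    // Multiplex: one symbol from each group in turn.
    std::vector<uint8_t> stream;
    for (size_t i = 0; i < group_len; ++i)
        for (size_t g = 0; g < n_groups; ++g)
            stream.push_back(coded[g][i]);

    for (uint8_t b : stream) printf("%u", (unsigned)b);
    printf("\n");
    return 0;
}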
“…Many SDR elementary blocks have been optimized for Intel® and ARM® CPUs. High throughput has been achieved on GPUs [19–23]; the latency, however, is still too high to meet real-time constraints and to compete with CPU implementations [22, 24–33]. This is mainly due to data transfers between the host (CPU) and the device (GPU), and to the nature of GPU designs, which are not optimized for latency efficiency.…”
Section: Introduction (mentioning)
confidence: 99%
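
The host-device transfer argument in this excerpt can be illustrated with a back-of-the-envelope latency budget. All numbers below (effective bandwidth, fixed per-transfer cost, assumed on-device decode time, block dimensions) are illustrative assumptions, not measurements from the cited works; the point is only that for a small code block the fixed transfer cost can take up a large share of the end-to-end latency.

// gpu_latency_budget.cpp -- build with: g++ -O2 -std=c++17 gpu_latency_budget.cpp
#include <cstdio>

int main() {
    // Illustrative assumptions (not measured values from the cited papers):
    const double bandwidth_GBps   = 12.0;   // effective host<->device bandwidth
    const double fixed_latency_us = 10.0;   // per-transfer setup cost (driver, DMA, launch)
    const double decode_time_us   = 30.0;   // assumed on-device decode time for one block

    // One LTE-like turbo block: 6144 info bits, rate 1/3, 8-bit LLRs in,
    // 6144 decoded bits out (packed into bytes here for simplicity).
    const double bytes_in  = 6144.0 * 3.0;  // input LLRs
    const double bytes_out = 6144.0 / 8.0;  // decoded bits

    auto transfer_us = [&](double bytes) {
        // GB/s -> bytes per microsecond, plus the fixed per-transfer cost.
        return fixed_latency_us + bytes / (bandwidth_GBps * 1e3);
    };

    const double t_in    = transfer_us(bytes_in);
    const double t_out   = transfer_us(bytes_out);
    const double t_total = t_in + decode_time_us + t_out;

    printf("H2D transfer : %6.2f us\n", t_in);
    printf("decode       : %6.2f us\n", decode_time_us);
    printf("D2H transfer : %6.2f us\n", t_out);
    printf("total        : %6.2f us (transfers = %.0f%% of total)\n",
           t_total, 100.0 * (t_in + t_out) / t_total);
    return 0;
}

Under these assumptions the two transfers account for roughly 40% of the per-block latency, which is the kind of overhead the excerpt attributes to host-device data movement.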