2019 IEEE 20th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)
DOI: 10.1109/spawc.2019.8815400

DEEPTURBO: Deep Turbo Decoder

Abstract: Present-day communication systems routinely use codes that approach the channel capacity when coupled with a computationally efficient decoder. However, the decoder is typically designed for the Gaussian noise channel and is known to be sub-optimal for non-Gaussian noise distributions. Deep learning methods offer a new approach for designing decoders that can be trained and tailored for arbitrary channel statistics. We focus on Turbo codes and propose DEEPTURBO, a novel deep learning based architecture for …
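The abstract's premise is that a learned decoder is trained on samples drawn from whatever channel it will actually face, rather than assuming Gaussian noise. A minimal sketch of the data-generation side of that training pipeline, in plain Python; the function names and the bursty-noise model are illustrative assumptions, not taken from the paper:

```python
import math
import random

def bpsk_awgn(bits, snr_db, rng):
    """BPSK-modulate bits (0 -> +1, 1 -> -1) and add Gaussian noise."""
    sigma = math.sqrt(0.5 / (10 ** (snr_db / 10)))
    return [(1.0 - 2.0 * b) + rng.gauss(0.0, sigma) for b in bits]

def bpsk_bursty(bits, snr_db, rng, p_burst=0.05, burst_scale=10.0):
    """Same link, but with occasional high-variance noise bursts,
    i.e. a deliberately non-Gaussian noise distribution."""
    sigma = math.sqrt(0.5 / (10 ** (snr_db / 10)))
    out = []
    for b in bits:
        s = sigma * (burst_scale if rng.random() < p_burst else 1.0)
        out.append((1.0 - 2.0 * b) + rng.gauss(0.0, s))
    return out

def make_training_pairs(n, k, channel, snr_db, seed=0):
    """Draw n supervised (noisy observation, message bits) pairs;
    a learned decoder trains on exactly this kind of data."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        bits = [rng.randint(0, 1) for _ in range(k)]
        pairs.append((channel(bits, snr_db, rng), bits))
    return pairs
```

Swapping `bpsk_awgn` for `bpsk_bursty` changes only the data generator, which is the sense in which a trained decoder can be "tailored for arbitrary channel statistics".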

Cited by 48 publications (42 citation statements)
References 30 publications
“…Utilizing concurrent NN designs in addition with learning the code properties, via the composed syndrome, achieved performance improvement. Further contributions to the field lie in [10] and [11], where neural decoders for convolutional codes were proposed together with befitting training methodologies. However, these methodologies impose substantial increase of complexity, at both training and decoding (inference).…”
Section: Introduction
“…However, these methodologies impose substantial increase of complexity, at both training and decoding (inference). Specifically, [10] explores a recurrent neural networks (RNN) architecture for decoding while [11] focuses on an unconstrained novel structure requiring no knowledge of the BCJR (Bahl-Cocke-Jelinek-Raviv) algorithm.…”
Section: Introduction
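The quoted passage contrasts an RNN-based decoder [10] with a BCJR-free design [11]; both effectively replace the component decoders inside the classical iterative turbo loop, in which two soft decoders exchange extrinsic information through an interleaver. A sketch of that loop in plain Python; the placeholder soft decoder (a toy repetition rule, not a real BCJR) and all names are illustrative assumptions:

```python
def toy_soft_dec(llr_sys, llr_par, llr_pri):
    """Placeholder component decoder: treats the parity stream as a
    repetition of the systematic bit. NOT a real BCJR -- a trained RNN
    (as in [10]) or an unconstrained network (as in [11]) would sit here."""
    return [s + p + a for s, p, a in zip(llr_sys, llr_par, llr_pri)]

def turbo_decode(llr_sys, llr_p1, llr_p2, interleave, soft_dec, n_iter=4):
    """Skeleton of iterative turbo decoding: two component decoders
    exchange extrinsic LLRs through an interleaver."""
    k = len(llr_sys)
    ext = [0.0] * k  # extrinsic information passed between the decoders
    for _ in range(n_iter):
        post1 = soft_dec(llr_sys, llr_p1, ext)
        ext1 = [post1[i] - llr_sys[i] - ext[i] for i in range(k)]
        # the second decoder works on the interleaved systematic stream
        sys_pi = [llr_sys[j] for j in interleave]
        pri_pi = [ext1[j] for j in interleave]
        post2 = soft_dec(sys_pi, llr_p2, pri_pi)
        ext2_pi = [post2[i] - sys_pi[i] - pri_pi[i] for i in range(k)]
        ext = [0.0] * k
        for i in range(k):  # de-interleave back to natural order
            ext[interleave[i]] = ext2_pi[i]
    post = soft_dec(llr_sys, llr_p1, ext)
    return [0 if llr > 0 else 1 for llr in post]  # LLR > 0 means bit 0
```

The complexity concern raised above enters precisely here: a neural `soft_dec` is invoked once per component decoder per iteration, so its cost is multiplied by `2 * n_iter`.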
“…The huge success of deep learning (DL) and neural networks (NNs), mainly in the fields of computer vision and speech processing, has recently triggered further exploration of DL for communications. The potential applications span from trainable channel decoders [1], [2], [3], [4], [5], NNbased multiple-input multiple-output (MIMO) detectors [6] and detectors for molecular channels [7] up to communication systems that inherently learn to communicate [8]. Most of these applications typically rely on block-based signal processing, which is an obvious consequence if NN structures are used that were derived from computer vision tasks.…”
Section: Introduction
“…The universal approximator theorem [18], and the fact that an explicit optimal algorithm exists, directly tells us that also an NN must exist (neglecting any complexity constraints) which comes arbitrarily close to the optimal performance. While other groups already showed the existence of such decoding NNs for short memory convolutional codes [2], [3], we want to further investigate to what extent NNs are capable of processing even more complex information sequences using the example of decoding convolutional codes up to memory ν. However, in practice the limiting factor is clearly the training complexity and, thus, the major challenge is to find a suitable training method for this task.…”
Section: Introduction
“…The decoder of any known channel code can be treated as a classifier on the noisy channel output. With the growing success of deep learning for various classification tasks, researchers first explored using DNNs to decode existing channel codes, such as linear [23], turbo [24], or polar [25] codes, which provided promising improvements in the error probability. This was later extended to the joint design of the encoder and the decoder using an autoencoder structure [26].…”
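The "decoder as classifier" view in the last quotation has a minimal concrete instance: brute-force maximum-likelihood decoding of a short block code, which classifies the noisy channel output to the nearest codeword. A sketch using the (7,4) Hamming code over a BPSK channel; the generator matrix and function names are illustrative, not drawn from the cited works:

```python
import itertools

# Generator matrix of the (7,4) Hamming code (systematic form)
G74 = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg, G):
    """Multiply a message row vector by generator matrix G over GF(2)."""
    n = len(G[0])
    return [sum(m * row[j] for m, row in zip(msg, G)) % 2 for j in range(n)]

def ml_decode(obs, G, k):
    """Decoding as classification: return the message whose BPSK-mapped
    codeword is nearest (squared Euclidean) to the noisy observation.
    Exhaustive over 2**k messages, so only viable for short codes."""
    best, best_d = None, float("inf")
    for msg in itertools.product([0, 1], repeat=k):
        tx = [1.0 - 2.0 * b for b in encode(msg, G)]  # 0 -> +1, 1 -> -1
        d = sum((o - t) ** 2 for o, t in zip(obs, tx))
        if d < best_d:
            best, best_d = list(msg), d
    return best
```

A DNN decoder such as those in [23]-[25] can be read as an approximation of this classifier that scales to block lengths where the 2^k enumeration is infeasible.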