2018 IEEE International Conference on Communications (ICC)
DOI: 10.1109/icc.2018.8422289
Performance Evaluation of Channel Decoding with Deep Neural Networks

Abstract: With the demand for high data rates and low latency in fifth generation (5G) systems, the deep neural network decoder (NND) has become a promising candidate due to its capability of one-shot decoding and parallel computing. In this paper, three types of NND, i.e., the multi-layer perceptron (MLP), the convolutional neural network (CNN) and the recurrent neural network (RNN), are proposed with the same parameter magnitude. The performance of these deep neural networks is evaluated through extensive simulation. Numerical results show that…
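The one-shot decoding the abstract refers to means the noisy channel output passes through the network a single time, with no iterative message passing. Below is a minimal sketch of an MLP-based NND in that spirit. It is not the paper's architecture: the toy (n, k) = (16, 8) code, the layer widths, and the placeholder `encode` function are all assumptions made for illustration.

```python
# Minimal sketch of an MLP neural network decoder (NND); toy sizes and a
# placeholder encoder, not the architecture evaluated in the paper.
import torch
import torch.nn as nn

n, k = 16, 8  # codeword length / information bits (hypothetical)

class MLPDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        # One-shot decoding: a single forward pass maps the noisy
        # codeword directly to per-bit probability estimates.
        self.net = nn.Sequential(
            nn.Linear(n, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, k), nn.Sigmoid(),
        )

    def forward(self, y):
        return self.net(y)

def encode(u):
    # Placeholder: a real system would apply the code's generator matrix.
    return u.repeat(1, n // k)  # trivial repetition "code" for illustration

# One training step on randomly generated noisy codewords.
decoder = MLPDecoder()
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
u = torch.randint(0, 2, (256, k)).float()  # random information bits
x = 1.0 - 2.0 * encode(u)                  # BPSK mapping: 0 -> +1, 1 -> -1
y = x + 0.5 * torch.randn_like(x)          # AWGN channel
opt.zero_grad()
loss = nn.functional.binary_cross_entropy(decoder(y), u)
loss.backward()
opt.step()
```

A CNN or RNN variant would replace `self.net` while keeping the same input/output interface, which is what lets the paper compare the three at the same parameter magnitude.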

Cited by 74 publications (62 citation statements). References 11 publications (13 reference statements).
“…We then investigate employing CNNs for this task. With ECC decoding, CNNs and MLP networks have a similar number of weights in order to achieve similar performance [31]. In the following we outline our findings that are unique to CS decoding.…”
Section: Results and Outlook (mentioning)
Confidence: 96%
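The quoted weight-count claim is easy to make concrete. The sketch below counts trainable parameters for a small MLP and a small 1-D CNN decoder head of roughly matched size; both architectures are invented for illustration and are not those of [31] or of the cited paper.

```python
# Hedged illustration: comparing trainable-weight counts of an MLP and a
# CNN decoder of similar parameter magnitude. Architectures are hypothetical.
import torch.nn as nn

def n_params(m):
    return sum(p.numel() for p in m.parameters() if p.requires_grad)

mlp = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 8))
cnn = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 16, 8),
)
print(f"MLP weights: {n_params(mlp)}, CNN weights: {n_params(cnn)}")
# -> 3208 vs. 2904: the same order of magnitude, as the quote describes.
```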
“…We use the notation h = [h_1, h_2, ..., h_L] to represent a network with L hidden layers, where h_l denotes the number of neurons in the fully connected layer l, or the number of kernels in the convolutional layer l. In recent works that apply DNNs to decode ECCs, the training set explodes rapidly as the source word length grows. For example, with a rate-0.5 (n = 1024, k = 512) ECC, one epoch consists of 2^512 possible codewords of length 1024, which results in very large complexity and makes it difficult to train and implement DNN-based decoding in practical systems [28], [29], [31], [32]. However, we note that in FL CS decoding this problem does not exist, since CS source words are typically considerably shorter, possibly only up to a few dozen symbols [1], [6]–[17].…”
Section: Results and Outlook (mentioning)
Confidence: 99%
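The blow-up the quote describes is easy to verify: a rate-0.5 code with k = 512 has 2^512 ≈ 1.3 × 10^154 distinct codewords, so exhaustive enumeration is impossible and training sets are sampled instead. Below is a minimal sketch of such sampling; the `encode` routine is a stand-in, not a real (1024, 512) encoder.

```python
# Sketch: why full codeword enumeration fails, and how a training batch is
# sampled instead. `encode` is a placeholder for a real (n=1024, k=512) code.
import numpy as np

k, n = 512, 1024
print(f"distinct codewords: 2**{k} = {2**k:.3e}")  # ~1.34e+154, cannot enumerate

def encode(u):
    # Placeholder encoder; a real code would use its generator matrix G.
    return np.concatenate([u, u], axis=-1)

def sample_batch(batch_size, sigma=0.7, rng=np.random.default_rng(0)):
    u = rng.integers(0, 2, size=(batch_size, k))   # random information bits
    x = 1.0 - 2.0 * encode(u)                      # BPSK symbols
    y = x + sigma * rng.standard_normal(x.shape)   # AWGN channel output
    return y, u                                    # network input / labels

y, u = sample_batch(64)
print(y.shape, u.shape)  # (64, 1024) (64, 512)
```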
“…However, the optimal flipping strategy (which bit to flip) is still an open problem due to the lack of a complete mathematical characterization. When an optimal solution is unavailable, deep learning algorithms are worth trying [9]–[12]. Recently, deep learning [13] has achieved tremendous progress in tasks such as Go [14], image classification [15], and machine translation [16].…”
Section: Introduction (mentioning)
Confidence: 99%
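As a hedged illustration of the "which bit to flip" question, the sketch below lets a small (untrained) network score each received position and flips the highest-scoring bit per iteration. The scoring model, the `syndrome_ok` stopping rule, and the toy length n = 16 are all invented for illustration; this is not a published algorithm from the cited works.

```python
# Sketch of learning-assisted bit flipping (illustrative only): a network
# outputs a per-position flip score conditioned on the channel values and
# the current hard decisions; we flip the argmax bit until a placeholder
# parity check passes or the flip budget runs out.
import torch
import torch.nn as nn

n = 16  # toy codeword length

flip_scorer = nn.Sequential(   # untrained; stands in for a trained model
    nn.Linear(2 * n, 64), nn.ReLU(),
    nn.Linear(64, n),          # one score per bit position
)

def syndrome_ok(bits):
    # Placeholder parity check; a real decoder would test H @ bits == 0.
    return int(bits.sum().item()) % 2 == 0

def bit_flip_decode(y, max_flips=4):
    bits = (y < 0).float()     # hard decisions from BPSK channel values
    for _ in range(max_flips):
        if syndrome_ok(bits):
            break
        with torch.no_grad():
            scores = flip_scorer(torch.cat([y, bits]))
        i = int(scores.argmax())     # pick the least reliable position
        bits[i] = 1.0 - bits[i]      # flip it and re-check
    return bits

y = torch.randn(n)                   # stand-in channel observation
print(bit_flip_decode(y))
```

Conditioning the scorer on the current hard decisions (not just on y) is what lets successive iterations pick different bits to flip.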