IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)
DOI: 10.1109/ijcnn.1999.836193

Convolutional decoding using recurrent neural networks

Abstract: In this paper we show how recurrent neural network (RNN) convolutional decoders can be derived. As an example, we derive the RNN decoder for a rate-1/2 code with constraint length S. The derived RNN decoder is tested in a Gaussian channel and the results are compared to those of the optimal Viterbi decoder. Some simulation results for other constraint-length codes are also given. The RNN decoder is also tested with a punctured code. It is seen that the RNN decoder can achieve the performance of the Viterbi decoder. Th…
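The Viterbi baseline the abstract compares against can be illustrated with a standard rate-1/2 convolutional code. This is a minimal sketch, not the paper's derivation; since the abstract's constraint length is garbled, it assumes the common constraint-length-3 code with generator polynomials (7, 5) octal:

```python
# Sketch of a rate-1/2 convolutional encoder and hard-decision Viterbi
# decoder, the baseline the RNN decoder is measured against.
# Assumption: constraint length 3, generators (7, 5) octal.

def encode(bits):
    """Rate-1/2 convolutional encoder with two memory bits."""
    m0 = m1 = 0  # shift-register contents (previous bit, bit before that)
    out = []
    for b in bits:
        out.append(b ^ m0 ^ m1)  # generator 111 (octal 7)
        out.append(b ^ m1)       # generator 101 (octal 5)
        m0, m1 = b, m0
    return out

def viterbi(received):
    """Hard-decision Viterbi decoding with a Hamming branch metric."""
    INF = float("inf")
    pm = [0, INF, INF, INF]          # path metric per state (m0, m1)
    paths = [[], [], [], []]         # survivor input sequence per state
    for t in range(len(received) // 2):
        r0, r1 = received[2 * t], received[2 * t + 1]
        new_pm, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if pm[s] == INF:
                continue
            m0, m1 = s >> 1, s & 1
            for b in (0, 1):
                branch = ((b ^ m0 ^ m1) != r0) + ((b ^ m1) != r1)
                ns = (b << 1) | m0   # next state shifts b into the register
                if pm[s] + branch < new_pm[ns]:
                    new_pm[ns] = pm[s] + branch
                    new_paths[ns] = paths[s] + [b]
        pm, paths = new_pm, new_paths
    return paths[min(range(4), key=pm.__getitem__)]
```

Because this code's free distance is 5, flipping any single coded bit of a zero-terminated message still decodes exactly.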

Cited by 20 publications (6 citation statements)
References 5 publications
“…On the other hand, it opened doors to different dimensions of collaboration Trosterud and Moshagen (2021), and even new ones beyond the original infrastructure itself, e.g., Snoek et al (2014), Simonenko, Alexandra (2020), Alnajja et al (2020). This also went beyond the principles foreseen by the originators Hämäläinen and Wiechetek (2020), Khanna et al (2021).…”
Section: Introduction (mentioning)
confidence: 99%
“…Accurate identification of sag sources and their fault phases is conducive to analyzing, compensating for, and suppressing local voltage sag. Convolutional Neural Networks (CNNs) [28][29][30][31][32] contain two modules, feature extraction and a classifier, in which the weights are updated continuously during the iterative training process. The feature-extraction network is mainly composed of multiple convolution layers and pooling layers.…”
Section: Introduction (mentioning)
confidence: 99%
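The two-module CNN structure this excerpt describes (convolution-and-pooling feature extraction followed by a classifier) can be sketched in NumPy. The shapes, filter counts, and random weights below are illustrative assumptions, not the cited paper's architecture:

```python
import numpy as np

def conv2d(x, kernels):
    """Valid 2-D convolution: x (H, W), kernels (K, kh, kw)."""
    K, kh, kw = kernels.shape
    H, W = x.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * kernels[k])
    return out

def maxpool(x, p=2):
    """Non-overlapping p x p max pooling over each feature map."""
    K, H, W = x.shape
    return x[:, :H // p * p, :W // p * p].reshape(K, H // p, p, W // p, p).max(axis=(2, 4))

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))        # toy single-channel input (assumed size)
kernels = rng.standard_normal((4, 3, 3))   # untrained feature-extraction filters

# Module 1: feature extraction (convolution -> ReLU -> pooling)
features = np.maximum(conv2d(image, kernels), 0.0)   # (4, 6, 6)
pooled = maxpool(features)                           # (4, 3, 3)

# Module 2: classifier (flatten -> linear layer -> softmax)
flat = pooled.reshape(-1)
W_cls = rng.standard_normal((3, flat.size))          # 3 hypothetical classes
logits = W_cls @ flat
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

In training, both the filters and the classifier weights would be updated iteratively by backpropagation, which is the weight-update process the excerpt refers to.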
“…As such, other candidate approaches must be studied. This work compares gradient adaptation, originally proposed in [22] for decoding and also apparent in [24], [25], as well as a modified form of belief propagation proposed in Section 4.…”
Section: Introduction (mentioning)
confidence: 99%
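A minimal sketch of what gradient adaptation for decoding can look like: relax the information bits to soft values in [-1, 1] (where XOR of bits becomes multiplication), then descend the squared error between the re-encoded sequence and the received soft values. The generators (7, 5) octal, the initialization, and all numeric settings are assumptions for illustration, not the formulation of [22]:

```python
import numpy as np

def soft_encode(u):
    """Rate-1/2 encoder, generators (7, 5) octal, in the +/-1 domain,
    where XOR of bits becomes multiplication of soft values."""
    m0 = np.concatenate(([1.0], u[:-1]))        # previous bit
    m1 = np.concatenate(([1.0, 1.0], u[:-2]))   # bit before that
    return u * m0 * m1, u * m1

def gradient_decode(r0, r1, steps=300, lr=0.02):
    """Descend sum((re-encoded - received)^2) over relaxed bits in [-1, 1]."""
    n = len(r0)
    u = np.empty(n)
    u[:-1] = r0[1:] * r1[1:]   # o0*o1 at t+1 ~ u_t: a cheap initial estimate
    u[-1] = r1[-1] * u[-3]     # from o1_t = u_t * u_{t-2}
    u = np.clip(u, -1.0, 1.0)
    for _ in range(steps):
        m0 = np.concatenate(([1.0], u[:-1]))
        m1 = np.concatenate(([1.0, 1.0], u[:-2]))
        e0 = u * m0 * m1 - r0
        e1 = u * m1 - r1
        # d(cost)/du_t: u_t appears in o0_t, o0_{t+1}, o0_{t+2}, o1_t, o1_{t+2}
        g = 2 * (e0 * m0 * m1 + e1 * m1)
        g[:-1] += 2 * e0[1:] * u[1:] * m1[1:]
        g[:-2] += 2 * (e0[2:] * u[2:] * m0[2:] + e1[2:] * u[2:])
        u = np.clip(u - lr * g, -1.0, 1.0)
    return (u < 0).astype(int)   # map back: +1 -> bit 0, -1 -> bit 1
```

The clipping keeps the relaxed estimates in the feasible box; the cost is non-convex, so unlike Viterbi this descent carries no optimality guarantee, which is one reason such candidate approaches need empirical comparison.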