2019 IEEE Global Communications Conference (GLOBECOM)
DOI: 10.1109/globecom38437.2019.9014004

Reinforcement Learning for Nested Polar Code Construction

Abstract: In this paper, we model nested polar code construction as a Markov decision process (MDP), and tackle it with advanced reinforcement learning (RL) techniques. First, an MDP environment with state, action, and reward is defined in the context of polar coding. Specifically, a state represents the construction of an (N, K) polar code, an action specifies its reduction to an (N, K − 1) subcode, and reward is the decoding performance. A neural network architecture consisting of both policy and value networks is pro…
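The MDP described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the `NestedPolarMDP` class name, the BEC design parameter, and the Bhattacharyya-sum reward proxy are all assumptions standing in for the paper's actual decoding-performance reward.

```python
import math

def bhattacharyya(n, design_epsilon=0.5):
    # Bhattacharyya parameters of the N = 2^n polarized channels under a
    # BEC(design_epsilon) design assumption (standard polarization recursion:
    # Z(W-) = 2Z - Z^2, Z(W+) = Z^2). Illustrative choice only.
    z = [design_epsilon]
    for _ in range(n):
        z = [2 * zi - zi * zi for zi in z] + [zi * zi for zi in z]
    return z

class NestedPolarMDP:
    """State: information set of an (N, K') polar code.
    Action: freeze one information bit, reducing (N, K') to (N, K' - 1).
    Reward: proxy for decoding performance (hypothetical stand-in)."""

    def __init__(self, N, K):
        self.z = bhattacharyya(int(math.log2(N)))
        self.state = frozenset(range(N))  # start from the (N, N) code
        self.K = K                        # target code dimension

    def step(self, action):
        # 'action' is the index of the information bit to freeze next
        assert action in self.state
        self.state = self.state - {action}
        # Reward proxy: negated union bound on block error probability
        # over the remaining information bits (the paper uses measured
        # decoding performance instead).
        reward = -sum(self.z[i] for i in self.state)
        done = len(self.state) == self.K
        return self.state, reward, done
```

A greedy agent that always freezes the least reliable remaining bit recovers the classic Bhattacharyya-based construction; the paper instead trains policy and value networks to choose the freezing order, which additionally yields a nested family of codes along the trajectory.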

Cited by 12 publications (12 citation statements). References 25 publications.
“…11 that at a similar FER performance, the decoding latency of the proposed decoder is significantly smaller than that of the RPA decoder. In addition, the proposed decoder requires a computational complexity several orders of magnitude lower than that of the RPA decoder for RM(3,8) and RM(4,8). As seen in Table III, the proposed decoder requires lower memory consumption than the RPA decoder for all the considered RM codes of length 256, with savings reaching up to 85% for RM(4,8).…”
Section: Comparison with RPA and SRPA Decoding
Confidence: 95%
“…The selection criterion in (8), which is used in [16], is an oversimplification that does not take into account the existing parity constraints in the code. In fact, it treats all the constituent RM codes λ as rate-1 codes.…”
Section: Improved Successive Factor-graph Permutations for SC-based D…
Confidence: 99%