2021
DOI: 10.1109/lcomm.2021.3114118

Learning to Construct Nested Polar Codes: An Attention-Based Set-to-Element Model

Cited by 11 publications (6 citation statements)
References 15 publications
“…Recent studies have shown that incorporating a permutation equivariance property into the neural network architecture can reduce the parameter space, avoid a large number of unnecessary permuted training samples, and most importantly make the neural network generalizable to different problem scales [60][61][62][63]. In particular, graph neural networks (GNNs) [60,61] and attention-based transformers [62,63] have been shown to possess the permutation equivariance property and have demonstrated superior performance, scalability, and generalization ability in wireless resource allocation problems. For instance, in the beamforming problem, a GNN trained with data generated in a setting of 50 users was shown to achieve near optimal testing performance under a much larger setting of 1000 users [60].…”
Section: Scalability of AI-Based Methods
confidence: 99%
“…Recent studies have shown that incorporating a permutation equivariance property into the neural network architecture can reduce the parameter space, avoid a large number of unnecessary permuted training samples, and most importantly make the neural network generalizable to different problem scales [54]-[57]. In particular, graph neural networks (GNNs) [54], [55] and attention-based transformers [56], [57] have been shown to possess the permutation equivariance property and have demonstrated superior performance, scalability, and generalization ability in a few wireless resource allocation problems. For instance, in the beamforming problem, a GNN trained with data generated in a setting of 50 users was shown to achieve near optimal testing performance under a much larger setting of 1000 users [54].…”
Section: Scalability of AI-Based Methods
confidence: 99%
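The two statements above refer to the same property, so a minimal sketch may help make it concrete. The Python snippet below (toy dimensions, random weights, and the simple single-head attention layer are assumptions for illustration, not taken from the cited works) shows why a shared-weight self-attention layer is permutation-equivariant over a set of per-user features: permuting the input rows permutes the output rows in exactly the same way, which is what allows such models to be applied across different problem sizes.

```python
# Minimal sketch (assumed toy dimensions and random weights) of the
# permutation-equivariance property of a shared-weight self-attention layer
# over a set of per-user features: f(P X) == P f(X) for any row permutation P.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                   # feature dimension per user
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """X: (n_users, d) set of per-user features; weights are shared across users."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    A = softmax(Q @ K.T / np.sqrt(d))   # (n_users, n_users) attention weights
    return A @ V                        # (n_users, d) per-user outputs

X = rng.standard_normal((5, d))         # a set of 5 users
perm = rng.permutation(5)

out = self_attention(X)
out_perm = self_attention(X[perm])      # same layer applied to the permuted set

# Equivariance check: permuting the inputs permutes the outputs identically.
print("permutation-equivariant:", np.allclose(out_perm, out[perm], atol=1e-10))
```

Because the same weight matrices act on every row and the attention weights depend only on pairwise interactions, the layer has no notion of user ordering or of a fixed set size, which is the mechanism behind the 50-user-to-1000-user generalization result quoted above.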
“…Huang et al [185] Construction of nested polar codes via advantage actor-critic algorithms. Li et al [189] Stochastic policy optimization by a customized network. Ankireddy et al [190] Nested polar code construction based on sequence modeling and Transformer.…”
Section: Nested Polar Codes
confidence: 99%
“…Meanwhile, the authors in [189] first transformed the problem of nested polar code construction into a stochastic policy optimization problem for sequential decision-making, and then represented the policy by a customized neural network. Furthermore, the authors proposed a gradient-based algorithm to minimize the average loss of the policy.…”
confidence: 99%
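To give the shape of the approach described above, here is a generic REINFORCE-style sketch of nested polar code construction framed as a sequential decision problem. It is not the customized network of [189] nor the advantage actor-critic method of [185]; the softmax-over-logits policy, the learning rate, the code parameters, and the reliability-sum placeholder reward are assumptions for illustration (a real reward would come from an error-rate evaluation of the constructed code). Because positions are added one at a time, every prefix of the selection order defines an information set, which is what makes the resulting construction nested.

```python
# Generic REINFORCE-style sketch (assumed policy, reward, and hyperparameters) of
# nested polar code construction as a sequential decision problem: a policy adds
# one information position per step, and the selection order defines a nested
# family of information sets.
import numpy as np

rng = np.random.default_rng(0)
N, K = 32, 16                    # code length and number of information positions
theta = np.zeros(N)              # policy logits, one per bit position
lr = 0.5

# Placeholder per-position reliabilities; stands in for a decoding-performance
# estimate of the constructed code in a real construction pipeline.
reliab = np.sort(rng.random(N))

def sample_episode(theta):
    """Sequentially pick K distinct positions; return picks and log-prob gradients."""
    remaining = np.ones(N, dtype=bool)
    picks, grads = [], []
    for _ in range(K):
        logits = np.where(remaining, theta, -np.inf)   # mask already-chosen positions
        p = np.exp(logits - logits.max())
        p /= p.sum()
        i = rng.choice(N, p=p)
        g = -p                    # d/d_theta log p_i = onehot(i) - p
        g[i] += 1.0
        grads.append(g)
        picks.append(i)
        remaining[i] = False
    return picks, grads

baseline = 0.0
for step in range(200):
    picks, grads = sample_episode(theta)
    reward = reliab[picks].sum()                       # placeholder reward
    baseline = 0.9 * baseline + 0.1 * reward           # running baseline
    theta += lr * (reward - baseline) * np.sum(grads, axis=0)

print("selected info positions:", sorted(int(i) for i in sample_episode(theta)[0]))
```

The running baseline subtracted from the reward is a standard variance-reduction device for policy-gradient updates; the gradient-based algorithm in [189] is described only at the level quoted above, so this sketch should be read as the generic mechanics rather than that specific method.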