2021
DOI: 10.1109/lwc.2021.3092947
Deep Learning Based CSI Compression and Quantization With High Compression Ratios in FDD Massive MIMO Systems

Cited by 14 publications (6 citation statements) · References 18 publications
“…The NMSE of ENet [41] is −11.20 dB in indoor scenarios when η equals 1/32, which has been surpassed by our model. In outdoor scenarios, a neural network named CsiNet+DNN proposed in [42] achieves remarkable results at the expense of extremely high computational overhead. Compared to TransNet+ and other attention models, the deployment resource overhead of CsiNet+DNN may be significantly larger.…”
Section: Discussion (confidence: 99%)
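The comparisons above are all stated in terms of NMSE in dB (e.g. −11.20 dB at compression ratio η = 1/32). For reference, a minimal NumPy sketch of the standard NMSE metric as commonly used in CSI-feedback papers (illustrative only, not code from any of the cited works):

```python
import numpy as np

def nmse_db(h_true, h_est):
    """Normalized MSE between true and reconstructed CSI, in dB.

    NMSE = E[||H - H_hat||^2] / E[||H||^2], reported as 10*log10(NMSE).
    More negative values mean better reconstruction.
    """
    err_energy = np.sum(np.abs(h_true - h_est) ** 2)
    sig_energy = np.sum(np.abs(h_true) ** 2)
    return 10.0 * np.log10(err_energy / sig_energy)

# Sanity check: scaling the channel by 0.9 leaves 10% relative error,
# so the error energy is 1% of the signal energy, i.e. -20 dB NMSE.
h = np.ones(64)
print(nmse_db(h, 0.9 * h))  # -20.0
```

Under this convention, an NMSE of −11.20 dB means the reconstruction error carries about 7.6% of the channel's energy.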
“…For example, NMSE is reduced from −17.36 dB to −20.80 dB when CR is 1/4. Based on [52], CsiNet+ does not work well when CR is low, such as 1/32 for outdoor channels. Therefore, two FC layers are embedded after the second convolutional layer in the RefineNet block, and more RefineNet blocks are employed.…”
Section: A Novel NN Architecture Design (confidence: 99%)
“…CsiNet+DNN [52]: embeds two FC layers after the second convolutional layer in the RefineNet block and employs more RefineNet blocks. MRNet [59]: sets the convolutional kernel sizes of the encoder and the decoder to 5 × 5 and 8 × 8. CS-ReNet [60]: stacks seven convolutional layers with 3 × 3 filters at the decoder. BCsiNet [61]: stacks three 3 × 3 convolutional layers at the encoder to improve CSI feature quality.…”
Section: Increasing Receptive Field (confidence: 99%)
“…It also designs an offset network to compensate for the quantization errors. [27] incorporates a µ-law quantizer into its system and improves feedback performance through end-to-end training. To address the non-differentiability introduced by the quantization module, [28] designs a differentiable function to approximate its gradients.…”
Section: Related Work (confidence: 99%)
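µ-law companding, as adopted in [27], is a standard non-uniform quantization technique: compress the signal with a logarithmic map so that small-magnitude CSI coefficients get finer quantization steps, quantize uniformly, then expand. A minimal NumPy sketch of the generic scheme (the exact quantizer design in [27] may differ):

```python
import numpy as np

MU = 255.0  # companding strength; 255 is the classic telephony value

def mu_law_compress(x, mu=MU):
    """Map x in [-1, 1] through the µ-law companding curve."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=MU):
    """Inverse of mu_law_compress."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

def quantize_uniform(y, bits=4):
    """Uniform quantization of y in [-1, 1] to 2**bits levels."""
    levels = 2 ** bits
    codes = np.clip(np.round((y + 1) / 2 * (levels - 1)), 0, levels - 1)
    return codes / (levels - 1) * 2 - 1

# Compress -> uniformly quantize -> expand: small coefficients keep
# finer resolution than they would under direct uniform quantization.
x = np.array([-0.8, -0.05, 0.0, 0.02, 0.3])
x_hat = mu_law_expand(quantize_uniform(mu_law_compress(x), bits=4))
```

The non-differentiable rounding step inside `quantize_uniform` is exactly what motivates the gradient-approximation approach of [28]: during training the round is typically replaced (or bypassed) by a smooth surrogate so gradients can flow end to end.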