2018
DOI: 10.1109/lcomm.2018.2792019
Deep Learning-Aided SCMA

Cited by 178 publications (133 citation statements)
References: 11 publications
“…We also note that DNNs proposed for communication systems can have many more parameters than the 9536 we use in this network. For example, [47] proposes an MLP network with 4 hidden layers of 512 neurons each, which results in at least 512 × 512 × 3 = 786432 parameters. Thus a larger CNN could be trained to decode fixed-length capacity-approaching CS codes with longer codewords.…”
Section: Results and Outlook
confidence: 99%
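The parameter lower bound quoted in the excerpt can be reproduced with a short sketch. The helper below is hypothetical: the excerpt specifies only the hidden-layer width and depth, so input/output layers and biases are omitted, matching the "at least" bound.

```python
# Hypothetical helper: weight count of the hidden-to-hidden matrices of
# an MLP. For 4 hidden layers of width 512 there are 3 such 512x512
# matrices, giving the "at least 512 x 512 x 3 = 786432" lower bound
# from the excerpt (input/output layers and biases are unspecified
# there and therefore omitted).
def hidden_weight_count(width: int, hidden_layers: int) -> int:
    return width * width * (hidden_layers - 1)

print(hidden_weight_count(512, 4))  # -> 786432
```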
“…In addition, many specific aspects of communication systems are being studied from a machine learning perspective, including modulation recognition [38], PAPR reduction [39], wireless interference identification [40], and so on. However, regarding DL for SCMA, to the authors' knowledge very few related works have been conducted or published besides [41], which considers only autoencoders without extending to DCMA.…”
Section: Related Work
confidence: 99%
“…For this specific simulation, there are 512 hidden nodes in each hidden layer of D-SCMA+DNN, while DL-SCMA has only 48. The network in [41] that generates the learned codebooks used by D-SCMA+MPA and D-SCMA+DNN has 6 hidden layers of 32 nodes each, while the corresponding encoder part of AE-SCMA has 4 layers, also with 32 nodes each. Therefore, in terms of computation, D-SCMA+DNN and its codebook-generating DNN are more complex than the proposed DL-SCMA and AE-SCMA, respectively.…”
Section: Computational Complexity
confidence: 99%
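The complexity comparison in the excerpt can be illustrated with a rough weight-count sketch. This is a hypothetical calculation: the decoders' input/output dimensions are not given, so only the quoted hidden-layer widths (512 vs. 48) and the quoted codebook networks (6×32 vs. 4×32) are compared, with biases omitted.

```python
# Hypothetical sketch comparing dense-layer weight counts implied by the
# excerpt. Only hidden-to-hidden connections are counted; input/output
# dimensions and biases are unspecified in the excerpt and omitted here.
def dense_weights(widths):
    """Weights of a chain of fully connected layers with the given widths."""
    return sum(a * b for a, b in zip(widths, widths[1:]))

cb_d_scma = dense_weights([32] * 6)   # codebook DNN of [41]: 5 * 32*32 = 5120
cb_ae_scma = dense_weights([32] * 4)  # AE-SCMA encoder:      3 * 32*32 = 3072
per_layer_d_scma_dnn = 512 * 512      # one hidden-to-hidden matrix, D-SCMA+DNN
per_layer_dl_scma = 48 * 48           # one hidden-to-hidden matrix, DL-SCMA
print(cb_d_scma, cb_ae_scma, per_layer_d_scma_dnn, per_layer_dl_scma)
```

Even per layer, the 512-wide network carries over 100× the weights of the 48-wide one, which is the excerpt's point about relative complexity.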
“…The MMSE of the optimal one-step channel predictor for spatial multiplexing transmission at the k-th time index is given in (17). As seen, the Wiener filter predictor in (14) requires a priori knowledge of the channel statistics through the matrices A_{k−1} and R_{d_{k−1}}.…”
Section: A. DD-CE for Spatial Multiplexing Using Wiener Predictor
confidence: 99%
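The one-step Wiener prediction idea in the excerpt can be sketched in its simplest form. This is an illustrative assumption, not the paper's equations (14)/(17): a scalar AR(1) fading model stands in for the vector case, where the matrices A_{k−1} and R_{d_{k−1}} would play the role of the coefficient and correlation terms below.

```python
# Hypothetical sketch of one-step linear MMSE (Wiener) prediction for a
# scalar AR(1) fading channel h[k] = a*h[k-1] + w[k]. The matrices
# A_{k-1} and R_{d_{k-1}} in the excerpt generalize this to the vector
# case; the scalar model and coefficient `a` are illustrative only.
def wiener_one_step(h_prev: complex, a: float) -> complex:
    """Optimal one-step predictor E[h_k | h_{k-1}] under the AR(1) model."""
    return a * h_prev

def prediction_mmse(a: float, sigma_h2: float) -> float:
    """Prediction MMSE: the error is w[k], so MMSE = (1 - a^2) * sigma_h2."""
    return (1.0 - a * a) * sigma_h2

print(wiener_one_step(1 + 1j, 0.9))  # predicted next channel sample
print(prediction_mmse(0.9, 1.0))     # ~0.19 for a = 0.9, unit channel power
```

The MMSE shrinks as the temporal correlation `a` approaches 1, i.e. slowly varying channels are easier to predict, which is consistent with the excerpt's point that the predictor needs the channel statistics a priori.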