2022 Thirteenth International Conference on Ubiquitous and Future Networks (ICUFN)
DOI: 10.1109/icufn55119.2022.9829643
Deep Learning and Power Allocation Analysis in NOMA System

Cited by 5 publications (6 citation statements) · References 14 publications
“…It is supposed that the implemented channel model is stationary throughout one frame transmission of data and pilot signals and the channel parameters are varying from one frame to another. The basic architecture of the channel prediction scenario based on the developed Q-learning procedure employed in our examined network is illustrated in Figure 3, which primarily consists of several stages [17, 43].…”
Section: Q-learning Network Architecture
confidence: 99%
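The frame-by-frame setup quoted above (channel fixed within a frame, changing between frames) is the usual setting for a tabular Q-learning update. The cited work's exact states, actions, and rewards are not reproduced in this report, so the quantisation and reward below are illustrative assumptions; only the update rule itself is the standard Q-learning recursion.

```python
import numpy as np

# Minimal tabular Q-learning sketch for frame-wise channel prediction.
# States/actions/rewards are assumed for illustration, not taken from
# the cited paper.
rng = np.random.default_rng(0)
n_states, n_actions = 8, 4        # quantised channel states / predictions
alpha, gamma = 0.1, 0.9           # learning rate, discount factor
Q = np.zeros((n_states, n_actions))

def q_update(s, a, r, s_next):
    """Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# Train over synthetic frame transitions: reward 1 when the predicted
# action matches the (assumed) true quantised channel index.
for _ in range(500):
    s = rng.integers(n_states)
    a = rng.integers(n_actions)
    r = 1.0 if a == s % n_actions else 0.0
    s_next = rng.integers(n_states)
    q_update(s, a, r, s_next)
```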
“…Essentially, path loss and the distance between every user terminal and the BS need to be specified in the dataset to facilitate the random generation of the channel weights for every user device in the examined MISO-NOMA network [43]. In the beginning, pilot symbols are created, transmitted, and identified at the BS and at the receiver of every device.…”
Section: Q-learning Network Architecture
confidence: 99%
“…In our proposed DQN approach, the length of each training sequence is specified as L, which is the dimension of the input layer. In our scenario, we choose the input layer of the DNN to include 128 neurons, and the input states to the input layer will be shifted to the subsequent layer after updating the weight parameters [13,22].…”
confidence: 99%
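The input layer described in the quote (training sequences of length L feeding a 128-neuron layer whose output is shifted to the subsequent layer) can be sketched as a dense affine map. The sequence length, initialisation, and ReLU activation here are assumptions; only the 128-neuron width comes from the quote.

```python
import numpy as np

# Sketch of a 128-neuron input layer for a DQN. L (sequence length),
# He initialisation, and ReLU are illustrative assumptions.
rng = np.random.default_rng(2)
L = 64                 # length of each training sequence (assumed)
n_hidden = 128         # neurons in the input layer (from the quote)

W = rng.standard_normal((n_hidden, L)) * np.sqrt(2.0 / L)
b = np.zeros(n_hidden)

def input_layer(x):
    """Affine map followed by ReLU; the output feeds the next layer."""
    return np.maximum(0.0, W @ x + b)

state = rng.standard_normal(L)     # one input state vector
hidden = input_layer(state)        # shifted to the subsequent layer
```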
“…The design of a single LSTM cell is basically shown in Figure 3 [13,22]. Each LSTM cell has three inputs and two output parameters.…”
confidence: 99%
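The single LSTM cell described above, with three inputs (the current input x, the previous hidden state h, the previous cell state c) and two outputs (the new h and c), follows the standard gate equations. The dimensions and random weights below are illustrative; the gate structure itself is the textbook LSTM formulation, not the cited paper's trained model.

```python
import numpy as np

# One LSTM cell: inputs (x_t, h_prev, c_prev), outputs (h, c).
# Sizes and weights are illustrative assumptions.
rng = np.random.default_rng(3)
n_in, n_hid = 8, 16

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight matrix per gate, acting on the concatenated [x, h_prev].
Wf, Wi, Wo, Wc = (rng.standard_normal((n_hid, n_in + n_hid)) * 0.1
                  for _ in range(4))
bf = bi = bo = bc = np.zeros(n_hid)

def lstm_cell(x, h_prev, c_prev):
    z = np.concatenate([x, h_prev])
    f = sigmoid(Wf @ z + bf)                    # forget gate
    i = sigmoid(Wi @ z + bi)                    # input gate
    o = sigmoid(Wo @ z + bo)                    # output gate
    c = f * c_prev + i * np.tanh(Wc @ z + bc)   # new cell state
    h = o * np.tanh(c)                          # new hidden state
    return h, c

h, c = lstm_cell(rng.standard_normal(n_in), np.zeros(n_hid), np.zeros(n_hid))
```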