2022
DOI: 10.1109/twc.2022.3181747

Mobile Reconfigurable Intelligent Surfaces for NOMA Networks: Federated Learning Approaches


Cited by 21 publications (9 citation statements)
References 42 publications
“…2) FL for RIS-aided Wireless Communications: On the other hand, FL can also be used to optimize the performance of RIS-aided wireless communications. For example, FL is used in [157] and [158] for average rate maximization, in which local models are deployed in user devices and the global model is aggregated by edge servers. In [157], federated neural networks consider sampled channel vectors as input to predict achievable rates.…”
Section: Reinforcement Learning-based Optimization (mentioning, confidence: 99%)
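The statement above describes a federated-averaging pattern: local rate-prediction models trained on user devices, with a global model aggregated at an edge server. Below is a minimal sketch of that pattern, assuming a simple linear rate predictor and plain weight averaging; the model, data shapes, and hyperparameters are illustrative and not taken from [157].

```python
import numpy as np

# Minimal FedAvg-style sketch: each user device fits a local rate predictor on
# its own sampled channel vectors, and an edge server averages the local
# weights into a global model. All names, shapes, and hyperparameters are
# illustrative assumptions.
rng = np.random.default_rng(0)
num_users, dim = 4, 8                      # user devices, channel-vector length

def local_update(w, H, rates, lr=0.01, epochs=5):
    """A few gradient steps of least-squares regression on one device's data."""
    for _ in range(epochs):
        grad = H.T @ (H @ w - rates) / len(rates)
        w = w - lr * grad
    return w

# Synthetic per-device datasets: sampled channel vectors -> achievable rates.
local_data = [(rng.standard_normal((32, dim)), rng.standard_normal(32))
              for _ in range(num_users)]

w_global = np.zeros(dim)
for _ in range(10):                        # communication rounds
    local_weights = [local_update(w_global.copy(), H, r) for H, r in local_data]
    w_global = np.mean(local_weights, axis=0)   # edge server aggregates (FedAvg)
```

Only the model weights cross the air interface in each round; the raw channel samples stay on the devices, which is the point of using FL in this setting.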
“…In [157], federated neural networks consider sampled channel vectors as input to predict achievable rates. FL and DDPG are combined in [158], and the local neural networks used in DDPG will be aggregated and updated.…”
Section: Reinforcement Learning-based Optimization (mentioning, confidence: 99%)
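This statement combines FL with DDPG: each agent trains its own DDPG networks locally, and the local networks are periodically aggregated and redistributed. A rough sketch of that aggregation step follows, assuming each agent's actor is a tiny two-layer network stored as NumPy arrays; the network sizes and averaging rule are assumptions for illustration, not the exact scheme of [158].

```python
import numpy as np

rng = np.random.default_rng(1)
obs_dim, act_dim, hidden = 6, 2, 16        # illustrative dimensions

def init_actor():
    """Tiny two-layer deterministic actor, parameters kept in a dict."""
    return {"W1": rng.standard_normal((obs_dim, hidden)) * 0.1,
            "W2": rng.standard_normal((hidden, act_dim)) * 0.1}

def act(params, obs):
    """Deterministic action in [-1, 1], as in DDPG's deterministic policy."""
    return np.tanh(np.tanh(obs @ params["W1"]) @ params["W2"])

def aggregate(actors):
    """Federated step: average the local actors' parameters layer by layer."""
    return {k: np.mean([a[k] for a in actors], axis=0) for k in actors[0]}

agents = [init_actor() for _ in range(3)]
# ... each agent would run its own DDPG updates on local experience here ...
global_actor = aggregate(agents)                       # periodic aggregation
agents = [{k: v.copy() for k, v in global_actor.items()} for _ in agents]
action = act(global_actor, np.zeros(obs_dim))          # policy after aggregation
```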
“…Moreover, paper [13] proposed a discard policy to exclude the untrustworthy users from the batch of users taking part in the learning process, whenever an incorrect model was injected by users. FL was also applied in [14] to support the training needed to handle mobile reconfigurable intelligent surfaces (RISs) and the users' power allocation, aiming at improving channel quality, spectrum efficiency and users' data rate. Non-orthogonal multiple access techniques were exploited, and a deep-reinforcement learning strategy was applied to optimize the performance.…”
Section: Related Work (mentioning, confidence: 99%)
“…In this context, they proposed a framework in which the IRS is deployed at an AP and NOMA is used at the AP to serve multiple robots. In a mobile IRS scenario, the authors in [16] proposed a model in which IRSs are mounted on intelligent robots for flexible deployment. A deep deterministic policy gradient (DDPG) framework is used to optimize power allocation and the phase shift.…”
Section: A. Related Work (mentioning, confidence: 99%)
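The joint variables in this statement are the NOMA power allocation and the RIS phase shifts. A small illustrative mapping from a raw DDPG action vector in [-1, 1] to feasible decision variables might look as follows; the dimensions and constraints are assumptions, not the formulation used in [16].

```python
import numpy as np

num_users, num_elements = 3, 16            # illustrative NOMA users / RIS elements

def decode_action(raw):
    """Map a raw action in [-1, 1]^(num_users + num_elements) to feasible values:
    power fractions summing to one, and phase shifts in [0, 2*pi]."""
    p_raw, phi_raw = raw[:num_users], raw[num_users:]
    powers = np.exp(p_raw) / np.exp(p_raw).sum()   # softmax -> NOMA power split
    phases = (phi_raw + 1.0) * np.pi               # rescale [-1, 1] -> [0, 2*pi]
    return powers, np.exp(1j * phases)             # unit-modulus reflection coeffs

raw_action = np.random.uniform(-1, 1, num_users + num_elements)
powers, theta = decode_action(raw_action)
# 'theta' would fill the diagonal RIS reflection matrix diag(theta) in the
# channel model, while 'powers' splits the transmit power across NOMA users.
```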