2017
DOI: 10.3390/fi9040072

Throughput-Aware Cooperative Reinforcement Learning for Adaptive Resource Allocation in Device-to-Device Communication

Abstract: Device-to-device (D2D) communication is an essential feature for future cellular networks, as it increases spectrum efficiency by reusing resources between cellular and D2D users. However, overall system performance can degrade without proper control over the interference produced by D2D users. Efficient resource allocation among D2D user equipments (UEs) in a cellular network is desirable, since it helps to provide a suitable interference management system. In this paper, we propose a coop…

Cited by 26 publications (23 citation statements)
References 29 publications
“…The convergence property of the actor-critic method is much better than that of critic-only methods. Critic-only methods such as Q-learning and SARSA utilize a state-action value function; these methods have no explicit function for estimating the policy.…”
Section: Introduction
confidence: 99%
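To illustrate the distinction this statement draws, here is a minimal, hypothetical sketch of a critic-only tabular Q-learning update in Python (the state, action, and reward values are illustrative, not taken from the paper). The critic-only method updates a state-action value table directly; the policy exists only implicitly as greedy action selection over Q, whereas an actor-critic method would maintain an explicit, separately parameterized policy.

```python
def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One critic-only (Q-learning) step: move Q(s, a) toward the
    bootstrapped target r + gamma * max_a' Q(s', a').
    There is no explicit policy function; acting greedily over Q
    is the (implicit) policy."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    target = r + gamma * best_next
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))
    return Q

# Illustrative usage: one update from an empty table.
Q = {}
q_learning_update(Q, s=0, a="tx", r=1.0, s_next=1, actions=["tx", "idle"])
```

After this single update, Q[(0, "tx")] moves from 0.0 toward the target 1.0 by the step size alpha, i.e. to 0.1; repeated interaction drives the table toward the optimal value function, but the policy is never represented explicitly.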
“…The result is increased performance in terms of data rate, robustness, delay, security, and energy consumption. Cooperative network coding was then extended to device-to-device (D2D) communication, which further increases the overall capacity and throughput of the network [2][3][4]. The use of D2D and cellular networks in network coding decreases the packet recovery time and meets the performance of cellular networks via network coding [5].…”
Section: Introduction
confidence: 99%
“…D2D communication is a promising component for 5G because of its two innate advantages, i.e., traffic off-loading and radio resource reusing capabilities.7,12 Other domains of reinforcement learning for D2D communication include deep learning for data transmission,13 adaptive power allocation,14 and access control and management.8 With 5G and D2D, the focus is on indoor communications, particularly their coverage and connectivity issues.…”
Section: Introduction
confidence: 99%