2021
DOI: 10.48550/arxiv.2103.01801
Preprint
Deep Reinforcement Learning for URLLC data management on top of scheduled eMBB traffic

Cited by 2 publications (3 citation statements)
References 0 publications
“…A DRL-based network slicing algorithm is proposed in [111] to split the total available resources between URLLC and eMBB traffic. The algorithm first allocates the full time-frequency resource to eMBB, then dynamically trains an optimal policy that serves URLLC by puncturing the eMBB codewords.…”
Section: ML Based Joint Scheduling
confidence: 99%
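The puncturing mechanism described above can be illustrated with a toy resource grid: eMBB initially owns every resource block, and arriving URLLC demands overwrite some of them mini-slot by mini-slot. This is a minimal sketch, not the paper's method; the random choice of punctured blocks is a hypothetical stand-in for the learned DRL policy, and the names `num_slots`, `num_freq`, and `urllc_arrivals` are illustrative.

```python
import random

def puncture_schedule(num_slots, num_freq, urllc_arrivals, seed=0):
    """Toy puncturing: eMBB starts with every (slot, freq) block;
    each slot's URLLC demand punctures (overwrites) that many blocks.
    Random positions stand in for a trained policy's choices."""
    rng = random.Random(seed)
    # Full time-frequency grid is initially allocated to eMBB.
    grid = [["eMBB"] * num_freq for _ in range(num_slots)]
    for slot, demand in enumerate(urllc_arrivals):
        # Puncture `demand` distinct frequency blocks in this slot.
        for f in rng.sample(range(num_freq), min(demand, num_freq)):
            grid[slot][f] = "URLLC"
    return grid

grid = puncture_schedule(num_slots=3, num_freq=4, urllc_arrivals=[1, 0, 2])
punctured = sum(row.count("URLLC") for row in grid)
print(punctured)  # 3: one block in slot 0, none in slot 1, two in slot 2
```

A learned policy would replace the random sampling, choosing which eMBB codewords to puncture so as to meet URLLC deadlines while minimizing the damage to eMBB throughput.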
“…The simulation results for different slices show that this method achieves a higher transmission rate and a SIR close to that of the optimal method (exhaustive search over suitable sub-channels), compared with static allocation. In [89], a slice-based virtual resource scheduling scheme based on PPO is proposed to allocate sub-carriers to uRLLC and eMBB slices, with the aim of minimizing the latency of uRLLC packets. To maximize the rate and respond to uRLLC slices in real time, the eMBB packets are punctured and the uRLLC packets are embedded in them.…”
Section: Resource Sharing
confidence: 99%
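The trade-off described in [89], maximizing eMBB rate while keeping uRLLC latency low, is typically encoded as a scalar reward for the PPO agent. Below is a minimal sketch of one common shape for such a reward; the weight `alpha` and the latency budget are hypothetical parameters, not taken from the paper.

```python
def slicing_reward(embb_rate, urllc_latency, latency_budget=1.0, alpha=0.5):
    """Hypothetical scalar reward for a slicing agent: reward eMBB
    throughput, penalize uRLLC latency only beyond its budget.
    `alpha` trades the two objectives (0 = latency only, 1 = rate only)."""
    latency_penalty = max(0.0, urllc_latency - latency_budget)
    return alpha * embb_rate - (1.0 - alpha) * latency_penalty

# Within budget, only throughput counts; over budget, the penalty kicks in.
print(slicing_reward(embb_rate=2.0, urllc_latency=0.5))  # 1.0
print(slicing_reward(embb_rate=2.0, urllc_latency=3.0))  # 0.0
```

A PPO agent trained against a reward of this shape learns allocation actions (here, which sub-carriers to puncture for uRLLC) that keep latency under budget at the smallest possible cost in eMBB rate.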
“…In recent years, the use of distributed methods (such as transfer learning and federated learning) in ML has been considered due to the common challenges in centralized methods ([35], [79]–[100], [103]–[105], [107]–[118], [123], [124], [126]–[130], [135], [136]), such as centralization, security, privacy, and time and computational complexity. In distributed methods, a ML mode...…”
Section: Distributed Learning
confidence: 99%