2021
DOI: 10.48550/arxiv.2103.02649
Preprint
Self-play Learning Strategies for Resource Assignment in Open-RAN Networks

Cited by 2 publications (2 citation statements)
References 23 publications
“…We conduct an additional set of experiments comparing against (Wang et al., 2021), which adopts the RR algorithm, where L_π denotes the length of the minimum bounding box that encompasses all packed boxes at the end of an episode; as in Sec. 3.4, we train our agent attend2pack with the utility reward r_u but compare against the results of (Wang et al., 2021) using their measure r_L. The packing visualizations and the final testing results are shown in Fig.…”
Section: Appendix: Additional Ablation Study On The Sequence Policy
confidence: 99%
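The L_π measure quoted above is just the extent of the smallest axis-aligned bounding box that covers every packed box at episode end. A minimal sketch of that metric, assuming (hypothetically) that each placement is recorded as a (start, length) pair along the container's packing axis:

```python
# Hypothetical sketch: computing the bounding-box length L_pi used as a
# packing metric. Assumes each packed box is recorded as (start, length)
# along the packing axis; this representation is an assumption, not the
# data structure used by either cited paper.

def bounding_box_length(packed_boxes):
    """Length of the minimum bounding box enclosing all packed boxes.

    packed_boxes: list of (start, length) placements along the packing axis.
    """
    if not packed_boxes:
        return 0
    # Rightmost edge of any box minus the leftmost starting position.
    right = max(start + length for start, length in packed_boxes)
    left = min(start for start, _ in packed_boxes)
    return right - left

# Example: three boxes placed at positions 0, 2, and 5 with lengths 2, 3, 1
print(bounding_box_length([(0, 2), (2, 3), (5, 1)]))  # 6
```

A smaller L_π means a tighter packing, which is why the cited comparison reports it alongside the utility reward r_u.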
“…[28] presents a scheme for efficient energy use through RL-based dynamic function splitting. [29] demonstrates an application of RL for computational resource allocation between RUs and DUs, which has the potential to reduce power consumption significantly.…”
Section: O-Cloud
confidence: 99%