2022
DOI: 10.3390/electronics11040599

Joint Optimization of Energy Efficiency and User Outage Using Multi-Agent Reinforcement Learning in Ultra-Dense Small Cell Networks

Abstract: With the substantial increase in spatio-temporal mobile traffic, reducing network-level energy consumption while satisfying various quality-of-service (QoS) requirements has become one of the most important challenges facing sixth-generation (6G) wireless networks. We herein propose a novel multi-agent distributed Q-learning based outage-aware cell breathing (MAQ-OCB) framework to jointly optimize energy efficiency (EE) and user outage. Through extensive simulations, we demonstrate that the proposed MAQ-OCB …
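To make the abstract's idea concrete, below is a minimal, hedged sketch of multi-agent distributed Q-learning for outage-aware cell breathing: each small-cell base station acts as an independent learner choosing a transmit-power level, and a shared reward trades network energy efficiency off against user outage. The state/action encoding, reward weights, hyperparameters, and the `ToySmallCellEnv` stand-in environment are illustrative assumptions, not the paper's exact MAQ-OCB formulation.

```python
import numpy as np

# Sketch of multi-agent distributed Q-learning for outage-aware cell breathing.
# Each small-cell base station (agent) picks a transmit-power level; a shared
# network-level reward trades energy efficiency (EE) off against user outage.
# All numeric choices and the toy environment below are illustrative assumptions.

N_AGENTS = 4          # number of small cells
N_STATES = 8          # e.g., quantized local load / outage level (assumed encoding)
N_ACTIONS = 3         # power levels: {off, low, high} (cell breathing)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
OUTAGE_WEIGHT = 5.0   # assumed penalty weight on user outage

# One independent Q-table per agent (distributed learning, no central critic).
Q = np.zeros((N_AGENTS, N_STATES, N_ACTIONS))

def choose_action(agent, state, rng):
    """Epsilon-greedy action selection per agent."""
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[agent, state]))

def reward(energy_efficiency, outage_ratio):
    """Joint objective: reward EE, penalize user outage."""
    return energy_efficiency - OUTAGE_WEIGHT * outage_ratio

class ToySmallCellEnv:
    """Toy stand-in for a small-cell network simulator (illustrative only)."""
    def __init__(self, n_agents=N_AGENTS, seed=0):
        self.n = n_agents
        self.rng = np.random.default_rng(seed)

    def reset(self):
        return list(self.rng.integers(N_STATES, size=self.n))

    def step(self, actions):
        # Higher power -> better coverage (less outage) but more energy use.
        power = np.array(actions) / (N_ACTIONS - 1)          # mean power in [0, 1]
        outage = float(np.clip(0.6 - power.mean()
                               + 0.1 * self.rng.standard_normal(), 0.0, 1.0))
        ee = float(1.0 / (0.2 + power.mean()))               # crude EE proxy
        next_states = list(self.rng.integers(N_STATES, size=self.n))
        return next_states, ee, outage

def train(env, episodes=200, steps=50, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(episodes):
        states = env.reset()                                 # one local state per agent
        for _ in range(steps):
            actions = [choose_action(i, s, rng) for i, s in enumerate(states)]
            next_states, ee, outage = env.step(actions)
            r = reward(ee, outage)                           # shared network-level reward
            for i in range(N_AGENTS):                        # independent Q-updates
                best_next = np.max(Q[i, next_states[i]])
                td = r + GAMMA * best_next - Q[i, states[i], actions[i]]
                Q[i, states[i], actions[i]] += ALPHA * td
            states = next_states

if __name__ == "__main__":
    train(ToySmallCellEnv())
```

Each agent keeps its own Q-table and learns from the shared network-level reward, the usual independent-learner simplification of multi-agent Q-learning; the paper's MAQ-OCB framework should be consulted for the actual state, action, and reward design.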

Cited by 6 publications (4 citation statements); references 17 publications.
“…Recently, RL models have made a lot of progress and have become an interesting area for reducing energy use. In Chen et al (2022), Hoffmann et al (2021), Hsieh et al (2021), Kim et al (2022), Lee et al (2020), and Sun et al (2020), deep learning-based aiding algorithms were introduced to control features in an end-to-end fashion. Data-driven schemes were applied to various optimization problems in wireless communication (Li et al, 2018; Sheng et al, 2021; Sun et al, 2020; Xiong et al, 2020).…”
Section: Knowledge Work
confidence: 99%
“…Multi-Agent Reinforcement Learning. [5]: EE and SE for UAV-CRN; delay-aware approximate optimization strategy.…”
Section: Research Topic on UAV-Based Sensing, Modeling Approach, Refere…
confidence: 99%
“…As a result, various applications, such as military, telecommunication, surveillance, medical-supply delivery, and rescue operations, regard UAVs as ideal enablers of seamless communications [4]. UAVs are also expected to be at the center of 6G wireless networks and have gained significant attention from the research community and industry for supporting emerging mobile services [5].…”
Section: Introduction
confidence: 99%
“…A double-DQN-based solution for resource allocation was given in [19] to maximize energy efficiency. Furthermore, the authors in [20] proposed a multi-agent distributed Q-learning-based algorithm to jointly optimize energy efficiency and user outage. In [21], long short-term memory (LSTM) was exploited to control the BS on/off decision.…”
Section: Related Work
confidence: 99%