2010
DOI: 10.1002/bltj.20463

A fuzzy reinforcement learning approach for self-optimization of coverage in LTE networks

Abstract: …many of the on-site operations. Additionally, about 24 percent of a typical wireless operator's revenue is normally spent on network OPEX, including training, support, power, transmission, and site rental [14]. Self-optimization functions can reduce the workload for site surveys and analysis of network performance and thus reduce OPEX. Moreover, energy-saving functions enabled by self-optimization capabilities reduce the cost of power consumed by the equipment. Additionally, the improved quality of user experienc…

Cited by 66 publications (39 citation statements)
References 11 publications
“…The algorithm stores past successful optimization instances that improved performance in memory and applies these instances directly to new situations. In [19][20][21], a fuzzy Q-learning algorithm was used to learn the optimal antenna tilt control policy, taking the current antenna configuration and its corresponding performance as continuous inputs and producing an optimized antenna configuration as output. However, the impact of such an adjustment on neighboring cells was neglected.…”
Section: Related Work
confidence: 99%
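For orientation, here is a minimal, illustrative fuzzy Q-learning loop for downtilt control in the spirit of what [19][20][21] describe. It is a sketch under stated assumptions, not the papers' actual design: the fuzzy sets, the action set, the toy KPI model, and every name and parameter value below are invented for illustration.

```python
"""Minimal fuzzy Q-learning sketch for antenna downtilt control.

Illustrative assumptions throughout: triangular fuzzy sets, a three-step
tilt action set, a fake coverage KPI, and hand-picked hyperparameters.
"""
import numpy as np

rng = np.random.default_rng(0)

# Triangular fuzzy sets over one normalized state input in [0, 1]
# (here: the current tilt scaled into [0, 1]).
CENTERS = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
WIDTH = 0.25

ACTIONS = np.array([-1.0, 0.0, 1.0])      # candidate tilt changes (degrees)
N_RULES, N_ACTIONS = len(CENTERS), len(ACTIONS)

q = np.zeros((N_RULES, N_ACTIONS))        # one q-value per (rule, action)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1         # learning rate, discount, exploration

def firing_strengths(x):
    """Normalized rule activations for state x."""
    mu = np.clip(1.0 - np.abs(x - CENTERS) / WIDTH, 0.0, 1.0)
    return mu / mu.sum()

def select_action(x):
    """Per-rule epsilon-greedy choice, blended into one continuous action."""
    phi = firing_strengths(x)
    explore = rng.random(N_RULES) < EPS
    chosen = np.where(explore,
                      rng.integers(N_ACTIONS, size=N_RULES),
                      q.argmax(axis=1))
    delta_tilt = float(phi @ ACTIONS[chosen])           # defuzzified action
    q_now = float(phi @ q[np.arange(N_RULES), chosen])  # Q(x) for chosen actions
    return chosen, phi, delta_tilt, q_now

def update(chosen, phi, q_now, reward, x_next):
    """Fuzzy Q-learning temporal-difference update."""
    v_next = float(firing_strengths(x_next) @ q.max(axis=1))  # fuzzy V(x')
    td_error = reward + GAMMA * v_next - q_now
    q[np.arange(N_RULES), chosen] += ALPHA * td_error * phi

def kpi(tilt):
    """Fake coverage KPI peaking at an assumed optimal tilt of 4 degrees."""
    return max(0.0, 1.0 - abs(tilt - 4.0) / 8.0)

# Toy closed loop: reward is the KPI improvement caused by the tilt change.
tilt = 0.0
for _ in range(2000):
    chosen, phi, delta, q_now = select_action(tilt / 8.0)
    new_tilt = min(8.0, max(0.0, tilt + delta))
    update(chosen, phi, q_now, reward=kpi(new_tilt) - kpi(tilt),
           x_next=new_tilt / 8.0)
    tilt = new_tilt
```

A real coverage optimizer would replace the toy KPI with measured coverage and quality indicators and, per the criticism in the quote above, would need to fold neighboring cells' performance into the state or reward.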
“…see [15,16], to introduce autonomic capabilities in network control systems; it is a combination of fuzzy logic [17] with Q-learning (a type of Reinforcement Learning (RL)) [18] that aims to combine the robustness of a rule-based fuzzy system with the adaptability and learning capabilities of Q-learning. In this section we highlight the main concepts and benefits of this approach and its applicability in the context of PCN-based AC.…”
Section: Fuzzy Q-Learning PCN-based Admission Control
confidence: 99%
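That combination is commonly formalized as follows (a generic fuzzy Q-learning formulation; the notation here is assumed, not drawn from the cited paper). Each rule i fires with normalized strength α_i(x) in the continuous state x, holds a q-value q_{i,a} per local action a, and selects a local action a_i:

```latex
% Generic fuzzy Q-learning (notation assumed, not from the cited paper):
% \alpha_i(x): normalized firing strength of rule i in state x
% q_{i,a}:     q-value of local action a in rule i
% a_i:         local action selected by rule i
Q(x) = \sum_{i=1}^{N} \alpha_i(x)\, q_{i,a_i},
\qquad
V(x) = \sum_{i=1}^{N} \alpha_i(x)\, \max_{a} q_{i,a},
\qquad
\Delta q_{i,a_i} = \eta\, \alpha_i(x) \bigl[ r + \gamma V(x') - Q(x) \bigr].
```

The rule base supplies smooth generalization over the continuous state, while each rule's q-values are learned exactly as in tabular Q-learning, which is what gives the hybrid both the robustness and the adaptability the quote mentions.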
“…The objective of an agent is to find, by trying out the possible actions in a given state, the action that maximizes its long-term reward. The detailed mathematical foundation and formulation of Q-learning can be found in [15,16,18] and is therefore not repeated here due to space limitations; the core Q-learning algorithm [18] is provided, though, to highlight the parameters involved in it and, consequently, in our evaluation in the following section.…”
Section: Fuzzy Q-Learning Concepts
confidence: 99%
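For reference, the core update referred to is the standard one-step tabular Q-learning rule (textbook form, not reproduced from the citing paper):

```latex
% One-step tabular Q-learning (Watkins):
% \alpha is the learning rate, \gamma the discount factor, and
% r_{t+1} the reward received after taking action a_t in state s_t.
Q(s_t, a_t) \leftarrow Q(s_t, a_t)
  + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]
```

The parameters it exposes, the learning rate α, the discount factor γ, and whatever exploration schedule drives action selection (typically ε-greedy), are exactly the knobs such an evaluation varies.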
“…Several optimization algorithms based on antenna tilt modifications can be found in the literature. Many of them are used as cell outage compensation (COC) algorithms [16][17][18]. These works present different methodologies.…”
Section: Introduction
confidence: 99%