2017
DOI: 10.48550/arxiv.1711.09012
Preprint

Mobile Edge Computation Offloading Using Game Theory and Reinforcement Learning

Abstract: Due to the ever-increasing popularity of resource-hungry and delay-constrained mobile applications, the computation and storage capabilities of the remote cloud have partially migrated towards the mobile edge, giving rise to the concept known as Mobile Edge Computing (MEC). While MEC servers enjoy close proximity to the end-users to provide services at reduced latency and lower energy costs, they suffer from limitations in computational and radio resources, which calls for fair and efficient resource management in t…

Cited by 7 publications (7 citation statements) | References 14 publications
“…RL techniques have been used as promising solutions to tackle this challenge based on the trial-and-error rule, where the RL agent, i.e., the user, can adjust its policy to achieve the best long-term goal according to the future reward feedback from the environment without prior knowledge of system models. In [108], [119], the authors investigated the dynamic computation offloading process and developed RL algorithms to learn the optimal offloading mechanism with the goal of minimizing latency and choosing the energy-efficient edge server. Besides, the DRL algorithm has proven more effective at enabling RL to handle large state spaces by leveraging powerful DNNs to approximate state-action values, and it is envisioned to solve complex sequential decision-making problems.…”
Section: A. DRL for Computation Offloading
confidence: 99%
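
To make the trial-and-error mechanism described in this excerpt concrete, here is a minimal tabular Q-learning sketch for choosing an energy-efficient edge server. The state discretization, the reward (negative latency-plus-energy cost), and the simulated environment are illustrative assumptions, not the formulation used in [108], [119].

```python
import random

# Hedged sketch: tabular Q-learning for edge-server selection.
# States, actions, and the reward shape below are assumptions
# made purely for a runnable example.

N_SERVERS = 3          # actions: which edge server to offload to
N_LOAD_LEVELS = 4      # states: discretized observed server load
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = [[0.0] * N_SERVERS for _ in range(N_LOAD_LEVELS)]

def step(state, action):
    """Stand-in for the environment: returns (reward, next_state).
    Reward mixes negative latency and energy cost (assumed form)."""
    latency = random.uniform(1, 5) * (1 + 0.5 * state)
    energy = random.uniform(0.5, 2)
    return -(latency + energy), random.randrange(N_LOAD_LEVELS)

state = 0
for _ in range(10_000):
    # epsilon-greedy action selection: the trial-and-error rule
    if random.random() < EPS:
        action = random.randrange(N_SERVERS)
    else:
        action = max(range(N_SERVERS), key=lambda a: Q[state][a])
    reward, nxt = step(state, action)
    # Q-learning update toward reward + discounted best future value
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
    state = nxt
```

Note that the agent needs no prior model of latency or load dynamics; the Q-table is shaped entirely by the reward feedback, which is the property the excerpt highlights.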
“…Each player can select optimal strategies via an adaptive learning method known as RL. RL uses historical information for the selection of strategies, such as the status of the network and the strategies and utilities of other players [67-69]. Thus, RL-based game theory is an effective decision-making technique.…”
Section: Machine Learning and Deep Learning in NIDS
confidence: 99%
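
As a hedged illustration of RL-based game theory, the sketch below pairs a two-player offloading game (compute locally vs. offload to a shared server) with stateless Q-learning, so each player learns its strategy from observed utilities alone. The strategy set and payoff values are assumptions for demonstration, not the model used in the cited works.

```python
import random

# Hedged sketch: each player keeps its own Q-value per strategy and
# updates it from the utility it receives; payoffs are assumed.

STRATEGIES = ["local", "offload"]
ALPHA, EPS = 0.05, 0.1
Q = [{s: 0.0 for s in STRATEGIES} for _ in range(2)]  # one table per player

def utility(mine, other):
    """Assumed payoffs: offloading pays off unless both players
    congest the shared edge server at once."""
    if mine == "local":
        return 1.0
    return 0.5 if other == "offload" else 2.0

for _ in range(5_000):
    # each player picks a strategy epsilon-greedily from its own table
    picks = []
    for p in range(2):
        if random.random() < EPS:
            picks.append(random.choice(STRATEGIES))
        else:
            picks.append(max(STRATEGIES, key=Q[p].get))
    # each player updates only from its own realized utility
    for p in range(2):
        u = utility(picks[p], picks[1 - p])
        Q[p][picks[p]] += ALPHA * (u - Q[p][picks[p]])

print([max(STRATEGIES, key=Q[p].get) for p in range(2)])
```

Under these assumed payoffs, the learners tend toward the asymmetric outcome where one player offloads and the other stays local, i.e., a pure-strategy equilibrium reached without either player knowing the other's payoff function.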
“…Through continuous trials, the agent is rewarded or penalized for a series of actions and then modifies its strategy accordingly. After such continuous adjustments, the UE can learn which actions should be chosen in order to achieve the best return in certain situations, as in [165]-[167]. In addition, there are Q-learning [9], deep reinforcement learning (DRL) [168]-[170], and other techniques.…”
Section: B. Several Research Directions Related to Computation Offloading
confidence: 99%
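
To make the DRL variant mentioned above concrete, the sketch below (PyTorch) shows a DNN approximating the state-action values Q(s, a) and a single temporal-difference update, the core idea behind DQN-style methods. The state features (e.g., channel gain, queue length, server load), network width, and the sample transition are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Hedged sketch: a DNN replaces the Q-table so large or continuous
# state spaces can be handled; all dimensions below are assumed.

STATE_DIM, N_ACTIONS = 4, 3  # e.g., 3 offloading choices

q_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS),        # one Q-value per action
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

# One TD update on a synthetic transition (s, a, r, s').
s = torch.randn(1, STATE_DIM)
a = torch.tensor([1])
r = torch.tensor([-2.3])             # assumed: negative latency+energy cost
s_next = torch.randn(1, STATE_DIM)

with torch.no_grad():
    # bootstrap target: r + gamma * max_a' Q(s', a')
    target = r + gamma * q_net(s_next).max(dim=1).values
pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
loss = nn.functional.mse_loss(pred, target)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice, DQN-style training adds an experience replay buffer and a periodically synchronized target network for stability; the single update above only shows how the DNN stands in for the tabular Q-values.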