2021
DOI: 10.1109/tnsm.2021.3087258
DMRO: A Deep Meta Reinforcement Learning-Based Task Offloading Framework for Edge-Cloud Computing

Cited by 148 publications (36 citation statements)
References 38 publications
“…Recently, an efficient algorithm based on deep meta reinforcement learning was proposed in [25] for IoT-edge-cloud computing systems with the goal of reducing the computing burden and improving task processing. Specifically, multiple DNNs are combined with Q-learning to derive efficient offloading decisions and increase the learning ability of the system.…”
Section: Single Edge Server
confidence: 99%
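The excerpt above describes combining several DNNs with Q-learning to pick an offloading decision. A minimal sketch of that idea, under assumed details (the tiny two-layer networks, the cost model in `reward`, and the task/network counts are all hypothetical, not the paper's): each DNN proposes a candidate binary offloading vector, and the candidate with the highest estimated reward (a stand-in for the Q-value) is selected.

```python
import numpy as np

rng = np.random.default_rng(0)

N_TASKS = 5   # tasks per decision epoch (assumed)
N_DNNS = 3    # number of parallel decision networks (assumed)

def make_dnn(n_in, n_hidden, n_out):
    """One small two-layer network with random weights."""
    return {
        "W1": rng.normal(0, 0.1, (n_in, n_hidden)),
        "W2": rng.normal(0, 0.1, (n_hidden, n_out)),
    }

def forward(dnn, x):
    """Map a task-workload state to per-task offloading probabilities."""
    h = np.tanh(x @ dnn["W1"])
    return 1.0 / (1.0 + np.exp(-(h @ dnn["W2"])))

def reward(state, decision):
    """Stand-in Q-value: negative total cost of a decision.
    Hypothetical cost model: local execution costs the full workload,
    offloading halves it but adds a fixed transmission cost per task."""
    local_cost = np.sum(state * (1 - decision))
    offload_cost = np.sum(0.5 * state * decision + 0.1 * decision)
    return -(local_cost + offload_cost)

dnns = [make_dnn(N_TASKS, 16, N_TASKS) for _ in range(N_DNNS)]
state = rng.uniform(0.2, 1.0, N_TASKS)  # task workloads

# Each DNN proposes a candidate binary offloading decision;
# the candidate with the highest estimated reward is selected.
candidates = [(forward(d, state) > 0.5).astype(float) for d in dnns]
best = max(candidates, key=lambda c: reward(state, c))
print("chosen offloading decision:", best)
```

In the full framework the selected (state, decision) pair would also be stored and replayed to train the DNNs, which is what lets the ensemble improve over time; the sketch only shows the candidate-generation-and-selection step.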
“…There are also substantial studies that use RL as an auxiliary component in cooperation with a mobile-cloud system to deal with optimization issues, including online resource allocation [337], task scheduling [338], workload scheduling [339], computation offloading [340]-[346], and service migration [347]. Among these works, applications are implemented on the Internet of Things (IoT) [337], [338], [340], [343], 5G networks [342], telemonitoring [346], or vehicular terminals [345]. The basic motivation is that these optimization problems are generally NP-hard and easier to solve by DRL built on an MDP [343], [344].…”
Section: RL-Assisted Optimization
confidence: 99%
“…The detailed RL methods applied in these attempts include Q-Learning [339], [346], DQN [337], [340], REINFORCE [338], DDPG [341], PPO [345], and meta-RL [343].…”
Section: RL-Assisted Optimization
confidence: 99%
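The excerpt above notes that these offloading problems are cast as MDPs and attacked with methods such as Q-learning. A minimal sketch of tabular Q-learning on a toy offloading MDP (the state space, transition rule, and cost numbers are invented for illustration, not taken from any cited work): the state is a small task-queue length, and the agent chooses between executing locally or offloading to the edge.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy offloading MDP (hypothetical): state = queue length 0..4,
# action 0 = execute locally, action 1 = offload to edge.
N_STATES, N_ACTIONS = 5, 2
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(s, a):
    """Offloading drains the queue faster but pays a transmission cost."""
    if a == 1:
        s_next = max(s - 2, 0)
        r = -0.3 - 0.1 * s          # transmission + edge processing cost
    else:
        s_next = max(s - 1, 0)
        r = -0.5 * s                # local processing delay cost
    if s_next == 0:
        s_next = int(rng.integers(1, N_STATES))  # new tasks arrive
    return s_next, r

s = 4
for _ in range(20000):
    # epsilon-greedy action selection
    a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
    s_next, r = step(s, a)
    # standard Q-learning temporal-difference update
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    s = s_next

print("per-state policy (0=local, 1=offload):", np.argmax(Q, axis=1))
```

The deep variants cited above (DQN, DDPG, PPO) replace the Q-table with a neural network so the same update idea scales to the continuous, high-dimensional states that real edge-cloud systems produce.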
“…However, due to the limited computing resources and battery capacity of IoT devices, these computation-intensive tasks cannot be fully handled [3][4][5]. To cope with this situation, mobile cloud computing (MCC) came into being [6][7][8]. However, offloading tasks to cloud computing centers for processing will introduce significant delays [9][10][11].…”
Section: Introduction
confidence: 99%