2021
DOI: 10.1109/tpds.2020.3046737
Distributed Task Migration Optimization in MEC by Extending Multi-Agent Deep Reinforcement Learning Approach

Cited by 88 publications (32 citation statements)
References 28 publications
“…Offloading can be used to balance the load among cores or processors in a multiprocessor system. This is often regarded as task migration, which moves task execution from one core/processor to another according to a given performance metric: power consumption, thermal energy, or dark silicon [25]. Communication-driven task migration attempts to migrate tasks to adjacent cores.…”
Section: Related Work
confidence: 99%
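The communication-driven migration described above can be sketched as a greedy move to the least-loaded adjacent core. All names and the load-based cost model below are illustrative assumptions; the cited work [25] considers other metrics such as power, thermal energy, and dark silicon.

```python
# Illustrative sketch: communication-driven task migration on a 2D mesh
# of cores, moving a task toward the least-loaded adjacent core.

def adjacent_cores(core, width, height):
    """Return the mesh neighbours of core = (x, y)."""
    x, y = core
    candidates = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [(cx, cy) for cx, cy in candidates
            if 0 <= cx < width and 0 <= cy < height]

def migrate_task(task_core, load, width, height):
    """Greedily move a task to the least-loaded adjacent core,
    or keep it in place if no neighbour improves the metric."""
    best = task_core
    for neighbour in adjacent_cores(task_core, width, height):
        if load[neighbour] < load[best]:
            best = neighbour
    return best

# Example: a 3x3 mesh where the centre core is overloaded.
load = {(x, y): 1.0 for x in range(3) for y in range(3)}
load[(1, 1)] = 5.0   # hot core
load[(1, 0)] = 0.2   # lightly loaded neighbour
print(migrate_task((1, 1), load, 3, 3))  # -> (1, 0)
```

Restricting candidates to adjacent cores keeps migration cost low, since task state only crosses one mesh link.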
“…More details are given as follows: S1. After mapping the objective function and constraints of the MEC-based network into the states, actions, and reward of an MDP, RL algorithms are applied directly to the original optimization problem, as in [69]-[71]. Among them, many advanced RL algorithms, such as DDPG and A3C, are applied to balance convergence speed and learning accuracy.…”
Section: B. Reinforcement Learning-Empowered MEC
confidence: 99%
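Step S1 above can be sketched concretely. The state, action, and reward definitions below are illustrative assumptions, not the exact formulation used in [69]-[71]: the state is the user position, per-server load, and current serving server; an action picks a server; the reward penalizes latency plus a migration cost.

```python
# Hedged sketch: casting an MEC service-migration decision as an MDP.

def make_state(user_pos, server_loads, current_server):
    """State: user location, per-server load, and the serving server."""
    return (user_pos, tuple(server_loads), current_server)

def reward(state, action, migration_cost=0.5):
    """Reward: negative service latency, minus a penalty when the
    action migrates the task to a different edge server."""
    user_pos, server_loads, current_server = state
    latency = abs(user_pos - action) + server_loads[action]
    penalty = migration_cost if action != current_server else 0.0
    return -(latency + penalty)

# Example: user at position 2, three edge servers at positions 0..2.
state = make_state(2, [0.1, 0.9, 0.3], current_server=0)
scores = {a: reward(state, a) for a in range(3)}
best_action = max(scores, key=scores.get)
print(best_action)  # -> 2: the nearby, lightly loaded server wins
```

Once the MDP is defined this way, an off-the-shelf RL algorithm (DDPG, A3C, etc.) can be trained against it without reformulating the original optimization problem.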
“…In order to verify the average completion time, we also implemented Extensive Service Migration (ESM) [19], Always Migration (AM) [20], Counterfactual Multi-Agent (COMA) [21], and Never Migration (NM) as baselines. AWDDPG adopts the experience replay mechanism to reduce correlation between samples and designs an adaptive weighted sampling method to increase sampling efficiency, which greatly improves convergence speed and stability.…”
Section: Performance Evaluation
confidence: 99%
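The weighted replay mechanism mentioned above can be sketched as follows. The excerpt does not give AWDDPG's exact weighting rule, so this version weights transitions by their absolute TD error, a common prioritized-replay choice; the class and parameter names are hypothetical.

```python
# Hedged sketch of weighted experience replay: transitions with larger
# TD error are sampled more often, which tends to speed convergence.
import random

class WeightedReplayBuffer:
    def __init__(self, capacity=10000, eps=1e-3):
        self.capacity = capacity
        self.eps = eps          # keeps every weight strictly positive
        self.data = []          # stored transitions
        self.weights = []       # per-transition sampling weights

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:  # drop the oldest entry
            self.data.pop(0)
            self.weights.pop(0)
        self.data.append(transition)
        self.weights.append(abs(td_error) + self.eps)

    def sample(self, batch_size):
        """Draw transitions with probability proportional to weight."""
        return random.choices(self.data, weights=self.weights,
                              k=batch_size)

buf = WeightedReplayBuffer()
buf.add(("s0", "a0", 1.0, "s1"), td_error=0.01)  # low priority
buf.add(("s1", "a1", 5.0, "s2"), td_error=10.0)  # high priority
batch = buf.sample(4)  # mostly the high-error transition
```

Sampling with replacement by weight (here via `random.choices`) breaks the temporal correlation between consecutive transitions, which is the property the excerpt credits for AWDDPG's stability.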