2020 IEEE/ACM 24th International Symposium on Distributed Simulation and Real Time Applications (DS-RT)
DOI: 10.1109/ds-rt50469.2020.9213536
A Novel Deep Reinforcement Learning based service migration model for Mobile Edge Computing

Cited by 22 publications (12 citation statements). References 25 publications.
“…Rui et al. [15] propose a novel service migration method based on state adaptation and deep reinforcement learning to overcome network failures, and they use satisfiability modulo theories to solve for the candidate space of migration policies. Liu et al. [16,17] design a reinforcement learning-based framework using a deep Q-network for a single-user service migration system, which is used to choose the optimal migration strategy in edge computing. Yuan et al. [18] study the service migration and mobility optimization problem by proposing a two-branch convolution-based deep Q-network to maximize the composite utility.…”
Section: Related Work (mentioning)
confidence: 99%
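The excerpt above only names the deep Q-network framework of Liu et al. [16,17]. As a rough illustration of how such a single-user migration agent is commonly structured, the sketch below scores every candidate edge server with a Q-network and picks the migration target epsilon-greedily. The network shape, state encoding, and function names are assumptions for illustration, not the cited implementation.

```python
# Minimal sketch (assumed, not the cited code): a Q-network that scores each
# candidate edge server for hosting the user's service, plus an epsilon-greedy
# rule that selects the migration target.
import random
import torch
import torch.nn as nn

class MigrationQNet(nn.Module):
    def __init__(self, state_dim: int, num_edge_servers: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_edge_servers),   # one Q-value per candidate server
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def choose_migration_target(qnet: MigrationQNet, state: torch.Tensor,
                            epsilon: float = 0.1) -> int:
    """Epsilon-greedy choice of the edge server that should host the service."""
    num_actions = qnet.net[-1].out_features
    if random.random() < epsilon:
        return random.randrange(num_actions)     # explore a random placement
    with torch.no_grad():
        return int(qnet(state).argmax().item())  # exploit the highest Q-value
```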
“…In addition, some researchers have proposed novel service migration algorithms and architectures to support mobility tasks based on reinforcement learning, which can efficiently reduce the extra delay and energy cost of the migration process [35]. The work in [36] modeled the service migration problem as a complex optimization and implemented deep reinforcement learning to approximate the optimal policy.…”
Section: Service Migration in Edge Computing (EC) (mentioning)
confidence: 99%
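Reference [36] is only described at this level of detail here, but casting service migration as an optimization for deep reinforcement learning usually comes down to a per-step reward that penalizes both user-perceived latency and migration overhead. The toy reward below is a hedged illustration of that idea; its weights and terms are assumptions, not the formulation used in [36].

```python
# Illustrative (assumed) per-step reward for a migration MDP: the agent
# maximizes this value, i.e. it minimizes latency plus any migration cost
# incurred when the service is moved during this step.
def migration_reward(latency_ms: float, migrated: bool,
                     migration_cost: float = 5.0,
                     latency_weight: float = 1.0) -> float:
    cost = latency_weight * latency_ms
    if migrated:
        cost += migration_cost
    return -cost
```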
“…To show the superiority of the proposed DCOS algorithm in terms of delay and energy metrics, we compare it with two baseline schemes:
- Deep Deterministic Policy Gradient (DDPG) [43]: the scheduling of tasks is based on the task offloading strategy only, and the DQN network outputs the optimal target node for offloading.
- Extensive Service Migration Model (ESM) [36]: the task is processed on the local node. If the node has not configured the related service, the system model performs the service migration according to the optimal policy related to the migration costs.…”
Section: Experimental Settings (mentioning)
confidence: 99%
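The ESM baseline is only paraphrased in the excerpt above; the snippet below restates that decision rule in code form: serve the task where the service is already configured, otherwise invoke a cost-aware policy to pick the migration target. The function signature and the `choose_target` callback are hypothetical, introduced purely for illustration.

```python
# Hypothetical restatement of the quoted ESM rule: run the task locally when
# the service is already deployed there; otherwise migrate it according to a
# cost-driven policy and run the task on the chosen target.
from typing import Callable

def esm_schedule(local_node: str, hosting_node: str,
                 choose_target: Callable[[str, str], str]) -> str:
    """Return the node that processes the task under the ESM-style rule."""
    if hosting_node == local_node:
        return local_node                             # service configured locally
    target = choose_target(hosting_node, local_node)  # migration-cost-aware policy
    # ...migration of service state from hosting_node to target omitted...
    return target
```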
“…In order to verify the average completion time, we also implemented Extensive Service Migration (ESM) [19], Always Migration (AM) [20], Counterfactual Multi-Agent (COMA) [21], and Never Migration (NM) as baselines. AWDDPG adopts the experience replay mechanism to reduce the correlation between samples and designs an adaptive weighted sampling method to increase sampling efficiency, which greatly improves the convergence speed and stability.…”
Section: Performance Evaluation (mentioning)
confidence: 99%
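The excerpt describes AWDDPG's replay mechanism only qualitatively. As a hedged sketch of what weighted experience replay generally looks like, the buffer below stores transitions once and samples them with probability proportional to a per-sample weight; the specific weighting rule and the default values are assumptions, not AWDDPG's adaptive scheme.

```python
# Sketch (assumed) of a weighted experience replay buffer: sampling from a large
# buffer breaks the temporal correlation between consecutive transitions, and
# per-sample weights let more informative transitions be drawn more often.
import random
from collections import deque

class WeightedReplayBuffer:
    def __init__(self, capacity: int = 10000):
        self.transitions = deque(maxlen=capacity)
        self.weights = deque(maxlen=capacity)

    def push(self, transition, weight: float = 1.0) -> None:
        self.transitions.append(transition)
        self.weights.append(max(weight, 1e-6))   # keep every sample reachable

    def sample(self, batch_size: int) -> list:
        # Draw with replacement, proportional to the stored weights.
        return random.choices(self.transitions, weights=self.weights, k=batch_size)
```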