2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
DOI: 10.1109/smc42975.2020.9283380

An Adaptive Deep Q-learning Service Migration Decision Framework for Connected Vehicles

Cited by 7 publications (1 citation statement)
References 11 publications
“…2) Task Migration in Deep Learning Mode: In [33], the authors propose a multi-agent deep reinforcement learning algorithm that maximizes the comprehensive utility of communication, computing, and route planning in a distributed manner, thereby reducing service delay, migration cost, and travel time. The authors in [34] propose a deep Q-learning service migration decision algorithm and a neural network-based service migration framework that realizes adaptive migration of offloaded tasks as connected vehicles move. To minimize the data processing cost within the system and satisfy the delay constraints of applications, the authors in [35] formulate a unified communication, computing, caching, and collaborative computing framework and develop a cooperative data scheduling scheme that models data scheduling as a deep reinforcement learning problem, solved by an enhanced deep Q-Network algorithm with a single target Q-network.…”
Section: Task Migration
confidence: 99%
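Below is a minimal sketch (in PyTorch) of the kind of deep Q-learning migration decision loop summarized above for [34]: an agent observes the current hosting server and per-server latency estimates, chooses whether to keep or migrate the service with an epsilon-greedy policy, and trains a policy network against a single, periodically synchronized target Q-network. The state encoding, reward, toy environment, and all hyperparameters are illustrative assumptions, not the cited papers' actual formulations.

```python
# Illustrative deep Q-learning migration decision sketch.
# Assumptions: state = [current hosting server index, per-server latency estimates];
# reward = -(latency of chosen server) - fixed migration cost if the service moves.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

N_SERVERS = 4                  # hypothetical number of candidate edge servers
STATE_DIM = 1 + N_SERVERS      # hosting server index + per-server latency estimates


class QNet(nn.Module):
    """Maps a state vector to one Q-value per candidate migration target."""

    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


def step_env(state: torch.Tensor, action: int):
    """Toy transition: negative latency of the chosen server minus a fixed
    migration cost whenever the service actually moves (illustrative only)."""
    current = int(state[0])
    latencies = state[1:]
    migration_cost = 0.2 if action != current else 0.0
    reward = -float(latencies[action]) - migration_cost
    # Vehicle mobility is abstracted as freshly sampled latency estimates.
    next_state = torch.cat([torch.tensor([float(action)]), torch.rand(N_SERVERS)])
    return next_state, reward


policy, target = QNet(STATE_DIM, N_SERVERS), QNet(STATE_DIM, N_SERVERS)
target.load_state_dict(policy.state_dict())            # single target Q-network
optimizer = optim.Adam(policy.parameters(), lr=1e-3)
buffer, gamma, eps = deque(maxlen=10_000), 0.95, 0.1

state = torch.cat([torch.zeros(1), torch.rand(N_SERVERS)])
for t in range(2_000):
    # Epsilon-greedy migration decision: keep the service or move it.
    if random.random() < eps:
        action = random.randrange(N_SERVERS)
    else:
        with torch.no_grad():
            action = int(policy(state).argmax())

    next_state, reward = step_env(state, action)
    buffer.append((state, action, reward, next_state))
    state = next_state

    # One-step temporal-difference update on a sampled minibatch.
    if len(buffer) >= 64:
        batch = random.sample(buffer, 64)
        s = torch.stack([b[0] for b in batch])
        a = torch.tensor([b[1] for b in batch])
        r = torch.tensor([b[2] for b in batch], dtype=torch.float32)
        s2 = torch.stack([b[3] for b in batch])
        q = policy(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            q_target = r + gamma * target(s2).max(1).values
        loss = nn.functional.mse_loss(q, q_target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    if t % 200 == 0:
        target.load_state_dict(policy.state_dict())     # periodic target sync
```

The single target network and periodic synchronization mirror the standard DQN stabilization trick mentioned for [35]; everything else (network sizes, costs, update period) is a placeholder for illustration.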