2019
DOI: 10.1109/tvt.2019.2924015
Online Deep Reinforcement Learning for Computation Offloading in Blockchain-Empowered Mobile Edge Computing

Cited by 238 publications (91 citation statements)
References 39 publications
“…Among all these AI technologies, Q-learning and its derivative, DQN, are in the spotlight. For example, [42] designs a Q-learning-based algorithm for computation offloading. Concretely, it formulates the computation offloading problem as a non-cooperative game in multi-user, multi-server edge computing systems and proves that a Nash equilibrium exists.…”
Section: Problem Definition, Model Construction, Algorithm Design
confidence: 99%
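The statement above describes a Q-learning approach to the offload-or-not decision. As a rough illustration only, the following is a minimal tabular Q-learning sketch for a binary offloading choice (local vs. edge execution); the states, cost function, and parameters are illustrative assumptions, not the cited paper's actual model or game formulation.

```python
import random

# Hypothetical sketch: tabular Q-learning over a toy offloading problem.
# State = edge-server load level (0, 1, 2); action 0 = execute locally,
# action 1 = offload. All costs below are invented for illustration.

ACTIONS = [0, 1]

def task_cost(load, action):
    """Toy cost model: offloading is cheap only when the edge load is low."""
    return 1.0 if action == 0 else 0.3 + load

def q_learning(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(3) for a in ACTIONS}
    for _ in range(episodes):
        s = rng.randrange(3)
        for _ in range(20):
            # Epsilon-greedy over costs (we minimize, hence min).
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = min(ACTIONS, key=lambda x: q[(s, x)])
            cost = task_cost(s * 0.5, a)
            s_next = rng.randrange(3)  # load evolves randomly in this toy model
            best_next = min(q[(s_next, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (cost + gamma * best_next - q[(s, a)])
            s = s_next
    return q

q = q_learning()
```

After training, the learned Q-table prefers offloading in the low-load state, since its immediate cost (0.3) undercuts local execution (1.0) while future costs are identical under this toy dynamics.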
“…$P^{\pi(s)}_{s,s_{t+1}}$ denotes the transition probability, unknown in reality, and $\pi(s)$ is the action generated under a specific policy. Based on (17) and (18), the cost functions (11), (12) can be rewritten as:…”
Section: A. Bellman Equation
confidence: 99%
“…In order to achieve the ability to learn automatically, we design the updating steps as follows [35]: 1) evaluating result: obtain $V_{i,\pi}(s)$ and $V_{\mathrm{total},\pi}(q)$ according to (17) and (18) based on the policy $\pi$ for all states.…”
Section: A. Bellman Equation
confidence: 99%
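The evaluation step quoted above (computing a value function for a fixed policy from a Bellman expectation equation) can be sketched with iterative policy evaluation. The transition probabilities, costs, and two-state setup below are illustrative assumptions; the cited paper's equations (17) and (18) are not reproduced here.

```python
# Hypothetical sketch of the policy-evaluation step: given a fixed policy pi,
# repeatedly apply the Bellman expectation backup
#   V(s) <- c(s, pi(s)) + gamma * sum_s' P[s][pi(s)][s'] * V(s')
# until V converges to the fixed point V_pi.

def evaluate_policy(P, c, pi, gamma=0.9, tol=1e-10):
    n = len(P)
    v = [0.0] * n
    while True:
        v_new = [
            c[s][pi[s]] + gamma * sum(P[s][pi[s]][t] * v[t] for t in range(n))
            for s in range(n)
        ]
        if max(abs(a - b) for a, b in zip(v, v_new)) < tol:
            return v_new
        v = v_new

# Toy two-state MDP with two actions per state (all numbers invented).
P = [[[0.5, 0.5], [1.0, 0.0]],   # P[s][a][s']
     [[0.0, 1.0], [0.5, 0.5]]]
c = [[1.0, 0.5],                 # c[s][a]
     [2.0, 1.0]]
pi = [1, 1]                      # fixed policy: action 1 in both states
v = evaluate_policy(P, c, pi)
```

In a full algorithm this evaluation step would alternate with a policy-improvement step; the quote describes only the evaluation half.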