2019
DOI: 10.1109/jiot.2019.2913162
iRAF: A Deep Reinforcement Learning Approach for Collaborative Mobile Edge Computing IoT Networks

Cited by 190 publications (61 citation statements) · References 43 publications
“…Reference [36] used DDQN for virtual edge computing and obtained good offloading performance. Reference [37] applied the Monte Carlo tree search algorithm to resource allocation in MEC, and the scheme performed significantly better than DQN. Reference [38] combined the computation offloading of edge computing with blockchain, jointly considered the time delay, energy consumption, and blockchain cost, and achieved good results with a DRL-based algorithm.…”
Section: Related Work
confidence: 99%
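The Monte Carlo tree search approach credited to reference [37] can be illustrated with a minimal sketch. This is a toy model, not the cited scheme: each of four tasks is either run locally or offloaded, edge latency grows with the number of offloaded tasks, and plain UCT-based MCTS searches the binary decision tree. All constants and helper names here are illustrative assumptions.

```python
import math
import random

random.seed(0)

# Toy offloading model (assumed, not from the paper): offloaded tasks share
# the edge server, so edge latency grows with edge load.
N_TASKS = 4
LOCAL_LAT = 5.0
EDGE_BASE = 1.0
EDGE_PER_LOAD = 2.0

def total_latency(decisions):
    edge_load = sum(decisions)              # number of offloaded tasks
    return sum((EDGE_BASE + EDGE_PER_LOAD * edge_load) if d else LOCAL_LAT
               for d in decisions)

class Node:
    def __init__(self, decisions):
        self.decisions = decisions          # partial decision sequence
        self.children = {}                  # action (0=local, 1=edge) -> Node
        self.visits = 0
        self.value = 0.0                    # sum of rollout rewards

def uct_select(node, c=1.4):
    # Pick the child maximizing average reward plus exploration bonus.
    return max(node.children.values(),
               key=lambda ch: ch.value / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(decisions):
    # Complete the plan with random decisions; reward = negative latency.
    while len(decisions) < N_TASKS:
        decisions = decisions + [random.randint(0, 1)]
    return -total_latency(decisions)

def mcts(iterations=2000):
    root = Node([])
    for _ in range(iterations):
        node, path = root, [root]
        # Selection: descend while the node is fully expanded.
        while len(node.decisions) < N_TASKS and len(node.children) == 2:
            node = uct_select(node)
            path.append(node)
        # Expansion: add one untried child, if any remain.
        if len(node.decisions) < N_TASKS:
            a = random.choice([a for a in (0, 1) if a not in node.children])
            node.children[a] = Node(node.decisions + [a])
            node = node.children[a]
            path.append(node)
        # Simulation and backpropagation.
        reward = rollout(node.decisions)
        for n in path:
            n.visits += 1
            n.value += reward
    # Extract the greedy plan along the most-visited children.
    plan, node = [], root
    while node.children:
        node = max(node.children.values(), key=lambda ch: ch.visits)
        plan.append(node.decisions[-1])
    return plan

plan = mcts()
print(plan, total_latency(plan))
```

In this toy setting, offloading exactly one task is optimal (total latency 18 versus 20 for all-local), and the search concentrates its visits on those leaves.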
“…Generally, to achieve better results, other heuristic algorithms and pre-trained neural networks are needed. In the work of Chen et al., the DQN-based method learns a policy from experience transitions with a well-trained Deep Neural Network (DNN) [18]. To optimize computational offloading with high performance under our settings, we apply Dueling DQN, an improved DQN method, for resource assignment and allocation in the MEC system.…”
Section: Related Work
confidence: 99%
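The Dueling DQN mentioned in this excerpt splits the Q-network head into a scalar state-value stream V(s) and a per-action advantage stream A(s, a), recombined as Q(s, a) = V(s) + A(s, a) - mean_a A(s, a). A minimal NumPy forward-pass sketch; the layer sizes, weights, and action count are illustrative assumptions, not from the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, HIDDEN, N_ACTIONS = 6, 16, 4      # e.g. 4 offloading choices

W1 = rng.normal(0, 0.1, (STATE_DIM, HIDDEN))   # shared feature layer
Wv = rng.normal(0, 0.1, (HIDDEN, 1))           # value stream head
Wa = rng.normal(0, 0.1, (HIDDEN, N_ACTIONS))   # advantage stream head

def dueling_q(state):
    h = np.maximum(0.0, state @ W1)    # shared ReLU features
    v = h @ Wv                         # V(s), shape (1,)
    a = h @ Wa                         # A(s, .), shape (N_ACTIONS,)
    # Subtracting the mean advantage makes the V/A split identifiable.
    return v + a - a.mean()

s = rng.normal(size=STATE_DIM)
q = dueling_q(s)
print(q, int(np.argmax(q)))            # greedy action for this state
```

A useful property of this head: the mean of Q(s, ·) over actions equals V(s), so the value stream can learn "how good is this state" independently of which action is chosen.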
“…To effectively reduce collisions between the system and clients, the access method is designed using rule-based algorithms and RL. RL is also utilized to select an appropriate channel [16][17][18][19]. RL is further applied to the problem of choosing a route to a destination while avoiding obstacles.…”
Section: Learning From Demonstration
confidence: 99%
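The channel-selection use of RL described above can be sketched as an epsilon-greedy bandit: the agent keeps a running value estimate per channel and gradually concentrates on the channel with the fewest collisions. The success probabilities, hyperparameters, and variable names below are illustrative assumptions, not taken from the cited works.

```python
import random

random.seed(1)
SUCCESS_PROB = [0.2, 0.5, 0.9]         # hidden per-channel success rates
EPSILON, ALPHA, STEPS = 0.1, 0.1, 5000

q = [0.0] * len(SUCCESS_PROB)          # learned value estimate per channel

for _ in range(STEPS):
    if random.random() < EPSILON:      # explore: try a random channel
        ch = random.randrange(len(q))
    else:                              # exploit: current best estimate
        ch = max(range(len(q)), key=q.__getitem__)
    # Reward 1 on a successful (collision-free) transmission, else 0.
    reward = 1.0 if random.random() < SUCCESS_PROB[ch] else 0.0
    q[ch] += ALPHA * (reward - q[ch])  # incremental value update

best = max(range(len(q)), key=q.__getitem__)
print(q, best)
```

With enough steps, the estimates approach the true success rates and the agent settles on the most reliable channel, while the epsilon fraction of random picks keeps tracking the others in case conditions change.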