2021
DOI: 10.1109/access.2021.3073902

A Deep Reinforcement Learning-Based Dynamic Computational Offloading Method for Cloud Robotics

Abstract: Robots come with a variety of computing capabilities, and running computationally intense applications on robots is sometimes challenging on account of limited onboard computing, storage, and power capabilities. Meanwhile, cloud computing provides on-demand computing capabilities, and thus combining robots with cloud computing can overcome the resource constraints robots face. The key to effectively offloading tasks is an application solution that does not underutilize the robot's own computational capabilities…

Cited by 19 publications (3 citation statements)
References 57 publications
“…The authors of [22], [23] stress the imperative of avoiding under-utilizing a robot's onboard computational resources for efficient offloading. To address this, they formulate the application offloading problem as a Markovian decision process and propose a solution employing deep reinforcement learning through a deep Q-network (DQN) approach.…”
Section: A. Related Work
confidence: 99%
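As a rough illustration of the formulation described in the statement above (the offloading choice modeled as a Markov decision process and learned with a deep Q-network), the following Python sketch trains a tiny Q-network to choose between on-board execution and cloud offloading. It is not the authors' implementation: the state features (task size, CPU load, bandwidth), the toy latency model, the reward definition, and the omission of a replay buffer and target network are all simplifying assumptions made here for illustration.

# Minimal DQN-style offloading sketch (illustrative assumptions, not the paper's method).
import random
import torch
import torch.nn as nn

# State: [task_size_norm, robot_cpu_load, network_bandwidth_norm]
# Actions: 0 = execute on-board, 1 = offload to the cloud
STATE_DIM, N_ACTIONS = 3, 2

q_net = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.95, 0.1

def simulated_latency(state, action):
    """Toy latency model: on-board cost grows with CPU load, offloading
    cost grows as bandwidth shrinks. Purely illustrative."""
    task, load, bw = state
    return task * (1.0 + load) if action == 0 else task * (1.5 - bw) + 0.1

def step(state, action):
    # Reward is negative latency, so the agent learns to minimize delay.
    reward = -simulated_latency(state, action)
    next_state = [random.random() for _ in range(STATE_DIM)]
    return reward, next_state

state = [random.random() for _ in range(STATE_DIM)]
for _ in range(2000):
    s = torch.tensor(state, dtype=torch.float32)
    # Epsilon-greedy action selection over the Q-values.
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)
    else:
        action = int(q_net(s).argmax())
    reward, next_state = step(state, action)
    with torch.no_grad():
        target = reward + gamma * q_net(torch.tensor(next_state, dtype=torch.float32)).max()
    loss = (q_net(s)[action] - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    state = next_state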
“…In situations where immediate decisions are paramount, DRL can prove advantageous. For instance, in the context of cloud robotics, as demonstrated by Penmetcha et al [119], DRL-based dynamic computational offloading methods can yield rapid decisions, with a mean computation time of 71.28 milliseconds, while achieving a commendable final accuracy of 84%. This showcases the potential of DRL in scenarios where timely responses are essential.…”
Section: Navigating the Time-Accuracy Balance in DRL Applications
confidence: 99%
“…The latter are particularly relevant when robots or mobile devices must compete for limited offloading resources (whether the limit is due to computation resources or network connectivity limitations). Recently, deep reinforcement learning approaches have been applied to learn the offloading decision-making process [23], [24].…”
Section: Related Work
confidence: 99%