2021
DOI: 10.3390/app11188299

Autonomous Exploration of Mobile Robots via Deep Reinforcement Learning Based on Spatiotemporal Information on Graph

Abstract: In this paper, we address the problem of autonomous exploration in unknown environments for ground mobile robots with deep reinforcement learning (DRL). To explore unknown environments effectively, we construct an exploration graph that incorporates historical trajectories, frontier waypoints, landmarks, and obstacles. Meanwhile, to take full advantage of spatiotemporal features and historical information in the autonomous exploration task, we propose a novel network called Spatiotemporal Neural Network on Graph …
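The exploration graph the abstract describes (vertices for historical trajectory poses, frontier waypoints, landmarks, and obstacles) could be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the node types, the distance-based connection rule, and all names (`ExplorationGraph`, `connect_within`) are assumptions for the sketch.

```python
from dataclasses import dataclass, field
from itertools import count
from math import hypot

# Hypothetical vertex categories, mirroring the abstract's node types.
NODE_TYPES = ("trajectory", "frontier", "landmark", "obstacle")

@dataclass
class ExplorationGraph:
    """Minimal sketch of an exploration graph: typed vertices at 2-D
    positions, with edges added by a simple distance rule (an assumed
    connection rule, not necessarily the paper's)."""
    nodes: dict = field(default_factory=dict)  # id -> (type, (x, y))
    adj: dict = field(default_factory=dict)    # id -> set of neighbor ids
    _ids: count = field(default_factory=count)

    def add_node(self, ntype, pos):
        assert ntype in NODE_TYPES
        nid = next(self._ids)
        self.nodes[nid] = (ntype, pos)
        self.adj[nid] = set()
        return nid

    def connect_within(self, radius):
        """Connect every pair of non-obstacle vertices closer than `radius`."""
        items = list(self.nodes.items())
        for i, (u, (tu, pu)) in enumerate(items):
            for v, (tv, pv) in items[i + 1:]:
                if "obstacle" in (tu, tv):
                    continue
                if hypot(pu[0] - pv[0], pu[1] - pv[1]) <= radius:
                    self.adj[u].add(v)
                    self.adj[v].add(u)

# Example: a trajectory pose near a frontier, a distant landmark.
g = ExplorationGraph()
a = g.add_node("trajectory", (0.0, 0.0))
b = g.add_node("frontier", (1.0, 0.0))
c = g.add_node("landmark", (5.0, 5.0))
g.connect_within(radius=2.0)
```

A graph like this can then be fed to a graph neural network as a (node features, adjacency) pair, which is the general pattern the paper's spatiotemporal network builds on.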

Cited by 2 publications (2 citation statements)
References 28 publications
“…Although the robot is modeled as a simple particle in this environment, the environment size is fixed, and the environment is highly structured, this is still a bold attempt in the exploration field and confirmed the feasibility of using reinforcement learning algorithms to solve this problem. Zhang et al (2021) and Chen et al (2020) propose a reinforcement learning method using a graph convolutional neural network to solve the decision-making problem in exploration, training the policy on many random maps; the robot's historical positions, current position, observed landmarks, and candidate frontiers form the vertices of a graph, and these vertices are connected by a set of rules. However, this method still relies to some extent on processing of occupancy grid maps, such as frontier detection, and the efficiency of this graph-based method degrades when the environmental characteristics change.…”
Section: Reinforcement Learning
confidence: 99%
“…In [10], a Topology–Grid Hybrid Map (TGHM) scheme is formulated for autonomous exploration. Modern approaches, such as Reinforcement Learning (RL), have also been employed to tackle the exploration problem [11].…”
Section: Related Work
confidence: 99%