2019
DOI: 10.1007/978-3-030-35699-6_11
Collision Avoidance for Indoor Service Robots Through Multimodal Deep Reinforcement Learning

Cited by 11 publications (8 citation statements)
References 15 publications
“…Kulhánek et al. 26 took this idea further by removing the scene-specific layers and using ideas of training with auxiliary tasks to learn better feature extraction from the image. Leiva et al. 27 train a collision-avoidance policy whose input combines the data from the RGB image, depth image, and lasers on the robot.…”
Section: Related Work
confidence: 99%
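To make the multimodal input described above concrete, here is a minimal sketch of a policy network that fuses RGB, depth, and laser data into continuous velocity commands. It is illustrative only: the layer sizes, input resolutions, and two-dimensional action (linear and angular velocity) are assumptions, not details taken from Leiva et al.

# Minimal sketch (not the authors' code): a policy that fuses RGB, depth,
# and 2D laser readings into bounded continuous velocity commands.
import torch
import torch.nn as nn

class MultimodalPolicy(nn.Module):
    def __init__(self, laser_dim=180, action_dim=2):
        super().__init__()
        # Small convolutional encoders for the RGB and depth images.
        self.rgb_enc = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.depth_enc = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        # Fully connected encoder for the laser scan.
        self.laser_enc = nn.Sequential(nn.Linear(laser_dim, 128), nn.ReLU())
        # Fused features -> (linear, angular) velocity in [-1, 1].
        self.head = nn.Sequential(
            nn.Linear(32 * 16 + 32 * 16 + 128, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),
        )

    def forward(self, rgb, depth, laser):
        feats = torch.cat(
            [self.rgb_enc(rgb), self.depth_enc(depth), self.laser_enc(laser)], dim=1
        )
        return self.head(feats)

# Example: a batch of one 64x64 RGB image, 64x64 depth map, and 180-beam scan.
policy = MultimodalPolicy()
action = policy(torch.zeros(1, 3, 64, 64), torch.zeros(1, 1, 64, 64), torch.zeros(1, 180))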
“…Recently, great focus has been put on learning-based navigation systems [15], [14], which aim to improve overall performance and generalize to unseen environments. Although in the machine learning community there have been several works on visual navigation in either game-like domains [16], [10] or robotic environments [17], [18], the robotics community has been reluctant to adopt these systems, believing classic methods to perform better for generic tasks [19]. In this context, [5] shows that with sufficient training, RL-based methods can outperform classic methods in fair settings, and [20] shows that RL agents can solve visual navigation tasks with an almost perfect SPL score [15] in simulation.…”
Section: Related Work (A. Visual Navigation)
confidence: 99%
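The SPL score mentioned above (Success weighted by Path Length, defined in [15]) averages, over episodes, a success indicator weighted by the ratio of the shortest-path length to the length of the path actually taken. A small sketch of that standard definition, with variable names chosen for illustration:

# Success weighted by Path Length (SPL), per the definition cited as [15].
# successes[i]: 1 if episode i reached the goal, else 0
# shortest[i]:  shortest-path distance from start to goal
# taken[i]:     length of the path the agent actually traveled
def spl(successes, shortest, taken):
    n = len(successes)
    return sum(
        s * (l / max(p, l)) for s, l, p in zip(successes, shortest, taken)
    ) / n

# Example: one optimal success, one success 25% longer than optimal, one failure.
print(spl([1, 1, 0], [5.0, 4.0, 6.0], [5.0, 5.0, 9.0]))  # 0.6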
“…However, these approaches usually use lasers as feature vectors [22] and are only valid for tasks in which 2D observations are enough. Following the idea of using depth sensors for a seamless transfer, the use of depth cameras is logical, and they have been used in several works [5], [18]. However, the nature of these sensors differs greatly from the real ones (e.g., reflections, invalid values, etc.)…”
Section: B. Sim-to-Real
confidence: 99%
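A common way to narrow this gap between simulated and real depth sensors, though not necessarily the one used in the cited works, is to corrupt simulated depth images during training so that they resemble real camera output. A minimal sketch, with all noise parameters assumed for illustration:

# Illustrative depth-image corruption for sim-to-real training
# (invalid-pixel rate and noise level are assumptions, not from the paper).
import numpy as np

def corrupt_depth(depth, invalid_prob=0.05, noise_std=0.02, rng=None):
    """Add real-sensor-like artifacts to a clean simulated depth map.

    depth: 2D array of metric depth values from the simulator.
    invalid_prob: fraction of pixels dropped to 0 (missing returns).
    noise_std: std of multiplicative Gaussian noise on the remaining pixels.
    """
    rng = rng or np.random.default_rng()
    noisy = depth * (1.0 + rng.normal(0.0, noise_std, size=depth.shape))
    invalid = rng.random(depth.shape) < invalid_prob
    noisy[invalid] = 0.0  # mimic invalid returns from reflective surfaces
    return noisy

# Example: corrupt a flat wall 2 m away as seen by a 64x64 depth camera.
print(corrupt_depth(np.full((64, 64), 2.0)).min())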
“…Q-values in the algorithm are represented in tabular form, which requires a large amount of memory and costly computation. A deep reinforcement learning method is implemented in [8] for collision avoidance on an indoor service robot. The controller is parameterized by a neural network, while DDPG is used to train the agent.…”
Section: Introduction
confidence: 99%
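For context on the DDPG training mentioned here: DDPG learns a deterministic actor, parameterized by a neural network, together with a critic and slowly updated target copies of both. The sketch below shows only the generic core update step; the network classes, replay-buffer batch format, and hyperparameters are assumptions rather than details from [8].

# Core DDPG update step (generic sketch, not the implementation from [8]).
# `actor`, `critic` and their target copies are assumed to be nn.Module
# instances mapping states (and actions, for the critic) to tensors.
import torch
import torch.nn.functional as F

def ddpg_update(batch, actor, critic, actor_t, critic_t,
                actor_opt, critic_opt, gamma=0.99, tau=0.005):
    s, a, r, s2, done = batch  # tensors sampled from a replay buffer

    # Critic: regress Q(s, a) toward the bootstrapped target value.
    with torch.no_grad():
        q_target = r + gamma * (1 - done) * critic_t(s2, actor_t(s2))
    critic_loss = F.mse_loss(critic(s, a), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: ascend the critic's estimate of Q(s, actor(s)).
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Polyak-average the target networks toward the online networks.
    for net, tgt in ((actor, actor_t), (critic, critic_t)):
        for p, p_t in zip(net.parameters(), tgt.parameters()):
            p_t.data.mul_(1 - tau).add_(tau * p.data)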