2018 IEEE International Conference on Consumer Electronics (ICCE)
DOI: 10.1109/icce.2018.8326229

End-to-end deep learning for autonomous navigation of mobile robot

Cited by 34 publications (22 citation statements) · References 15 publications
“…The combination of visual information and a DRL mechanism can implicitly accomplish tasks such as localization, mapping, and path planning in an end-to-end manner, with environment information embedded in the network parameters. Kim et al. [20] propose an end-to-end navigation method that extracts visual features directly from camera images, which greatly reduces power consumption and computation time. The experiment is performed in an office scene with a simplified DRL model, achieving satisfactory results.…”
Section: Related Work (A. DRL Navigation)
confidence: 99%
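The end-to-end idea quoted above, mapping raw camera frames directly to motion commands without explicit localization or mapping, can be sketched as a single learned function. Everything below (image size, the action set, a plain linear layer standing in for a trained network) is an illustrative assumption, not taken from Kim et al. [20].

```python
import numpy as np

# Hypothetical discrete action set for a mobile robot (an assumption).
ACTIONS = ["forward", "left", "right", "stop"]

rng = np.random.default_rng(0)
# A single linear layer stands in for the trained visual policy network.
W = rng.normal(scale=0.01, size=(64 * 64, len(ACTIONS)))

def policy(image: np.ndarray) -> str:
    """End-to-end step: raw 64x64 grayscale frame in, action out."""
    x = image.astype(np.float32).ravel() / 255.0  # normalize pixels
    logits = x @ W                                # image features -> action scores
    return ACTIONS[int(np.argmax(logits))]

frame = rng.integers(0, 256, size=(64, 64))       # stand-in camera frame
action = policy(frame)
print(action)  # one element of ACTIONS
```

The point of the sketch is the interface, not the network: perception and control collapse into one function, so no separate localization, mapping, or planning module appears anywhere in the loop.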
“…It is worth mentioning that some emerging studies [113]–[115] propose different driving algorithms that avoid the need for localization and mapping stages, and instead sense the environment and directly produce end-to-end driving decisions. This is known as the behavior reflex approach [113].…”
Section: Ego-localization and Mapping
confidence: 99%
“…Several works have used a single perception sensor such as a 2-D lidar or raw data from a depth camera to train collision avoidance behaviors in a robot using expert demonstrations [Pfeiffer et al., 2016] and imitation learning [Tai et al., 2018]. End-to-end methods [Kim et al., 2018] and methods using deep double-Q networks [Xie et al., 2017] have also been proposed; a policy was trained with PPO [Schulman et al., 2017] using a 2-D lidar in [Long et al., 2017]. This approach was extended to a hybrid control architecture [Fan et al., 2018b], which switched between different policies to optimize the navigation.…”
Section: Learning-based Collision Avoidance
confidence: 99%
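The deep double-Q networks mentioned in the statement above rest on the double Q-learning update rule. A tabular sketch of that rule follows; the toy state/action sizes and learning constants are assumptions for illustration and are not from [Xie et al., 2017].

```python
import numpy as np

# Toy problem sizes and hyperparameters (assumptions, for illustration).
n_states, n_actions = 4, 2
alpha, gamma = 0.5, 0.9

rng = np.random.default_rng(1)
Q1 = np.zeros((n_states, n_actions))
Q2 = np.zeros((n_states, n_actions))

def double_q_update(s: int, a: int, r: float, s_next: int) -> None:
    """Double Q-learning: select the next action with one table but
    evaluate it with the other, curbing Q-learning's overestimation bias."""
    if rng.random() < 0.5:
        best = int(np.argmax(Q1[s_next]))                        # select with Q1
        Q1[s, a] += alpha * (r + gamma * Q2[s_next, best] - Q1[s, a])
    else:
        best = int(np.argmax(Q2[s_next]))                        # select with Q2
        Q2[s, a] += alpha * (r + gamma * Q1[s_next, best] - Q2[s, a])

# One transition with reward 1.0 moves one of the two tables toward it.
double_q_update(0, 1, 1.0, 2)
print(Q1[0, 1] + Q2[0, 1])
```

In the deep variant, the two tables become two networks (online and target), but the core decoupling of action selection from action evaluation is exactly the update shown here.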
“…In recent years, there has been significant work on learning-based collision avoidance for mobile robots operating in such dense scenarios. These include techniques based on end-to-end deep learning [Kim et al., 2018; Pfeiffer et al., 2017] and generative adversarial imitation learning [Tai et al., 2018].…”
Section: Introduction
confidence: 99%