2020
DOI: 10.1016/j.asoc.2020.106694

Real-time deep reinforcement learning based vehicle routing and navigation

Abstract: This is a repository copy of Real-time deep reinforcement learning based vehicle routing and navigation.

Cited by 61 publications (20 citation statements)
References 33 publications (49 reference statements)

“…Koh et al [11] presented a deep reinforcement learning method to enable real-time interaction between vehicles and complex urban environments. By defining tasks as a sequence of decisions, a real-time intelligent vehicle routing and navigation system is constructed.…”
Section: Reinforcement Learning In Routing Planning (mentioning)
confidence: 99%
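To make the cited framing concrete, the sketch below shows navigation cast as a sequential decision problem, with a small Q-network scoring candidate next road segments. This is not Koh et al.'s implementation: the state features, the four-segment action set, the negative-travel-time reward, and the names QNetwork, select_action, and td_update are all illustrative assumptions.

```python
# Minimal sketch (not the cited authors' code): choosing the next road segment
# as a sequential decision problem, with a small MLP Q-function.
# State encoding, action count, and reward shaping are illustrative assumptions.
import random
import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 8      # assumed features: position, goal offset, local traffic, etc.
NUM_ACTIONS = 4    # assumed choices: candidate outgoing road segments

class QNetwork(nn.Module):
    """Small MLP mapping a state vector to one Q-value per candidate segment."""
    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

q_net = QNetwork(STATE_DIM, NUM_ACTIONS)
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.99, 0.1

def select_action(state: torch.Tensor) -> int:
    """Epsilon-greedy choice of the next road segment."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax().item())

def td_update(state, action, reward, next_state, done):
    """One-step TD update toward r + gamma * max_a' Q(s', a')."""
    q_pred = q_net(state)[action]
    with torch.no_grad():
        q_target = reward + gamma * q_net(next_state).max() * (0.0 if done else 1.0)
    loss = nn.functional.mse_loss(q_pred, q_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Toy interaction loop with a placeholder environment transition.
state = torch.zeros(STATE_DIM)
for _ in range(100):
    action = select_action(state)
    next_state = torch.randn(STATE_DIM)   # stand-in for a traffic simulator step
    reward = -1.0                         # e.g. negative travel time per step
    td_update(state, action, reward, next_state, done=False)
    state = next_state
```

In a real system the transitions would come from a traffic simulator or live road network, and the plain Q-learning loop would typically be stabilized with a replay buffer and a target network.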
“…The reinforcement learning paradigm fits naturally into the navigation tasks and has encouraged a wide range of practical studies. In this work we focus on vehicle navigation Koh et al (2020); Kiran et al (2021); Stafylopatis and Blekas (1998); Deshpande and Spalanzani (2019). Previous work can be separated into two major categories: navigation with human interaction Thomaz et al (2006); Wang et al (2003); Hemminahaus and Kopp (2017) and navigation without human interaction Kahn et al (2018); Ross et al (2008).…”
Section: RL For Navigation System (mentioning)
confidence: 99%
“…We conduct extensive experiments (60+ hours) on real human participants with diverse driving experiences as shown in Fig. 1, which is significantly more challenging than most previous works that only perform experiments with simulated agents instead of real humans Koh et al (2020); Nagabandi et al (2018b). The user study shows that our algorithm outperforms the baselines in following optimal routes and it is better at enforcing safety where the collision rate is significantly reduced by more than 60%.…”
Section: Introduction (mentioning)
confidence: 99%
“…[18] formulated the dynamic shortest path problem as a mixed-integer programming problem. [19] has used RL for real-time vehicle navigation and routing. However, both of these studies have tackled a general routing problem, and signal pre-emption and its influence on traffic have not been modeled.…”
Section: Supplementary Materials S1 Related Work (mentioning)
confidence: 99%