2018
DOI: 10.1007/978-3-030-01234-2_27

End-to-End Learning of Driving Models with Surround-View Cameras and Route Planners

Abstract: For human drivers, having rear and side-view mirrors is vital for safe driving. They deliver a more complete view of what is happening around the car. Human drivers also heavily exploit their mental map for navigation. Nonetheless, several methods have been published that learn driving models with only a front-facing camera and without a route planner. This lack of information renders the self-driving task quite intractable. We investigate the problem in a more realistic setting, which consists of a surround-v…

Cited by 166 publications (195 citation statements)
References 79 publications (98 reference statements)
“…End2End control papers mainly employ either deep neural networks trained offline on real‐world and/or synthetic data (Bechtel et al, ; Bojarski et al, ; C. Chen, Seff, Kornhauser, & Xiao, ; Eraqi et al, ; Fridman et al, ; Hecker et al, ; Rausch et al, ; Xu et al, ; S. Yang et al, ), or DRL systems trained and evaluated in simulation (Jaritz et al, ; Perot, Jaritz, Toromanoff, & Charette, ; Sallab et al, 2017b). Methods for porting simulation trained DRL models to real‐world driving have also been reported (Wayve, 2018), as well as DRL systems trained directly on real‐world image data (Pan et al, , ).…”
Section: Motion Controllers for AI-Based Self-Driving Cars
confidence: 99%
“…This kind of processing allows the training of large and complex network architectures, which in turn require huge amounts of training samples (see Section 8). End2End control papers mainly employ either deep neural networks trained offline on real-world and/or synthetic data (Bechtel et al, 2018; Bojarski et al, 2016; C. Chen, Seff, Kornhauser, & Xiao, 2015; Eraqi et al, 2017; Fridman et al, 2017; Hecker et al, 2018; Rausch et al, 2017; Xu et al, 2017; S. Yang et al, 2017a), or DRL systems trained and evaluated in simulation (Jaritz et al, 2018; Perot, Jaritz, Toromanoff, & Charette, 2017; Sallab et al, 2017b).…”
confidence: 99%
“…Not only is this impractical if our goal is to continue in the left direction, but it can also result in unsafe behaviour in which the DNN oscillates between left and right without ever picking either direction. Aiming to provide autonomous vehicles with contextual awareness, Hecker et al [137] collected a data set with a 360-degree view from 8 cameras and a driver following a route plan. This data set was then used to train a DNN to predict steering wheel angle and velocities from example images and route plans in the data set.…”
Section: Simultaneous Lateral and Longitudinal Control Systems
confidence: 99%
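The statements above describe the core idea of the paper's model: fuse features from multiple surround-view cameras with route-planner information and regress low-level controls (steering angle, speed). The toy numpy sketch below illustrates only this fusion-and-regression structure; the encoder, layer shapes, and weight names are illustrative stand-ins, not the architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_camera(img):
    # Stand-in for a CNN encoder: global average pooling to a per-camera feature vector.
    return img.mean(axis=(0, 1))

def driving_model(camera_images, route_plan, W_fuse, W_head):
    # Concatenate per-camera features with the route-planner feature vector,
    # then regress the two control outputs through a small MLP head.
    feats = np.concatenate([encode_camera(im) for im in camera_images] + [route_plan])
    hidden = np.tanh(feats @ W_fuse)
    steering_angle, speed = hidden @ W_head
    return steering_angle, speed

# 8 surround-view cameras with small 3-channel images; a short route-plan vector.
cams = [rng.random((32, 32, 3)) for _ in range(8)]
plan = rng.random(4)
W_fuse = rng.standard_normal((8 * 3 + 4, 16)) * 0.1   # fusion layer weights
W_head = rng.standard_normal((16, 2)) * 0.1           # control-output head
angle, speed = driving_model(cams, plan, W_fuse, W_head)
```

In a real system the per-camera encoder would be a trained CNN and the head would be learned by imitation from the recorded driver data, but the data flow (cameras + route plan in, steering and speed out) is the same.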
“…Hecker et al. [HDVG18] learn a novel end-to-end driving model by integrating information from surrounding 360-degree-view cameras with a route planner. The network used in this approach directly maps the sensor outputs to low-level driving maneuvers including steering angles and speed.…”
Section: Applications in Autonomous Driving
confidence: 99%
“…Drive360 [HDVG18] includes 60 h of driving video from eight surround-view cameras. Low-level driving maneuvers (e.g.…”
Section: Computer Graphics Forum, © 2019 Eurographics
confidence: 99%