2019 International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2019.8793668
Learning to Drive from Simulation without Real World Labels

Abstract: Simulation can be a powerful tool for understanding machine learning systems and designing methods to solve real-world problems. Training and evaluating methods purely in simulation is often "doomed to succeed" at the desired task in a simulated environment, but the resulting models are incapable of operation in the real world. Here we present and evaluate a method for transferring a vision-based lane following driving policy from simulation to operation on a rural road without any real-world labels. Our appro…

Cited by 63 publications (55 citation statements)
References 31 publications
“…The raw objects are represented by a list of points with global and local coordinates, normals, color attributes, and semantic labels. Other works synthesize challenges on CAD data by introducing noise simulated by Gaussians [4,12] or created with a parametric model [6]. Recently, the trend of sim2real [3] also aims to bridge the gap between synthetic and real data. As in the experiment with synthetic data, we sample all raw objects to 1024 points as input to the networks, and all methods were trained using only the local (x, y, z) coordinates.…”
Section: Data Collection (mentioning)
confidence: 99%
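The augmentation described in the snippet above — downsampling each raw object to a fixed 1024 points and perturbing the (x, y, z) coordinates with Gaussian noise — can be sketched as follows. This is a minimal illustration using NumPy; the function names and the noise scale `sigma` are assumptions, not the cited papers' exact parameters.

```python
import numpy as np

def sample_points(points, n=1024, seed=0):
    """Randomly sample a fixed number of points (with replacement if the
    object has fewer than n points)."""
    rng = np.random.default_rng(seed)
    replace = points.shape[0] < n
    idx = rng.choice(points.shape[0], size=n, replace=replace)
    return points[idx]

def add_gaussian_noise(points, sigma=0.01, seed=0):
    """Perturb local (x, y, z) coordinates with zero-mean Gaussian noise,
    creating a 'synthetic challenge' version of the clean CAD points."""
    rng = np.random.default_rng(seed)
    return points + rng.normal(0.0, sigma, size=points.shape)

cloud = np.random.rand(5000, 3)      # a raw object: 5000 points of (x, y, z)
sampled = sample_points(cloud)       # fixed-size input for the network
noisy = add_gaussian_noise(sampled)  # noise-augmented variant
```

Only the local coordinates are used here, matching the snippet's note that the networks were trained on (x, y, z) alone.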
“…Given visual observations of the environment (i.e., camera images), our system learns a lane-stable control policy over a wide variety of different road and environment types, as opposed to current end-to-end systems [2], [3], [8], [9], which only imitate human behavior. This is a major advancement, as there does not currently exist a scalable method for training autonomous vehicle control policies that goes beyond imitation learning and can generalize to and navigate previously unseen roads and complex, near-crash situations.…”
Section: Fig. 1, Training and Deployment of Policies from Data-driven Simulation (mentioning)
confidence: 99%
“…Training agents in simulation capable of robust generalization when deployed in the real world is a long-standing goal in many areas of robotics [9]- [12]. Several works have demonstrated transferable policy learning using domain randomization [13] or stochastic augmentation techniques [14] on smaller mobile robots.…”
Section: Related Work (mentioning)
confidence: 99%
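The domain randomization referenced above [13] varies the visual appearance of simulated observations so that a policy trained in simulation does not overfit to the renderer's specific look. A minimal sketch of such a stochastic visual augmentation, assuming images as NumPy arrays in [0, 1] (the ranges below are illustrative, not values from the cited works):

```python
import numpy as np

def randomize_image(img, rng):
    """Apply random brightness, contrast, and per-channel color scaling --
    a simple form of visual domain randomization for simulated frames."""
    brightness = rng.uniform(-0.2, 0.2)
    contrast = rng.uniform(0.8, 1.2)
    channel_scale = rng.uniform(0.9, 1.1, size=3)  # per-RGB-channel tint
    out = (img - 0.5) * contrast + 0.5 + brightness
    out = out * channel_scale
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(42)
sim_frame = np.random.rand(64, 64, 3)   # a simulated camera frame in [0, 1]
augmented = randomize_image(sim_frame, rng)
```

Training on many such randomized variants encourages the policy to rely on task-relevant structure (e.g., lane geometry) rather than rendering-specific color statistics.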
“…Accordingly, the driving policy can be easily adapted to the real environment. A method for transferring a vision-based lane-following driving policy from a simulated to a real environment is presented in [40], where a model for end-to-end driving is constructed by learning to translate between simulated and real images, jointly learning a control policy from this common latent space using labels from an expert driver in the simulated environment. It was shown that the proposed system is capable of leveraging simulation to learn a driving policy that transfers directly to real-world scenarios.…”
Section: Advanced Data Augmentation (mentioning)
confidence: 99%
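The core idea in the snippet above — encoding both simulated and real images into one shared latent space and attaching a single control head to that space — can be illustrated with a deliberately tiny toy model. This is a conceptual sketch only: the actual system in [40] learns the translation adversarially with deep networks, whereas the linear encoders and dimensions below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IMG, D_LATENT = 256, 32   # illustrative dimensions, not the paper's

# Separate encoders map each domain into one common latent space.
W_sim = rng.normal(0, 0.1, (D_LATENT, D_IMG))
W_real = rng.normal(0, 0.1, (D_LATENT, D_IMG))
# A single policy head reads only the latent, so the same controller
# serves both domains; it is trained with simulated expert labels only.
w_policy = rng.normal(0, 0.1, D_LATENT)

def steer_from_sim(x):
    return np.tanh(w_policy @ (W_sim @ x))

def steer_from_real(x):
    return np.tanh(w_policy @ (W_real @ x))

sim_img = rng.random(D_IMG)
real_img = rng.random(D_IMG)
a_sim = steer_from_sim(sim_img)    # steering command in (-1, 1)
a_real = steer_from_real(real_img)
```

The transfer claim rests on the encoders aligning the two domains in the latent space, so that a policy fit on simulated labels remains valid for real inputs.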