2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01252

Learning by Watching

Cited by 23 publications (7 citation statements). References 41 publications.

Citation statements:

“…Our idea of training the ego motion planner using data from all vehicles is closely related to Filos et al [18] and Zhang and Ohn-Bar [49]. Filos et al [18] extends offline reinforcement learning to learn from other agents' behaviors.…”
Section: Related Work
confidence: 99%
“…Filos et al [18] extends offline reinforcement learning to learn from other agents' behaviors. Zhang and Ohn-Bar [49] train a privileged imitation learning policy that learns from other vehicles in a scene. Their policy…”
Section: Related Work
confidence: 99%
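The Zhang and Ohn-Bar approach quoted above turns every vehicle in a scene into a source of supervision for the ego planner. As a hypothetical illustration only (the function name, the 0.1 s timestep, and the finite-difference action recovery below are assumptions, not the paper's method), a minimal sketch of converting other vehicles' observed trajectories into (state, action) training pairs might look like:

import numpy as np

def trajectories_to_training_pairs(trajectories, dt=0.1):
    # Hypothetical sketch: turn other vehicles' observed 2-D position
    # tracks into (state, action) supervision for an ego motion planner.
    # Each trajectory is a (T, 2) array; "actions" are recovered by
    # finite differences, a stand-in for a real inverse dynamics model.
    pairs = []
    for traj in trajectories:
        velocities = np.diff(traj, axis=0) / dt          # (T-1, 2)
        for state, action in zip(traj[:-1], velocities):
            pairs.append((state, action))                # supervise planner
    return pairs

# Usage: two observed non-ego vehicles, each a short position track.
other_vehicles = [np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1]]),
                  np.array([[5.0, 2.0], [5.0, 3.0]])]
dataset = trajectories_to_training_pairs(other_vehicles)
print(len(dataset), "training pairs")

In the actual systems, actions would come from an inverse dynamics model or privileged simulator state rather than finite differences; the point of the sketch is only the data-construction idea of treating all observed agents as demonstrators.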
“…Observational Imitation Learning: Our key idea is to leverage the scale and diversity of easily available online ego-centric navigation data to learn a robust conditional imitation learning policy [13,17]. While learning from labeled demonstrations can significantly simplify the challenging vision-based policy learning task [3,6,11,12,31,32,41,44,49,50,54,58,60,80,83,84], observed images in our settings are not labeled with corresponding actions of a demonstrator. We therefore work to generalize current conditional imitation learning (CIL) approaches [13,17,18] to learn, from unlabeled image observations, an agent that can navigate in complex urban scenarios.…”
Section: Related Work
confidence: 99%
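The conditional imitation learning (CIL) family cited above [13, 17] conditions the driving policy on a high-level navigation command (e.g. turn-left, turn-right, go-straight), commonly realized as command-specific output branches over a shared image encoder. The following is a minimal PyTorch sketch of that branched pattern; the class name, layer sizes, and three-command setup are assumptions for illustration, not the published architecture:

import torch
import torch.nn as nn

class BranchedCILPolicy(nn.Module):
    # Minimal conditional imitation learning sketch: a shared image
    # encoder feeds one action head per navigation command.
    def __init__(self, num_commands=3, action_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(                 # toy image encoder
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.branches = nn.ModuleList(                # one head per command
            nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                          nn.Linear(64, action_dim))
            for _ in range(num_commands))

    def forward(self, image, command):
        feat = self.encoder(image)                    # (B, 32) features
        out = torch.stack([b(feat) for b in self.branches], dim=1)
        return out[torch.arange(image.size(0)), command]  # pick branch

policy = BranchedCILPolicy()
img = torch.randn(4, 3, 96, 96)                       # batch of RGB frames
cmd = torch.tensor([0, 2, 1, 0])                      # navigation commands
actions = policy(img, cmd)                            # (4, 2) control output

The branch selection is what makes the policy "conditional": the command routes the same visual features through a dedicated head, which is the mechanism the quoted work generalizes to unlabeled observations.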
“…While there is a large body of literature on end-to-end RL on perception data for urban driving (Codevilla et al, 2018;Ohn-Bar et al, 2020;Codevilla et al, 2019;Chen et al, 2020;Zhang and Ohn-Bar, 2021;Prakash et al, 2021;Zhang and Ohn-Bar, 2021), there are significantly fewer works on the same topic for high-speed racing. We hypothesize this may partly be attributed to the lack of open-source, high-fidelity simulation environments for racing, in comparison to the ubiquity of CARLA simulator Dosovitskiy et al (2017a) for urban driving research.…”
Section: Related Work
confidence: 99%