2016
DOI: 10.1177/0278364916679498

1 year, 1000 km: The Oxford RobotCar dataset

Abstract: We present a challenging new dataset for autonomous driving: the Oxford RobotCar Dataset. Over the period of May 2014 to December 2015 we traversed a route through central Oxford twice a week on average using the Oxford RobotCar platform, an autonomous Nissan LEAF. This resulted in over 1000 km of recorded driving with almost 20 million images collected from 6 cameras mounted to the vehicle, along with LIDAR, GPS and INS ground truth. Data was collected in all weather conditions, including heavy rain, night, direct sunlight and snow.
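To give a feel for working with a traversal-based dataset like this, here is a minimal sketch of indexing camera frames by timestamp. The two-column `<timestamp_us> <chunk>` line layout and the `<timestamp>.png` filename convention are assumptions for illustration, not confirmed against the official release format.

```python
# Hypothetical sketch: index camera frames from a RobotCar-style
# timestamps file. Assumed (not confirmed) conventions: one line per
# frame, "<microsecond_timestamp> <chunk_id>", images named
# "<timestamp>.png" inside the per-camera directory.
from pathlib import Path

def index_frames(timestamps_text, image_dir):
    """Map each microsecond timestamp to its expected image path."""
    frames = []
    for line in timestamps_text.strip().splitlines():
        timestamp_us, _chunk = line.split()
        frames.append((int(timestamp_us), Path(image_dir) / f"{timestamp_us}.png"))
    return frames

# Usage with a synthetic two-frame timestamps file:
sample = "1418381798086342 1\n1418381798149589 1\n"
frames = index_frames(sample, "stereo/centre")
print(len(frames))        # 2
print(frames[0][1].name)  # 1418381798086342.png
```

With roughly 20 million images across traversals, building such an index once per traversal (rather than globbing directories repeatedly) keeps frame lookup by timestamp cheap.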

Cited by 1,319 publications (1,116 citation statements)
References 27 publications
“…For example, the images are usually taken from arbitrary angles and are not annotated with navigation information or paired with other sensor data. In response to this need, the robotics community has released a number of datasets focusing on active areas of research such as autonomous driving (Cordts et al; Geiger, Lenz, Stiller, & Urtasun; X. Huang et al; Maddern, Pascoe, Linegar, & Newman; Pandey, McBride, & Eustice; Yu et al). By nature, these data are domain specific and do not necessarily translate to agricultural applications.…”
Section: Introductionmentioning
confidence: 99%
“…Our approach differentiates itself from existing solutions on various fronts as shown in Table I. We evaluate the performance of our proposed approach on various publicly-available datasets including the KITTI dataset [21], the Multi-FOV synthetic dataset [27] (pinhole, fisheye, and catadioptric lenses), an omnidirectional-camera dataset [28], and on the Oxford Robotcar 1000km Dataset [29].…”
Section: Methodsmentioning
confidence: 99%
“…In our experiments, we found that while our proposed solution was sufficiently powerful to model different camera optics, it was significantly better at modeling pinhole lenses as compared to fisheye and catadioptric lenses. [Figure caption] Sensor fusion with learned ego-motion: On fusing our proposed VO method with intermittent GPS updates (every 150 frames, black circles), the pose-graph optimized ego-motion solution (in green) achieves sufficiently high accuracy relative to ground truth. We test on a variety of publicly-available datasets including (a) Multi-FOV synthetic dataset [27] (pinhole shown above), (b) an omnidirectional-camera dataset [28], (c) Oxford Robotcar 1000km Dataset [29] (2015-11-13-10-28-08), (d-h) KITTI dataset [21]. Weak supervision such as GPS measurements can be especially advantageous in recovering improved estimates for localization, while simultaneously minimizing uncertainties associated with pure VO-based approaches.…”
Section: B Varied Camera Opticsmentioning
confidence: 99%
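The idea the excerpt above describes, drifting visual odometry corrected by occasional absolute fixes, can be illustrated with a deliberately simplified 1D sketch. This is only an illustration of the principle; it is not the cited pose-graph optimization method, and all names and numbers here are made up for the example.

```python
# Minimal 1D sketch of fusing drifting relative-motion estimates (VO)
# with intermittent absolute position fixes (GPS): dead-reckon between
# fixes, then snap the running estimate to each fix when one arrives.
# Illustrative only; a real system would optimize a pose graph instead
# of overwriting the state.

def fuse(relative_motions, gps_fixes):
    """relative_motions: per-step displacement estimates (with drift).
    gps_fixes: dict mapping step index -> absolute position fix.
    Returns the fused trajectory of positions."""
    position = 0.0
    trajectory = []
    for step, delta in enumerate(relative_motions):
        position += delta
        if step in gps_fixes:           # intermittent absolute update
            position = gps_fixes[step]  # cancel accumulated drift
        trajectory.append(position)
    return trajectory

# Odometry overestimates each 1.0 m step by 10%; a fix at step 3 removes
# the drift accumulated so far.
traj = fuse([1.1, 1.1, 1.1, 1.1, 1.1], {3: 4.0})
print(traj[-1])  # approximately 5.1: drift accrues only since the last fix
```

This mirrors the excerpt's point: even sparse absolute measurements bound the error growth of a pure VO pipeline, since drift can only accumulate between consecutive fixes.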
“…The Oxford data set (Maddern et al.): Provided by Oxford University, UK, the data set collection spanned over 1 year, resulting in over 1,000 km of recorded driving with almost 20 million images collected from six cameras mounted to the vehicle, along with LiDAR, GPS, and INS ground truth. Data were collected in all weather conditions, including heavy rain, night, direct sunlight, and snow.…”
Section: Data Sources For Training Autonomous Driving Systemsmentioning
confidence: 99%