2012
DOI: 10.1002/rob.21444

Lighting‐invariant Visual Teach and Repeat Using Appearance‐based Lidar

Abstract: Visual Teach and Repeat (VT&R) is an effective method to enable a vehicle to repeat any previously driven route using just a visual sensor and without a global positioning system. However, one of the major challenges in recognizing previously visited locations is lighting change, as this can drastically alter the appearance of the scene. In an effort to achieve lighting invariance, this paper details the design of a VT&R system that uses a laser scanner as the primary sensor. Unlike a traditional scan‐matching…
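The teach-and-repeat idea in the abstract can be sketched minimally: a teach pass stores keyframes along the driven route, and a repeat pass localizes the current observation against the nearest stored keyframe. The toy below is an illustrative sketch only, with a placeholder descriptor standing in for the paper's appearance-based lidar features; the names `teach`, `localize`, and `descriptor` are hypothetical, not from the paper.

```python
import math

def descriptor(pose):
    # Stand-in for an appearance-based descriptor; a real system would
    # extract visual features from the lidar intensity image instead.
    x, y, heading = pose
    return (round(x, 1), round(y, 1))

def teach(path):
    """Teach pass: store a keyframe (pose + descriptor) at each step."""
    return [{"pose": pose, "desc": descriptor(pose)} for pose in path]

def localize(keyframes, obs_desc):
    """Repeat pass: return the keyframe whose descriptor best matches."""
    def dist(d):
        return math.hypot(d[0] - obs_desc[0], d[1] - obs_desc[1])
    return min(keyframes, key=lambda kf: dist(kf["desc"]))

# Teach a short straight route, then localize a slightly perturbed pose.
route = [(float(i), 0.0, 0.0) for i in range(5)]
kfs = teach(route)
match = localize(kfs, descriptor((2.2, 0.1, 0.0)))
print(match["pose"])  # nearest taught pose: (2.0, 0.0, 0.0)
```

In the actual system, the descriptor distance would be replaced by feature matching against the lidar intensity image, which is what gives the method its lighting invariance.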

Cited by 37 publications (28 citation statements: 2 supporting, 26 mentioning, 0 contrasting)
References 68 publications
“…It is possible for the quality of the INS calibration relative to the camera system to degrade, as well as for biases to be introduced by the nature of the DGPS system that is employed within the INS. Similar difficulties using DGPS for ground‐truth assessment have been reported in other papers (Churchill & Newman, ; McManus, Furgale, Stenning, & Barfoot, ). We have omitted several recordings from the evaluation, where clear biases, or missing samples, were observed for the INS.…”
Section: Results (supporting; confidence: 80%)
“…There are a number of ways to address long‐term robustness, one of which is using alternative sensing modalities. For example, lidar has good lighting invariance, and can be used as a standalone sensor (Barfoot et al, ; Maddern, Pascoe, & Newman, ; McManus, Furgale, Stenning, & Barfoot, ), complementary sensor, or to build high‐fidelity maps against which to localize using cameras (Pascoe, Maddern, Stewart, & Newman, ; Wolcott & Eustice, ). This paper focuses on pure vision, motivated by exploring how far we can take a single sensor before worrying about integration in a multisensor system.…”
Section: Introduction (mentioning; confidence: 99%)
“…Furgale and Barfoot's system has been extended to other 3D sensors such as lidar [16] and RGB-D cameras, but a monocular implementation has not been forthcoming. While monocular cameras are appealing in terms of size, cost, and simplicity, perhaps the most compelling motivation for using monocular vision for VT&R is the plethora of existing mobile robots that would benefit from it.…”
Section: Introduction (mentioning; confidence: 99%)