2016
DOI: 10.1007/978-3-319-27702-8_36
Monocular Visual Teach and Repeat Aided by Local Ground Planarity

Abstract: Visual Teach and Repeat (VT&R) allows an autonomous vehicle to repeat a previously traversed route without a global positioning system. Existing implementations of VT&R typically rely on 3D sensors such as stereo cameras for mapping and localization, but many mobile robots are equipped with only 2D monocular vision for tasks such as teleoperated bomb disposal. While simultaneous localization and mapping (SLAM) algorithms exist that can recover 3D structure and motion from monocular images, the scale ambiguit…
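The paper's central idea, recovering metric scale for monocular VT&R by assuming the ground is locally planar, can be illustrated with a standard ray-plane intersection: back-project a pixel through the camera intrinsics, rotate the resulting ray into the vehicle frame using the known camera-to-vehicle extrinsics, and intersect it with the ground plane at the camera's known height. The sketch below shows only this general geometric technique; the function and variable names are hypothetical and are not taken from the paper's implementation.

```python
import numpy as np

def pixel_to_ground(u, v, K, R_wc, c_w, n=np.array([0., 0., 1.]), d=0.0):
    """Intersect the viewing ray of pixel (u, v) with the plane n.x + d = 0.

    K    : 3x3 camera intrinsic matrix
    R_wc : rotation taking camera-frame vectors into the vehicle (world) frame
    c_w  : camera centre in the vehicle frame (known from extrinsic calibration)
    Returns the 3D ground point in the vehicle frame, or None if the ray is
    (nearly) parallel to the plane or meets it behind the camera.
    """
    ray_c = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-projected ray, camera frame
    ray_w = R_wc @ ray_c                              # same ray, vehicle frame
    denom = n @ ray_w
    if abs(denom) < 1e-9:
        return None                                   # ray parallel to the ground plane
    lam = -(n @ c_w + d) / denom                      # ray parameter at the plane
    if lam <= 0:
        return None                                   # intersection behind the camera
    return c_w + lam * ray_w
```

With the camera 1.5 m above the ground and pitched down 45 degrees, the ray through the principal point strikes the ground 1.5 m ahead of the vehicle, giving a metrically scaled landmark position from a single image.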

Cited by 10 publications
(10 citation statements)
References 26 publications
“…In our previous conference paper (Clement et al., ), we noted that our monocular VT&R system is less robust to lighting and self‐similar textures than its stereo counterpart. In this work, we address this tradeoff through the use of color‐constant imagery that is robust to shadows and illumination changes in scenes with vegetation, rocks, and sand (Paton et al., ; Ratnasingam & Collins, ).…”
Section: Discussion
confidence: 78%
“…Their system has since been extended to other natively three‐dimensional (3D) sensor configurations including appearance‐based lidar (McManus, Furgale, Stenning, & Barfoot, ), multiple stereo cameras (Paton, Pomerleau, & Barfoot, ), and RGB‐D cameras. Recently, Clement, Kelly, and Barfoot () investigated the use of a much simpler sensor configuration—a single 2D monocular camera—in a VT&R system, with 3D information inferred from approximations of local scene geometry and the known position and orientation of the camera relative to the vehicle.…”
Section: Introduction
confidence: 99%
“…This approach aims to land on an unstructured surface, using a position estimate relative to the drone's take-off path to guide it back. The method is inspired by Visual Teach and Repeat [16], where the take-off is the teach pass and the landing is the repeat pass. Fraczek et al. [17] present an embedded vision system for automated drone landing site detection.…”
Section: Landing on Unstructured Area
confidence: 99%
“…Combining both a teach-and-replay feature-based method and a segmentation-based approach, De Cristóforis et al. (2015) developed an autonomous navigation method, an improved version of the Chen and Birchfield (2009) method, capable of operating in both indoor and outdoor environments. Following the same logic, a teach-and-replay technique aided by local ground planarity is used by Clement et al. (2015) in an autonomous navigation system tested in both indoor and outdoor environments. However, as mentioned by De Cristóforis et al. (2015), the main drawback of teach-and-replay methods lies in the fact that the robot workspace is limited to the regions mapped during the training step.…”
Section: Introduction
confidence: 99%