2016 · DOI: 10.1002/rob.21669
Expanding the Limits of Vision‐based Localization for Long‐term Route‐following Autonomy

Abstract: Vision‐based, autonomous, route‐following algorithms enable robots to autonomously repeat manually driven routes over long distances. Through the use of inexpensive, commercial vision sensors, these algorithms have the potential to enable robotic applications across multiple industries. However, in order to extend these algorithms to long‐term autonomy, they must be able to operate over long periods of time. This poses a difficult challenge for vision‐based systems in unstructured and outdoor environments, whe…

Cited by 32 publications (35 citation statements) · References 34 publications
“…Our direct localization pipeline operates in both mapping (VO) and relocalization modes in a similar vein to topometric visual teach-and-repeat navigation [13], [14], where the camera follows a similar trajectory during both mapping and relocalization phases. As the camera explores the environment in mapping mode, we generate a list of posed keyframes with corresponding image and depth data, creating new keyframes when the translational or rotational distance between the most recent keyframe pose and the current tracking pose exceeds a preset threshold.…”
Section: Keyframe Mapping and Relocalization
confidence: 99%
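The keyframe-spawning rule quoted above — create a new keyframe whenever the relative translational or rotational distance between the last keyframe pose and the current tracking pose exceeds a preset threshold — can be sketched as follows. This is a minimal illustration, not code from the cited work; the threshold values and the 4×4 homogeneous-transform pose representation are assumptions:

```python
import numpy as np

def rotation_angle(R):
    """Geodesic angle (radians) of a 3x3 rotation matrix."""
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

def should_create_keyframe(T_keyframe, T_current,
                           trans_thresh=0.5,           # meters (illustrative)
                           rot_thresh=np.deg2rad(10)): # radians (illustrative)
    """Return True if the relative pose between the most recent keyframe
    and the current tracking pose exceeds either threshold.
    Poses are 4x4 homogeneous camera-to-world transforms."""
    T_rel = np.linalg.inv(T_keyframe) @ T_current
    trans_dist = np.linalg.norm(T_rel[:3, 3])   # translational distance
    rot_dist = rotation_angle(T_rel[:3, :3])    # rotational distance
    return trans_dist > trans_thresh or rot_dist > rot_thresh
```

In a mapping loop, this predicate would be evaluated once per tracked frame, and a passing frame would be stored with its pose, image, and depth data as the new reference keyframe.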
“…To this end, we investigated the use of our trained CAT models for teach-and-repeat-style metric relocalization [13], [14] by first creating a map in the canonical condition, then relocalizing against it in different conditions using both the original and transformed images. Figure 6 shows sample relocalization errors for the "Morning" and "Sunset" conditions of the VKITTI/0001 trajectory using both the original image streams (resized and cropped to 256 × 192 resolution) and the output of our trained model, while Table II summarizes our relocalization results for all ETHL/syn and VKITTI sequences.…”
Section: B. Visual Relocalization
confidence: 99%
“…The problems of navigating without GPS and a highly detailed map can be addressed separately in a less computationally expensive manner. Techniques have been proposed in recent years based on accurate self‐localization in mapped environments in an attempt to reduce the vehicle's reliance upon GPS. However, these methods still rely heavily on a priori map information, as they require either a lidar or vision sensor layer to be included in the environmental map.…”
Section: Introduction
confidence: 99%
“…Techniques have been proposed in recent years based on accurate self-localization in mapped environments in an attempt to reduce the vehicle's reliance upon GPS [20]–[24]. However, these methods still rely heavily on a priori map information, as they require either a lidar or vision sensor layer to be included in the environmental map. Likewise, sufficient localization in an extended GPS blackout can be provided by a posterior pose algorithm, which augments GPS and an inertial navigation system with vision-based measurements of nearby lanes and stop lines referenced to a known map of environmental features.…”
Section: Introduction
confidence: 99%
“…The work in Ref. presents an algorithm that is invariant to lighting conditions, but it relies on feature points and requires multiple stereo cameras. The work in Refs.…”
Section: Introduction
confidence: 99%