2006
DOI: 10.1002/rob.20159

The visual compass: Performance and limitations of an appearance‐based method

Abstract: In this article we present an algorithm to estimate the orientation of a robot relative to an orientation specified at the beginning of the process. This is done by computing the rotation of the robot between successive panoramic images, grabbed on the robot while it moves, using a subsymbolic method to match the images. The context of the work is simultaneous localization and mapping (SLAM) in unstructured and unmodified environments. As such, very few assumptions are made about the environment and the robot'…
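A minimal sketch of the overall scheme the abstract describes, assuming unwrapped grayscale panoramic images supplied as an iterable of 2-D NumPy arrays; the callable estimate_rotation_deg is a hypothetical stand-in for the paper's subsymbolic matching step, not its actual implementation:

def compass_headings(images, estimate_rotation_deg):
    """Chain pairwise rotation estimates between successive panoramic
    images into a heading relative to the orientation at the first image.
    `estimate_rotation_deg(prev, curr)` is any matcher that returns the
    rotation between two frames in degrees."""
    images = iter(images)
    prev = next(images)
    heading = 0.0
    for curr in images:
        heading += estimate_rotation_deg(prev, curr)  # relative step
        prev = curr
        yield heading  # orientation w.r.t. the starting orientation

Because every step is measured only relative to the previous frame, small per-step errors accumulate over time, which is the drift issue raised in the Discussion citation below.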

Cited by 59 publications (76 citation statements); references 29 publications.

Citation statements, ordered by relevance:
“…As the visual compass described in [10,11] only compares the latest two images, small errors in estimation add up over time, making their results difficult to compare.…”
Section: Discussion (mentioning)
confidence: 99%
“…More closely related to our work, an appearance-based visual compass is developed in [10,11]. The robot compares each captured image with the previous one, and computes the Manhattan distance between the two images.…”
Section: Related Work (mentioning)
confidence: 99%
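A sketch of the comparison step described in [10,11] as summarized above, assuming the panoramas are unwrapped into 2-D grayscale NumPy arrays whose columns span a full turn of azimuth; the Manhattan (L1) distance is evaluated for every column shift of the previous image, and the smallest value indicates the rotation. This illustrates the idea rather than reproducing the authors' implementation:

import numpy as np

def manhattan_rotation(prev_img, curr_img):
    """Rotation (in columns) between two unwrapped panoramic images:
    compare the current image against every column-shifted copy of the
    previous one using the Manhattan (L1) distance and pick the shift
    with the smallest distance."""
    prev = prev_img.astype(np.float64)
    curr = curr_img.astype(np.float64)
    n_cols = prev.shape[1]
    dists = [np.abs(np.roll(prev, s, axis=1) - curr).sum()
             for s in range(n_cols)]
    best = int(np.argmin(dists))
    # Report shifts beyond half a turn as negative rotations.
    return best - n_cols if best > n_cols // 2 else best

With the image width covering 360 degrees, the returned column shift converts to degrees as shift * 360.0 / n_cols, and such a matcher could serve as estimate_rotation_deg in the loop sketched after the abstract.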
“…• Several visual compass methods have been developed, most of them in the DID framework, which extract the azimuthal orientation difference from two panoramic images [25,26,35-38,43-47]. Two panoramic images are rotated relative to each other in azimuthal direction and a pixel-wise distance is computed.…”
Section: Feature-Based vs. Holistic Methods (mentioning)
confidence: 99%
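The rotate-and-compare scheme described in this statement can be written as a rotational image difference curve; the sketch below again assumes unwrapped 2-D NumPy panoramas and uses the sum of squared pixel differences as one possible choice of pixel-wise distance:

import numpy as np

def rotational_difference(img_a, img_b):
    """Pixel-wise distance between img_b and img_a for every relative
    azimuthal rotation (one value per column shift).  The location of
    the minimum of this curve is the estimated orientation difference."""
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    return np.array([((np.roll(b, s, axis=1) - a) ** 2).sum()
                     for s in range(a.shape[1])])

# Example: convert the position of the minimum into degrees.
# curve = rotational_difference(reference_view, current_view)
# rotation_deg = curve.argmin() * 360.0 / reference_view.shape[1]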
“…Vertical and horizontal differences are simply stacked in a joint vector and compared by the difference measure. An example closely related to min-warping is the visual compass [25,44]. Here entire images, in this case sets of all horizontal and vertical differences, are compared by the difference measure for varying relative azimuthal orientation between the two images.…”
Section: Correlation of Edge-Filtered 2D Image Patches (mentioning)
confidence: 99%
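A sketch of the stacking described above, assuming unwrapped 2-D NumPy panoramas: horizontal and vertical first differences are concatenated into one joint vector, and the joint vectors are compared for every relative azimuthal orientation. The L1 measure used here is a placeholder, not necessarily the difference measure of min-warping or of the visual compass [25,44]:

import numpy as np

def edge_stack(img):
    """Horizontal and vertical first differences of a panoramic image,
    stacked into a single joint vector.  The horizontal difference wraps
    around because the image is cyclic in azimuth."""
    f = img.astype(np.float64)
    horiz = np.roll(f, -1, axis=1) - f   # difference along azimuth (wraps)
    vert = f[1:, :] - f[:-1, :]          # difference along elevation
    return np.concatenate([horiz.ravel(), vert.ravel()])

def rotation_from_edges(img_a, img_b):
    """Compare the joint edge vectors for every relative azimuthal
    orientation of img_b; the column shift with the smallest distance is
    the estimated rotation (in columns)."""
    ref = edge_stack(img_a)
    dists = [np.abs(edge_stack(np.roll(img_b, s, axis=1)) - ref).sum()
             for s in range(img_b.shape[1])]
    return int(np.argmin(dists))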
“…[20] A slightly different approach is the template matching method [22-24]. It avoids the problem of finding and tracking features and instead looks at the change in the appearance of the world (images). For that purpose, it takes a template or patch from an image and tries to match it in the previous image.…”
Section: Methods (mentioning)
confidence: 99%
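A sketch of the template-matching variant, assuming 2-D grayscale NumPy images: a patch is cut from the centre of the current image and slid horizontally over the previous image, and the displacement of the best match (minimum sum of squared differences) approximates the rotation between the frames. The patch width and the exhaustive search are illustrative choices, not those of [22-24]:

import numpy as np

def template_rotation(prev_img, curr_img, patch_w=40):
    """Horizontal displacement (in columns) of a patch taken from the
    centre of the current image and matched against the previous image
    by exhaustive sum-of-squared-differences search."""
    prev = prev_img.astype(np.float64)
    curr = curr_img.astype(np.float64)
    w = curr.shape[1]
    x0 = (w - patch_w) // 2
    patch = curr[:, x0:x0 + patch_w]          # template from the current image
    ssd = [((prev[:, x:x + patch_w] - patch) ** 2).sum()
           for x in range(w - patch_w + 1)]
    best_x = int(np.argmin(ssd))
    return best_x - x0                        # displacement of the match = rotation in columns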