2005
DOI: 10.1109/tro.2004.839228
Vision-based global localization and mapping for mobile robots

Abstract: We have previously developed a mobile robot system which uses scale-invariant visual landmarks to localize and simultaneously build three-dimensional (3-D) maps of unmodified environments. In this paper, we examine global localization, where the robot localizes itself globally, without any prior location estimate. This is achieved by matching distinctive visual landmarks in the current frame to a database map. A Hough transform approach and a RANSAC approach for global localization are compared, showi…
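The abstract's RANSAC approach to global localization can be illustrated with a minimal sketch. The paper matches 3-D SIFT landmarks against a database map; the sketch below simplifies this to hypothetical 2-D landmark correspondences, samples two matches per iteration to hypothesize a rigid pose (rotation plus translation), and keeps the hypothesis with the most inliers. All names, thresholds, and the 2-D simplification are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def estimate_rigid_2d(src, dst):
    """Hypothesize a 2-D rotation + translation from two point correspondences."""
    (x1, y1), (x2, y2) = src
    (u1, v1), (u2, v2) = dst
    ang = math.atan2(v2 - v1, u2 - u1) - math.atan2(y2 - y1, x2 - x1)
    ang = math.atan2(math.sin(ang), math.cos(ang))  # wrap to (-pi, pi]
    c, s = math.cos(ang), math.sin(ang)
    tx = u1 - (c * x1 - s * y1)
    ty = v1 - (s * x1 + c * y1)
    return ang, tx, ty

def apply_pose(pose, p):
    """Transform a map point into the observed frame under the given pose."""
    ang, tx, ty = pose
    c, s = math.cos(ang), math.sin(ang)
    return (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

def ransac_localize(matches, iters=200, tol=0.1, seed=0):
    """matches: list of (map_point, observed_point) pairs, possibly with outliers.
    Returns the pose hypothesis with the largest inlier set, plus its inliers."""
    rng = random.Random(seed)
    best_pose, best_inliers = None, []
    for _ in range(iters):
        a, b = rng.sample(matches, 2)
        pose = estimate_rigid_2d((a[0], b[0]), (a[1], b[1]))
        inliers = [m for m in matches
                   if math.dist(apply_pose(pose, m[0]), m[1]) < tol]
        if len(inliers) > len(best_inliers):
            best_pose, best_inliers = pose, inliers
    return best_pose, best_inliers
```

Because only two correspondences are needed per hypothesis, a handful of correct landmark matches among many outliers suffices to recover the robot's pose without any prior location estimate, which is the point the abstract makes.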

Cited by 423 publications (212 citation statements)
References 32 publications
“…Landmark-based localization methods rely on the assumption that landmarks can be detected and accurately interpreted from raw sensor readings [2], [5]. However, the interpretation of sensor readings into an accurate geometric representation is complex and error-prone.…”
Section: Robot Navigation
confidence: 99%
“…Stephen Se et al. proposed vision-based simultaneous localization and mapping by tracking SIFT (Scale-Invariant Feature Transform) features [15]. Our approach is to build 3-D submaps by tracking SURF features and recovering depth.…”
Section: Previous Work
confidence: 99%
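Both the SIFT tracking in the original paper and the SURF tracking in the citing work rest on the same primitive: matching local feature descriptors between a frame and a map, typically filtered by Lowe's ratio test (accept a nearest neighbour only if it is clearly closer than the second-nearest). A minimal sketch of that test, with illustrative names and a hypothetical ratio threshold, might look like:

```python
def match_descriptors(query, database, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test.

    query, database: lists of (id, descriptor) pairs, where each descriptor
    is a tuple of floats (e.g. a 128-D SIFT or 64-D SURF vector).
    Returns (query_id, database_id) pairs that pass the ratio test.
    """
    def dist2(a, b):
        # Squared Euclidean distance between two descriptors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    matches = []
    for qid, qdesc in query:
        ranked = sorted(database, key=lambda item: dist2(qdesc, item[1]))
        if len(ranked) >= 2:
            best, second = ranked[0], ranked[1]
            # Comparing squared distances, so square the ratio threshold too.
            if dist2(qdesc, best[1]) < (ratio ** 2) * dist2(qdesc, second[1]):
                matches.append((qid, best[0]))
    return matches
```

Ambiguous features (two database descriptors at nearly the same distance) are rejected rather than guessed, which keeps the correspondence set clean enough for the RANSAC or Hough verification stage discussed in the abstract.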
“…The SIFT algorithm has become very popular in several robotics applications, as can be seen in Se et al. (2001), Se et al. (2005), and Ledwich and Williams (2004), and it introduces several invariance properties that are especially useful when extracting features directly from omnidirectional images, as is the case in this work. Rotational invariance is important because detected objects can appear in any orientation depending on the angle between them and the robot; scale invariance matters as well, since resolution decreases rapidly in the outer ring of the image, changing the apparent size of observed objects.…”
Section: Feature Extraction
confidence: 99%