Proceedings of the 2005 IEEE International Conference on Robotics and Automation
DOI: 10.1109/robot.2005.1570627

Localization for Mobile Robots using Panoramic Vision, Local Features and Particle Filter

Abstract: In this paper we present a vision-based approach to self-localization that uses a novel scheme to integrate feature-based matching of panoramic images with Monte Carlo localization. A specially modified version of Lowe's SIFT algorithm is used to match features extracted from local interest points in the image, rather than using global features calculated from the whole image. Experiments conducted in a large, populated indoor environment (up to 5 persons visible) over a period of several months demons…
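The abstract describes combining particle-filter (Monte Carlo) localization with SIFT matching of panoramic images. The sketch below illustrates that combination at a minimal level, assuming OpenCV's standard SIFT implementation rather than the paper's modified version; the database layout, motion-noise parameters, and the particle-weighting function are illustrative assumptions, not details taken from the paper.

```python
import cv2
import numpy as np

# Hypothetical sketch: Monte Carlo localization whose measurement update
# is driven by SIFT matching against reference panoramic images with known poses.
sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def match_score(query_img, ref_img):
    """Count SIFT matches that pass Lowe's ratio test between two images."""
    _, des_q = sift.detectAndCompute(query_img, None)
    _, des_r = sift.detectAndCompute(ref_img, None)
    if des_q is None or des_r is None:
        return 0
    good = 0
    for pair in matcher.knnMatch(des_q, des_r, k=2):
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < 0.8 * n.distance:  # ratio test
            good += 1
    return good

def mcl_step(particles, weights, odometry, query_img, database):
    """One predict / update / resample cycle of the particle filter.

    particles: (N, 3) array of (x, y, theta) pose hypotheses
    odometry:  length-3 array (dx, dy, dtheta) of motion since the last step
    database:  list of (pose, image) reference pairs; pose is an (x, y, theta) array
    """
    # Predict: apply odometry plus Gaussian noise to every particle.
    particles = particles + odometry + np.random.normal(0, 0.05, particles.shape)

    # Update: weight each particle by the SIFT similarity between the current
    # panoramic image and the reference image closest to that particle.
    for i, p in enumerate(particles):
        _, ref_img = min(database, key=lambda d: np.linalg.norm(d[0][:2] - p[:2]))
        weights[i] = 1.0 + match_score(query_img, ref_img)
    weights = weights / weights.sum()

    # Resample particles in proportion to their weights.
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

In this sketch the image-matching score only reweights pose hypotheses; the odometry model supplies the motion prediction, which mirrors the general structure of Monte Carlo localization described in the abstract.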

Cited by 89 publications (74 citation statements)
References 13 publications
“…This indicates that the method can cope with 3-d motions to a certain extent, and we would expect a graceful degradation in map accuracy as the roughness of the terrain increases. The representation should still be useful for self-localization using 2-d odometry and image similarity, e.g., using the global localization method in [18]. In extreme cases, of course, it is possible that the method would create inconsistent maps, and a 3-d representation should be considered.…”
Section: Discussion (mentioning)
Confidence: 99%
“…Vision-based odometry or localization algorithms are usually evaluated using either front-facing cameras [1], [11], [16], [19] or cameras pointed to the side [7], [20], [21] and using average focal length lenses. Some researchers have proposed using panoramic [3], [18] or omni-directional imagery [15], which could potentially improve localization by using a much wider field of view (FOV). However, our analysis shows that a wider FOV does not necessarily improve localization accuracy.…”
Section: Related Work (mentioning)
Confidence: 99%
“…Two broad methods can be discerned: those that model local features [2] and distinctive parts of images [26], and those that extract global representations of images and learn from them [22], [10], [3]. The latter is closer to this work and includes the CENTRIST-based VPC system [33] and Spatial Pyramid Matching [15].…”
Section: Related Work (mentioning)
Confidence: 99%