1996
DOI: 10.1109/70.481751

Mobile robot self-location using model-image feature correspondence

Abstract: The problem of establishing reliable and accurate correspondence between a stored 3-D model and a 2-D image of it is important in many computer vision tasks, including model-based object recognition, autonomous navigation, pose estimation, airborne surveillance, and reconnaissance. This paper presents an approach to solving this problem in the context of autonomous navigation of a mobile robot in an outdoor urban, man-made environment. The robot's environment is assumed to consist of polyhedral buildings. The 3-D…

Cited by 78 publications (29 citation statements)
References 28 publications
“…Sensor planning problems require considering a number of constraints, first of all the visibility constraint. Although in general the problem addressed is 3D, in some cases it can be restricted to 2D [2,7,11]. This is for instance the case of buildings, which can be modeled as objects obtained by extrusion.…”
Section: Introduction
Confidence: 99%
“…The most commonly used sensors are sonar [1], [2], laser or infrared rangefinders [3]-[5], monocular vision [6]-[10], and stereo vision [11]-[14]. In this work, we show that multisensor integration using simple and inexpensive sensor processing allows a mobile robot to precisely locate itself, and at the same time makes the robotic system more robust.…”
Confidence: 78%
“…Cooperation of both sensors can be used to obtain better precision and robustness in the extraction of high-level features such as corners or doors [15]. 2) Matching observations with the map is known to be a difficult problem, especially with monocular vision systems [6], [10] because they obtain very incomplete information about the location of natural landmarks. However, this problem is much simpler using a laser rangefinder.…”
Confidence: 99%
“…This can be done by first establishing the matching by human interaction and then tracing the selected features frame-by-frame in the course of the navigation (provided they are always visible from the robot). Otherwise, the robot hypothesizes a match based on known clues (e.g., brightness, color, and shape) and then validates the resulting 3-D position, e.g., by comparing it with that obtained by integrating the history of the motion, or by examining the image to check whether features that should be observed from that position actually exist (Yagi et al. 1995; Talluri and Aggarwal 1996).…”
Section: Feature-Matching Procedures
Confidence: 99%
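The hypothesize-and-validate procedure described in the statement above can be sketched in a few lines. This is a minimal illustration, not the method of any cited paper: the feature attributes, thresholds, and function names are all illustrative assumptions, and real systems would match richer descriptors and estimate pose from the full correspondence set.

```python
# Hypothesize-and-validate feature matching: a minimal sketch, assuming
# features are described only by a scalar brightness attribute and poses
# are simple tuples. All names and thresholds here are illustrative.

def hypothesize_matches(image_features, model_features, max_attr_dist=0.2):
    """Hypothesize a match for each image feature: pick the model feature
    whose appearance attribute (here, brightness) is closest, provided the
    difference stays below a threshold; otherwise leave it unmatched."""
    matches = []
    for img in image_features:
        best, best_d = None, max_attr_dist
        for mdl in model_features:
            d = abs(img["brightness"] - mdl["brightness"])
            if d < best_d:
                best, best_d = mdl, d
        if best is not None:
            matches.append((img, best))
    return matches

def validate_pose(estimated_pose, odometry_pose, tol=0.5):
    """Validate the hypothesis: accept it only if the 3-D position it
    implies agrees, within a tolerance, with the position obtained by
    integrating the motion history (odometry)."""
    return all(abs(e - o) <= tol for e, o in zip(estimated_pose, odometry_pose))

if __name__ == "__main__":
    image = [{"brightness": 0.8}, {"brightness": 0.1}]
    model = [{"brightness": 0.75}, {"brightness": 0.5}]
    matched = hypothesize_matches(image, model)
    ok = validate_pose((1.0, 2.0, 0.1), (1.1, 2.2, 0.0))
    print(len(matched), ok)
```

The second image feature (brightness 0.1) finds no model feature within the threshold and is left unmatched; a rejected hypothesis would typically trigger re-matching rather than a position update.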