2007
DOI: 10.1007/s11263-007-0042-3

Vision-Based SLAM: Stereo and Monocular Approaches

Abstract: Building a spatially consistent model is a key functionality to endow a mobile robot with autonomy. Without an initial map or an absolute localization means, this requires solving the localization and mapping problems concurrently. For this purpose, vision is a powerful sensor, because it provides data from which stable features can be extracted and matched as the robot moves. But it does not directly provide 3D information, which is a difficulty for estimating the geometry of the environment. This article pres…
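The abstract's point that image features constrain only bearings until the camera (or a second camera) observes them from a displaced position can be made concrete with a small triangulation example. The following is a minimal sketch, not taken from the article; the intrinsics, poses, and landmark are made-up illustration values, and linear (DLT) triangulation stands in for whatever estimator a real SLAM system would use.

```python
import numpy as np

# Minimal sketch: one view fixes only a ray through a feature; a second view from a
# displaced camera pins the landmark down by triangulation. All values are assumed.

K = np.array([[500., 0., 320.],
              [0., 500., 240.],
              [0., 0., 1.]])            # pinhole intrinsics (assumed)

def projection(R, t):
    """3x4 projection matrix P = K [R | t]."""
    return K @ np.hstack([R, t.reshape(3, 1)])

# Camera 1 at the origin; camera 2 translated 0.5 m along x (the baseline created
# by robot motion in the monocular case, or built into a stereo rig).
P1 = projection(np.eye(3), np.zeros(3))
P2 = projection(np.eye(3), np.array([-0.5, 0., 0.]))

X_true = np.array([1.0, 0.5, 8.0])      # a landmark 8 m in front of the cameras

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

u1, u2 = project(P1, X_true), project(P2, X_true)

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one point from two views."""
    A = np.vstack([u1[0] * P1[2] - P1[0],
                   u1[1] * P1[2] - P1[1],
                   u2[0] * P2[2] - P2[0],
                   u2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

print(triangulate(P1, P2, u1, u2))      # recovers approximately [1.0, 0.5, 8.0]
```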

Cited by 218 publications (107 citation statements); references 41 publications.
“…In theory, using stereo cameras [18,39] solves the problem of requiring the camera to travel, since the baseline required to triangulate features is built-in. In practice, however, using stereo cameras is only a partial remedy, since the baseline has to be significant in relation to the distance to the environment in order to reliably estimate depth.…”
Section: Further Related Work (mentioning)
confidence: 99%
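The baseline-versus-distance argument in the statement above follows from the pinhole stereo relation z = f·b/d, for which a fixed disparity error maps to a depth error growing roughly as z²/(f·b). The sketch below simply evaluates this first-order relation; the focal length, baseline, and disparity error are illustrative assumptions, not values from the cited papers.

```python
# First-order stereo depth-error sketch: z = f * b / d, so an error dd in the
# matched disparity gives a depth error of roughly (z**2 / (f * b)) * dd.
# All numbers are assumed for illustration.

f = 700.0   # focal length in pixels (assumed)
b = 0.12    # stereo baseline in metres (assumed)
dd = 0.5    # plausible disparity matching error in pixels (assumed)

for z in (2.0, 5.0, 10.0, 20.0, 40.0):
    d = f * b / z                     # disparity observed for a point at depth z
    dz = (z ** 2 / (f * b)) * dd      # first-order depth error caused by dd
    print(f"z = {z:5.1f} m   disparity = {d:6.2f} px   depth error ~ {dz:6.2f} m")
```

With these assumed numbers the depth error stays below half a metre out to about 10 m, but approaches 10 m of error at 40 m range, which is the sense in which a small baseline only partially remedies the problem.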
“…Ego-motion is provided by either visual odometry [1,15], SLAM [21,22,23] (simultaneous localization and mapping), or the inertial motion sensors of the vehicle. Stixel motion is obtained by computing optical flow correspondences.…”
Section: Fig (mentioning)
confidence: 99%
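As a concrete illustration of the "optical flow correspondences" mentioned above, the sketch below tracks sparse corners between two consecutive frames with pyramidal Lucas-Kanade in OpenCV. It assumes OpenCV is installed and that two grayscale frames frame_0.png and frame_1.png exist on disk; it is a generic recipe, not the citing paper's pipeline.

```python
import cv2
import numpy as np

# Sparse optical-flow correspondences via pyramidal Lucas-Kanade (generic recipe).
# Assumes two consecutive grayscale frames frame_0.png and frame_1.png are available.
prev = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

# Detect corners in the first frame, then track them into the second frame.
pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=500, qualityLevel=0.01, minDistance=7)
pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None,
                                               winSize=(21, 21), maxLevel=3)

ok = status.ravel() == 1                              # keep successfully tracked points
flow = (pts_curr[ok] - pts_prev[ok]).reshape(-1, 2)   # per-feature motion vectors
print(f"tracked {ok.sum()} features, median flow = {np.median(flow, axis=0)}")
```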
“…Though using vision-based information "all the time" was initially explored (e.g., using video frame-to-frame relative rotation/translation information [4], or using more sophisticated vision SLAM approaches [5], [6]), this path was not pursued further due to concerns about both processing power requirements and overall robustness. The vision algorithms currently in the system have instead been implemented as a module that may or may not provide measurements, depending on the circumstances.…”
Section: Introduction (mentioning)
confidence: 99%
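The "frame-to-frame relative rotation/translation" idea referenced in [4] is commonly realized by estimating the essential matrix between matched features and decomposing it into a relative pose. The sketch below shows that generic recipe with OpenCV; the intrinsic matrix and the matched point arrays pts1/pts2 are assumed inputs (e.g. from the feature tracking sketch above), and the monocular translation is recovered only up to scale.

```python
import cv2
import numpy as np

# Relative camera rotation R and (unit-norm) translation t between two frames from
# matched image points, via the essential matrix. K is a made-up pinhole intrinsic
# matrix; pts1/pts2 are assumed Nx2 float arrays of corresponding pixel coordinates.
K = np.array([[700., 0., 640.],
              [0., 700., 360.],
              [0., 0., 1.]])

def relative_pose(pts1, pts2, K):
    """Estimate the relative pose (R, t) of frame 2 with respect to frame 1."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t   # t is only determined up to scale from monocular data
```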