2012 IEEE International Conference on Robotics and Automation
DOI: 10.1109/icra.2012.6224864
Using depth in visual simultaneous localisation and mapping

Abstract: We present a method of utilizing depth information, as provided by RGBD sensors, for robust real-time visual simultaneous localisation and mapping (SLAM) by augmenting monocular visual SLAM to take depth data into account. The method is implemented on top of the freely available software "Parallel Tracking and Mapping" (PTAM) by Georg Klein. Our modifications allow PTAM to be used as a 6D visual SLAM system even without any additional information from odometry or an inertial measurement unit.

Cited by 33 publications (22 citation statements)
References 12 publications
“…There exists, however, at least one example for sparse stereo matching with a forward-facing camera pair [14]. Here, the sparsely matched features are used for a modified version of the SLAM method presented in [17], which itself is an extension of PTAM that incorporates depth information. Because sparse stereo matching is much faster than any dense algorithm, this MAV can maintain a high pose estimation rate of 30 Hz.…”
Section: Related Work
confidence: 99%
“…2 The successfully matched features are used for estimating the current MAV pose. A SLAM system is employed for this task, which is based on the method proposed in [17]. This method is an adaptation of PTAM that incorporates depth information, which we receive from stereo matching.…”
Section: Processing Of Forward-facing Cameras
confidence: 99%
“…KinectFusion [3], which uses coarse-to-fine iterative closest point with projective data association implemented on graphics hardware for registration with the map. The authors themselves in [4] introduced bundle adjustment with depth constraints, which allows us to easily extend monocular visual SLAM systems like the very efficient PTAM system [5] to also utilize depth measurements of RGBD data. This method was shown to enable autonomous flight of a MAV with a stereo camera in [6], but has some systematic limitations: For optimizing the full map, PTAM relies on global bundle adjustment, which quickly becomes computationally infeasible in real time for large numbers of keyframes.…”
Section: Related Work
confidence: 99%
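The citation above refers to bundle adjustment with depth constraints: each observation contributes not only the usual 2D reprojection error but also a residual on the measured depth. A minimal sketch of such a combined residual is shown below; the function and variable names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def depth_augmented_residual(K, R, t, point_w, obs_uv, obs_depth):
    """Residual for one observation: 2D reprojection error plus a depth term.

    K         -- 3x3 camera intrinsics
    R, t      -- world-to-camera rotation (3x3) and translation (3-vector)
    point_w   -- map point in world coordinates (3-vector)
    obs_uv    -- measured pixel coordinates (u, v)
    obs_depth -- measured depth at that pixel (e.g. from an RGBD sensor
                 or stereo matching)
    """
    p_cam = R @ point_w + t                     # point in camera frame
    uvw = K @ p_cam                             # homogeneous projection
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]     # projected pixel
    reproj = np.array([u - obs_uv[0], v - obs_uv[1]])
    depth_err = p_cam[2] - obs_depth            # extra depth constraint
    return np.concatenate([reproj, [depth_err]])
```

In a real system this residual would be stacked over all observations and minimized over poses and map points by a nonlinear least-squares solver, with the depth term typically weighted by the sensor's depth uncertainty.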
“…Deviating from [4], we need to modify our bundle adjustment formulation to accommodate the relative representation chosen for this work. Given two keyframes, let us call map points that were measured not only in their source keyframe S but also in the other keyframe O relevant map points.…”
Section: B Relative Bundle Adjustment With Depth Constraints
confidence: 99%