Linear RGB-D SLAM for Planar Environments
2018 · DOI: 10.1007/978-3-030-01225-0_21

Cited by 68 publications (40 citation statements) · References 32 publications
“…We compared our proposed approach with five methods: ORB-SLAM2 [10], DVO [24], InfiniTAM [25], LPVO [27], and L-SLAM [28]. ORB-SLAM2 is a state-of-the-art point-based SLAM system; DVO estimates robust poses from a joint photometric and depth error, using the color and depth images together; InfiniTAM estimates camera poses from the RGB and depth images with a GPU in real time; LPVO exploits lines and planes to estimate zero-drift rotation and then estimates the 3D poses from tracked points in MW scenes; L-SLAM estimates the camera position and plane landmarks with a linear SLAM formulation in MW environments.…”
Section: Results · Citation type: mentioning · Confidence: 99%
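The joint photometric and depth error that DVO minimizes can be illustrated with a minimal sketch. This is not the authors' implementation: the function names (`backproject`, `project`, `photometric_depth_residuals`), the nearest-neighbor pixel lookup, and the depth weighting are assumptions made for illustration, and a real tracker would minimize a robust norm of these residuals over the pose with an iterative solver such as Gauss-Newton.

```python
import numpy as np

def backproject(u, v, z, K):
    """Lift pixel (u, v) with depth z to a 3D point using intrinsics K."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def project(p, K):
    """Project a 3D point onto the image plane."""
    return (K[0, 0] * p[0] / p[2] + K[0, 2],
            K[1, 1] * p[1] / p[2] + K[1, 2])

def photometric_depth_residuals(I1, Z1, I2, Z2, R, t, K, w_depth=1.0):
    """Evaluate joint photometric + depth residuals for a candidate pose.

    For every pixel of frame 1 with a valid depth: back-project it,
    transform by (R, t) into frame 2, re-project, and compare both the
    intensity and the depth observed there. Here we only evaluate the
    residuals; a tracker would minimize them over (R, t)."""
    h, w = I1.shape
    residuals = []
    for v in range(h):
        for u in range(w):
            z = Z1[v, u]
            if z <= 0:                      # no depth measurement
                continue
            p2 = R @ backproject(u, v, z, K) + t
            if p2[2] <= 0:                  # behind the camera
                continue
            u2, v2 = project(p2, K)
            iu, iv = int(round(u2)), int(round(v2))
            if not (0 <= iu < w and 0 <= iv < h) or Z2[iv, iu] <= 0:
                continue
            r_photo = float(I2[iv, iu]) - float(I1[v, u])  # intensity error
            r_depth = Z2[iv, iu] - p2[2]                   # depth error
            residuals.append((r_photo, w_depth * r_depth))
    return np.asarray(residuals)

# Tiny sanity check: identical frames and identity pose give ~zero error.
I = np.random.rand(8, 8); Z = np.ones((8, 8))
K = np.array([[50.0, 0, 4], [0, 50.0, 4], [0, 0, 1]])
print(np.abs(photometric_depth_residuals(I, Z, I, Z, np.eye(3),
                                          np.zeros(3), K)).max())
```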
“…In the work of Kim et al. [27], lines and planes were exploited to estimate drift-free rotation, and the translation was recovered by minimizing the de-rotated reprojection error. Kim et al. [28] also proposed a linear SLAM method based on the Bayesian filtering framework for MW scenes. These methods have achieved good SLAM performance in MW scenes, but when the MW assumption is invalid, MW-based methods fail to estimate the pose or reconstruct the map.…”
Section: Related Work · Citation type: mentioning · Confidence: 99%
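To see why the MW assumption admits a linear SLAM formulation of the kind [28] describes: once the rotation is anchored to the Manhattan frame, plane normals are axis-aligned and the camera-to-plane distance is a linear function of the camera position and the plane offsets, so a plain Kalman filter can estimate both jointly. The toy below sketches this idea under those assumptions; the state layout, measurements, and noise values are illustrative, not the paper's.

```python
import numpy as np

# Toy linear SLAM in a Manhattan world: with rotation fixed, a plane with
# axis-aligned normal n and offset d satisfies n . x = d, and the measured
# camera-to-plane distance z = d - n . p_cam is LINEAR in the state
# [p_cam, d_1, ..., d_M], so a standard Kalman filter applies.

def kalman_update(x, P, z, H, R_meas):
    """One linear Kalman measurement update."""
    S = H @ P @ H.T + R_meas
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# State: camera position (3) + offsets of two planes with normals ex, ey.
x = np.zeros(5)
P = np.eye(5)
normals = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0])]

# Simulated distance measurements to each plane.
for plane_idx, (n, z_meas) in enumerate(zip(normals, [2.0, 3.0])):
    H = np.zeros((1, 5))
    H[0, :3] = -n                 # -n . p_cam term
    H[0, 3 + plane_idx] = 1.0     # +d_plane term
    x, P = kalman_update(x, P, np.array([z_meas]), H, np.eye(1) * 0.01)

print("estimated state:", x)
```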
“…ORB-SLAM2 [7] is a state-of-the-art feature-point-based visual SLAM system with an RGB-D implementation. L-SLAM [25] is an RGB-D SLAM system using planes and MW constraints. Note that we test ORB-SLAM2 using the open-source code provided by the authors, and we include the results of L-SLAM from [25] directly.…”
Section: Methods · Citation type: mentioning · Confidence: 99%
“…Zhou et al [24] utilize mean-shift to track dominant directions of MW and achieve drift-free rotation by decoupling the estimation of rotation and translation. Some other works [25,26,27] also exploit planes of MW to estimate drift-free rotation. These algorithms work well in some specific scenes, but they are also easy to fail because the MW assumption is not valid for some scenes.…”
Section: Related Workmentioning
confidence: 99%
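A minimal sketch of the mean-shift idea, assuming surface normals have already been extracted from the depth image: each MW axis is tracked on the unit sphere by repeatedly shifting it toward a kernel-weighted mean of nearby normals. The function name and bandwidth are hypothetical, not taken from Zhou et al.

```python
import numpy as np

def mean_shift_axis(normals, axis, bandwidth=0.2, iters=10):
    """Track one dominant Manhattan-world direction on the unit sphere.

    Each iteration weights the surface normals near the current axis with
    a Gaussian kernel on angular distance and moves the axis toward their
    weighted mean (a spherical mean-shift step)."""
    axis = axis / np.linalg.norm(axis)
    for _ in range(iters):
        # Flip normals into the hemisphere around the current axis.
        signs = np.sign(normals @ axis)
        signs[signs == 0] = 1.0
        aligned = normals * signs[:, None]
        d = 1.0 - aligned @ axis          # ~ half squared angular distance
        w = np.exp(-d / (bandwidth ** 2))
        mean = (w[:, None] * aligned).sum(axis=0)
        axis = mean / np.linalg.norm(mean)
    return axis

# Usage: noisy normals clustered around the z-axis converge to it.
rng = np.random.default_rng(0)
normals = np.array([0.0, 0.0, 1.0]) + 0.05 * rng.standard_normal((500, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
print(mean_shift_axis(normals, np.array([0.1, 0.0, 0.9])))
```

In a full system this step runs once per frame for each of the three MW axes, and the recovered axes give the drift-free rotation that the translation estimate is then decoupled from.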
“…Out of all these shapes, the major ones naturally create “obstacles” or define “edges” beyond which the vehicles cannot protrude, and these planes can be extracted and employed as features or landmarks in SLAM applications. Indeed, owing to the geometric simplicity of planes and their abundance in human-inhabited environments, planar features have attracted increasing attention from both the computer graphics and robotics communities in recent years [16,17,18]. As for the smaller plane shapes, they may come from the profiles of small objects or even moving objects such as vehicles.…”
Section: Introduction · Citation type: mentioning · Confidence: 99%
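A common way to extract such planar landmarks from depth data is a total-least-squares fit of the back-projected points in a candidate region, typically inside a RANSAC loop to reject outliers. The sketch below shows the SVD-based fit; it is a generic illustration, not a method from any of the cited works.

```python
import numpy as np

def fit_plane(points):
    """Fit a plane n . x = d to an (N, 3) point array by total least
    squares: the normal is the right singular vector of the centered
    points with the smallest singular value (least-variance direction)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    d = float(n @ centroid)
    return n, d

# Usage: points sampled near the plane z = 1.
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-1, 1, 200),
                       rng.uniform(-1, 1, 200),
                       1.0 + 0.01 * rng.standard_normal(200)])
n, d = fit_plane(pts)
print(n, d)   # normal close to +/-(0, 0, 1), offset close to +/-1
```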