2021
DOI: 10.1109/access.2021.3053188
Survey and Evaluation of RGB-D SLAM

Abstract: Traditional visual SLAM systems take a monocular or stereo camera as the input sensor and require complex map-initialization and map-point-triangulation steps for 3D map reconstruction; these steps are prone to failure, computationally expensive, and can introduce noisy measurements. The emergence of the RGB-D camera, which provides an RGB image together with depth information, changes this situation. While a number of RGB-D SLAM systems have been proposed in recent years, the current classification research on RGB-D SLAM is very lac…

Cited by 47 publications (20 citation statements)
References 63 publications (78 reference statements)
“…At present, for SLAM algorithms, RGB-D cameras have been introduced and three-dimensional maps can be created in real-time, and a variety of different RGB-D SLAM algorithms have been proposed. Most of these RGB-D SLAM are used for indoor localization and object dense reconstruction [9]. Mono SLAM [10] and ORB-SLAM2 [11] based on the feature point method can directly obtain the camera's pose in space and the sparse point cloud map, but the obtained map cannot be directly used for navigation.…”
Section: SLAM (Simultaneous Localization and Mapping)
confidence: 99%
“…Moreover, the learning based methods are mostly trained and tested on data belonging the same domain, e.g. LiDAR data in outdoors, which cannot generalise well to other domains without retraining or finetuning, such as sparser point clouds reconstructed with vision-based SLAM systems [36]. Differently, our approach exploits advanced deep local 3D descriptors that are trained with point clouds extracted from RGBD sensors, estimates the 6DoF transformation between a pair of point clouds, and measures the overlap, serving for the loop closure detection task with little domain gap.…”
Section: Related Work
confidence: 99%
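The overlap measurement mentioned in the statement above can be illustrated with a minimal sketch: count a point of one cloud as "overlapping" when it has a neighbor in the other cloud within a distance threshold. The threshold `tau` and the brute-force nearest-neighbor search are illustrative assumptions only; the cited approach actually scores overlap via learned deep 3D descriptors, not raw distances.

```python
import numpy as np

def overlap_ratio(a, b, tau):
    """Fraction of points in cloud `a` (N,3) that have a neighbor in
    cloud `b` (M,3) closer than `tau`. Brute-force O(N*M) distances;
    a real system would use a KD-tree or voxel hashing instead."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    return float((d.min(axis=1) < tau).mean())
```

A high ratio in both directions suggests the two point clouds view the same place, which is the signal a loop-closure detector thresholds on.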
“…Zhang et al [11] give an overview of current RGB-D SLAM algorithms. The typical method for localizing the sensor pose in the TSDF is frame-to-model geometric ICP [1], where a back-projected point cloud from the previous position is used with the point-to-plane metric and Gauss Newton for minimizing the registration error.…”
Section: Related Work
confidence: 99%
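The frame-to-model tracking described in the statement above, point-to-plane ICP minimized with Gauss-Newton, can be sketched in a few lines. This is a generic single-step sketch, not the cited system's implementation: correspondences and model normals are assumed to be given (in a KinectFusion-style pipeline they come from projective association against the raycast TSDF model), and the standard small-angle linearization of the rotation is used.

```python
import numpy as np

def point_to_plane_icp_step(src, dst, normals):
    """One Gauss-Newton step of point-to-plane ICP.

    src: (N,3) current-frame points, dst: (N,3) corresponding model points,
    normals: (N,3) unit normals at the model points.
    Linearizes the residual n_i . (R*p_i + t - q_i) around identity and
    returns (w, t): a small rotation vector and a translation."""
    r = np.einsum('ij,ij->i', normals, src - dst)     # (N,) point-to-plane residuals
    J = np.hstack([np.cross(src, normals), normals])  # (N,6) Jacobian rows [p_i x n_i, n_i]
    x = np.linalg.solve(J.T @ J, -J.T @ r)            # normal equations J^T J x = -J^T r
    return x[:3], x[3:]

def apply_small_motion(pts, w, t):
    """Apply the estimated motion as a proper rotation (Rodrigues) plus translation."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return pts + t
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    return pts @ R.T + t
```

Iterating this step with re-associated correspondences is the registration loop those dense RGB-D trackers run per frame; the point-to-plane metric converges faster than point-to-point on locally planar indoor scenes.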