2020
DOI: 10.3390/s20051511
SD-VIS: A Fast and Accurate Semi-Direct Monocular Visual-Inertial Simultaneous Localization and Mapping (SLAM)

Abstract: In practical applications, achieving a good balance between high accuracy and computational efficiency is the main challenge for simultaneous localization and mapping (SLAM). To address this challenge, we propose SD-VIS, a novel fast and accurate semi-direct visual-inertial SLAM framework that estimates camera motion and the structure of the surrounding sparse scene. In the initialization procedure, we align the pre-integrated IMU measurements with the visual images and calibrate the metric scale, initi…
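
The scale-calibration step the abstract sketches (aligning the up-to-scale monocular reconstruction with the metric, pre-integrated IMU trajectory) can be illustrated with a minimal least-squares sketch. This is not the paper's implementation: it assumes gravity, initial velocities and IMU biases have already been estimated, that both trajectories are expressed in the same frame, and that only the scalar scale factor remains; the function and variable names are hypothetical.

```python
import numpy as np

def estimate_metric_scale(vis_positions, imu_positions):
    """Least-squares scale s minimizing || s * dp_vis - dp_imu ||^2 over
    consecutive keyframe displacements (a sketch, not SD-VIS itself).

    vis_positions : (N, 3) up-to-scale keyframe positions from monocular vision
    imu_positions : (N, 3) metric positions obtained from pre-integrated IMU
                    measurements, assumed expressed in the same world frame
    """
    dp_vis = np.diff(vis_positions, axis=0)   # up-to-scale displacements
    dp_imu = np.diff(imu_positions, axis=0)   # metric displacements
    # Closed-form solution of the scalar linear least-squares problem.
    return float(np.sum(dp_vis * dp_imu) / np.sum(dp_vis * dp_vis))
```

In a full visual-inertial initialization the scale is usually estimated jointly with gravity, velocities and biases in one linear system; this sketch isolates only the scale term for clarity.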

Cited by 8 publications (3 citation statements)
References: 30 publications
“…Traditional visual SLAM can be divided into two classes: feature-based methods and direct methods. Feature-based methods extract salient image features in each image, match them in successive frames using invariant feature descriptors, robustly recover camera poses and structure using epipolar geometry, and refine poses and structure by minimizing projection errors [4]. Despite their good performance over the past several years, these feature-based approaches remain very sensitive to noise and outliers and are time-consuming during feature extraction and matching.…”
Section: Introduction
confidence: 99%
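
The feature-based pipeline summarized in the statement above (extract salient features, match them across successive frames with invariant descriptors, and recover relative pose robustly via epipolar geometry) can be sketched in a few lines of OpenCV. This is a generic two-view illustration rather than SD-VIS; the choice of ORB features, brute-force Hamming matching and RANSAC, as well as the function name, are assumptions for the example, and the final refinement of poses and structure by minimizing projection errors (bundle adjustment) is omitted.

```python
import cv2
import numpy as np

def relative_pose_from_features(img1, img2, K):
    """Two-view feature-based relative pose: detect, match, epipolar geometry."""
    orb = cv2.ORB_create(2000)                    # salient feature extraction
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match binary descriptors between the two frames.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Epipolar geometry with RANSAC rejects outlier matches.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # Recover rotation R and unit-norm translation t between the views.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```

The sensitivity to noise and the cost of detection and matching that the quoted passage criticizes arise directly in the detect/match steps above, which is what semi-direct methods such as SD-VIS aim to reduce.
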
“…For SLAM, various systems and platforms have been introduced, such as Lidar systems [5], stereo cameras [6] and RGB-D cameras [7]. Several SLAM-based technologies can contribute to improved mapping accuracy, such as a framework integrating a Pseudo-GNSS/INS module with probabilistic SLAM [8], a 2D SLAM system using a low-cost Kinect sensor [9], prediction-based SLAM (P-SLAM) [10], a graph-based hierarchical SLAM framework [11], a semi-direct visual-inertial SLAM framework [12], and a CPU-only pipeline for SLAM [13]. Similar to traditional data fusion technology [14], SLAM with data fusion has also been developed, such as the fusion of RGB images and Lidar point clouds [15][16][17].…”
Section: Introduction
confidence: 99%
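
As a concrete illustration of the RGB image and Lidar point-cloud fusion mentioned at the end of the statement above, the geometric core is projecting Lidar points into the image plane with known extrinsics and intrinsics. The sketch below is generic and not taken from any of the cited systems; the calibration matrices R, t and K are assumed to be given, and the function name is hypothetical.

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Project 3-D Lidar points into an RGB image plane (pinhole model).

    points_lidar : (N, 3) points in the Lidar frame
    R, t         : Lidar-to-camera rotation (3x3) and translation (3,)
    K            : camera intrinsic matrix (3x3)
    Returns (M, 2) pixel coordinates of the points in front of the camera.
    """
    pts_cam = points_lidar @ R.T + t      # transform into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]  # keep points with positive depth
    uvw = pts_cam @ K.T                   # pinhole projection
    return uvw[:, :2] / uvw[:, 2:3]       # normalize by depth -> pixels
```

Once each Lidar point has a pixel coordinate, its depth can be attached to the corresponding RGB pixel (or the pixel color to the point), which is the typical starting point for RGB/Lidar fusion.
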
“…Thus, planar structure recognition, which can be formulated as the plane detection problem, has been an important research topic in computer vision for decades. The detected planes, which can be regarded as an abstracted form of the actual scene, contain rich high-level structural information and can benefit many other semantic analysis tasks, such as object detection [1], self-navigation [2], scene segmentation [3], SLAM [4,5], and robot self-localization [6,7,8]. For instance, a robot can better map its current environment with the plane detection result, which significantly reduces uncertainty in the mapping results and improves positioning accuracy.…”
Section: Introduction
confidence: 99%
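
The plane detection problem referred to in this statement can be illustrated with a minimal RANSAC plane fit over a point cloud. This is a textbook sketch rather than the method of any cited work; the function name, iteration count and inlier threshold are arbitrary choices for the example.

```python
import numpy as np

def ransac_plane(points, n_iters=200, inlier_thresh=0.02, rng=None):
    """Detect the dominant plane in an (N, 3) point cloud with basic RANSAC:
    sample 3 points, fit a plane, count points within inlier_thresh of it."""
    rng = np.random.default_rng() if rng is None else rng
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal = normal / norm
        d = -normal @ p0                     # plane: normal . x + d = 0
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < inlier_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers
```

A SLAM or localization system can then treat the fitted plane parameters (normal, d) as a compact landmark, which is the sense in which detected planes reduce mapping uncertainty in the quoted passage.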