2022
DOI: 10.1109/lra.2022.3142900

MSC-VO: Exploiting Manhattan and Structural Constraints for Visual Odometry

Cited by 20 publications (18 citation statements)
References 21 publications
“…Liu et al. (2022) proposed a perspective transformation model for camera patches that can be directly incorporated into the error function of the direct method or the feature point method, enhancing algorithm accuracy. Company-Corcoles et al. (2022) introduced new spatial constraints to object-level SLAM, including proportional constraints, symmetric texture constraints, and plane support constraints, and verified the positive effect of this approach on SLAM system performance using public datasets. In low-light environments, long exposures result in motion blur, while short exposures introduce image noise.…”
Section: Some Of the Latest Research In Visual Simultaneous Localizat... (mentioning)
confidence: 84%
“…Indirect methods use feature point extraction and matching (such as corners and edges) for localization and mapping (Campos et al., 2021; Company-Corcoles et al., 2022; Mur-Artal et al., 2015; Mur-Artal and Tardós, 2017). They first detect and extract distinctive feature points in the images and then estimate camera motion and scene depth by matching these feature points.…”
Section: Standard Procedures For Classic Simultaneous Localization An... (mentioning)
confidence: 99%
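For context on the indirect pipeline the statement above describes, here is a minimal two-frame sketch using OpenCV: ORB feature detection, descriptor matching, and essential-matrix decomposition to recover relative camera motion. It assumes a known intrinsic matrix K and is an illustrative front-end, not the implementation of any of the cited systems.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate relative camera motion between two grayscale frames
    with an indirect (feature-based) front-end."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching with cross-check gives tentative correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC essential-matrix estimation rejects outlier matches.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # Decompose E into rotation R and unit translation t
    # (translation scale is unobservable with a single monocular pair).
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```

With stereo or RGB-D input, the depth of the matched points can be recovered directly instead of up to scale, which is how such front-ends also estimate scene depth.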
“…Li et al. [21] proposed a framework that can robustly utilize the Manhattan world structure and a method to detect Manhattan Frames (MF) directly from planes, allowing the system to model the scene as a mixture of Manhattan Frames. Company-Corcoles et al. [22] combined points, lines, and the Manhattan axes of the scene, and used the reprojection errors of points and lines to optimize the camera pose through local map optimization. Li et al. [23] proposed a decoupling-refinement method based on points, lines, and planes, as well as the use of Manhattan relationships in an additional pose refinement module.…”
Section: Related Work (mentioning)
confidence: 99%
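The statement above refers to point and line reprojection errors driving the pose and local-map optimization. Below is a minimal NumPy sketch of the residuals commonly used in point-line systems; the pinhole projection, the point residual, and the point-to-line distance of the projected 3D endpoints against the observed 2D line are standard formulations and are assumptions here, not the exact terms optimized in MSC-VO.

```python
import numpy as np

def project(K, R, t, X_w):
    """Project a 3D world point into the image with pose (R, t) and intrinsics K."""
    X_c = R @ X_w + t                  # world frame -> camera frame
    u, v, w = K @ X_c                  # camera frame -> homogeneous pixel
    return np.array([u / w, v / w])

def point_residual(K, R, t, X_w, uv_obs):
    """Point reprojection error: projected pixel minus observed pixel."""
    return project(K, R, t, X_w) - uv_obs

def line_residual(K, R, t, P_w, Q_w, line_obs):
    """Line reprojection error: signed distances of the projected 3D endpoints
    P_w, Q_w to the observed 2D line, given as homogeneous coefficients
    (a, b, c) normalized so that a^2 + b^2 = 1."""
    res = []
    for X_w in (P_w, Q_w):
        u, v = project(K, R, t, X_w)
        res.append(line_obs @ np.array([u, v, 1.0]))
    return np.array(res)
```

In a local map optimization, these residuals for all visible points, lines, and keyframes would be stacked and minimized over poses and landmarks, typically with a robust kernel and a nonlinear least-squares solver such as g2o or Ceres.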
“…Therefore, complementary sensors are fused to improve the stability of the algorithm, for example fusing cameras with inertial measurement units (IMUs) to obtain visual-inertial odometry (VIO) [2]. In current VIO frameworks, nonlinear optimization and Kalman filtering are the two mainstream information-fusion approaches. Representative odometry methods based on nonlinear optimization include VINS [3] and MSC-VO [4]; representative Kalman-filter-based implementations include ROVIO [5] and MSCKF-VIO [6].…”
Section: Introduction (mentioning)
confidence: 99%