2020
DOI: 10.1109/lra.2020.3010457
ROVINS: Robust Omnidirectional Visual Inertial Navigation System

Cited by 29 publications (5 citation statements)
References 28 publications
“…The pose is estimated with a proposed multi-view P3P RANSAC-based algorithm, and online camera calibration is included in the optimization to overcome deformation and motion of the camera. They further extended their work to omnidirectional visual-inertial navigation to overcome the shortcomings that affect visual sensors, such as fast motion and sudden illumination changes [29]. Soft relative pose constraints from the inertial sensor are added to the pose optimization to handle blind motion estimation, and visual features in tracking are initialized from the velocity estimated in the prediction step.…”
Section: Related Work
Confidence: 99%
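The statement above refers to a multi-view P3P RANSAC-based pose estimator. As a minimal, generic sketch of the RANSAC pattern it relies on (all names here are hypothetical and the model-fitting step is a 1-D toy, not the authors' multi-view P3P solver):

```python
import random

def ransac(data, fit_model, count_inliers, min_samples, iters=100, seed=0):
    """Generic RANSAC loop: repeatedly fit a model to a random minimal
    sample and keep the hypothesis that explains the most data points."""
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(iters):
        sample = rng.sample(data, min_samples)  # minimal sample (e.g. 3 points for P3P)
        model = fit_model(sample)
        if model is None:
            continue
        inliers = count_inliers(model, data)
        if inliers > best_inliers:
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

# Toy 1-D stand-in for pose fitting: estimate an offset y = x + b despite outliers.
points = [(x, x + 2.0) for x in range(20)] + [(5, 40.0), (7, -30.0)]
model, inliers = ransac(
    points,
    fit_model=lambda s: s[0][1] - s[0][0],                        # b from one sample
    count_inliers=lambda b, d: sum(abs(y - x - b) < 0.5 for x, y in d),
    min_samples=1,
)
```

In the multi-view P3P setting, `fit_model` would solve for a camera pose from three 2D–3D correspondences and `count_inliers` would count reprojection-error inliers across all cameras; the surrounding loop is the same.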
“…A unique omnidirectional setup was proposed by Seok et al. [5] with four very large FoV cameras. The four overlapping image regions were treated as four stereo cameras, but the system did not fully take advantage of the camera setup to track features across the camera pairs.…”
Section: Related Work
Confidence: 99%
“…Furthermore, if the cameras are arranged to have overlapping Fields of View (FoV), a particular feature could potentially be tracked across the cameras for as long as it is visible in any camera. Multi-camera solutions have been presented in the past [4], [5], [6]; however, no prior proposal has studied cross-camera feature tracking.…”
Section: Introduction
Confidence: 99%
“…Visual-inertial odometry (VIO) is a technique that combines cameras and inertial measurement units (IMUs) to enhance motion tracking performance [1,2]. Cameras make it adaptable to various complex environments and scenarios.…”
Section: Introduction
Confidence: 99%
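The inertial side of the VIO combination mentioned above can be illustrated with a toy dead-reckoning predictor: integrating accelerometer samples forward to predict motion between camera frames (a 1-D hypothetical sketch; real VIO integrates 3-D rotation, gravity, and sensor biases):

```python
def integrate_imu(p, v, accel_samples, dt):
    """Toy 1-D IMU dead reckoning: semi-implicit Euler integration of
    acceleration into velocity and position over fixed time steps."""
    for a in accel_samples:
        v += a * dt   # update velocity from measured acceleration
        p += v * dt   # update position from the new velocity
    return p, v

# Predict motion over 1 s (10 samples at 10 Hz) with constant velocity 1 m/s.
p, v = integrate_imu(0.0, 1.0, [0.0] * 10, 0.1)
```

Such a prediction is what lets a VIO system initialize feature search windows before the next image arrives, as described in the first citation statement above.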