Abstract: In this paper, we study estimator inconsistency in Vision-aided Inertial Navigation Systems (VINS) from the standpoint of system observability. We postulate that a leading cause of inconsistency is the gain of spurious information along unobservable directions, resulting in smaller uncertainties, larger estimation errors, and possibly even divergence. We develop an Observability-Constrained VINS (OC-VINS), which explicitly enforces the unobservable directions of the system, hence preventing spurious information …
“…The optimal A*, as shown in [9], can be determined by solving its KKT optimality condition [3], whose solution is:…”
Section: Observability-Constrained EKF
Citation type: mentioning (confidence: 99%)
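The closed-form solution itself is truncated in this excerpt and is left elided above. For orientation only: in constrained problems of this type, minimizing ||A* - A||_F^2 subject to A*N = 0, where the columns of N span the unobservable directions, the KKT conditions yield the projection A* = A(I - N(N^T N)^{-1} N^T). A minimal NumPy sketch of that projection, with all names (constrain_jacobian, A, N) hypothetical rather than taken from the paper:

```python
import numpy as np

def constrain_jacobian(A, N):
    """Closed-form solution of  min ||A* - A||_F^2  s.t.  A* N = 0.

    The KKT conditions give A* = A (I - N (N^T N)^{-1} N^T), i.e. A
    projected onto the orthogonal complement of span(N), so the
    constrained Jacobian carries no information along the unobservable
    directions spanned by the columns of N. (Hypothetical sketch of the
    general technique, not the paper's exact expression.)
    """
    P = np.eye(N.shape[0]) - N @ np.linalg.solve(N.T @ N, N.T)
    return A @ P

# Quick check: the constrained Jacobian annihilates the unobservable directions.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 9))      # nominal Jacobian
N = rng.standard_normal((9, 3))      # basis of unobservable directions
assert np.allclose(constrain_jacobian(A, N) @ N, 0.0)
```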
“…position and orientation (pose) of a sensing platform within GPS-denied environments, vision-aided inertial navigation is one of the most established, primarily due to its high precision and low cost. During the past decade, VINS have been successfully applied to spacecraft [20], automotive [17], and personal localization [9], demonstrating real-time performance.…”
Abstract-In order to develop Vision-aided Inertial Navigation Systems (VINS) on mobile devices, such as cell phones and tablets, one needs to consider two important issues, both due to the commercial-grade underlying hardware: (i) the unknown and varying time offset between the camera and IMU clocks, and (ii) the rolling-shutter effect caused by CMOS sensors. Without appropriately modelling these effects and compensating for them online, the navigation accuracy will significantly degrade. In this work, we introduce a linear-complexity algorithm for fusing inertial measurements with time-misaligned, rolling-shutter images using a highly efficient and precise linear interpolation model. As a result, our algorithm achieves better accuracy and improved speed compared to existing methods. Finally, we validate the superiority of the proposed algorithm through simulations and real-time, online experiments on a cell phone.
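To make the interpolation idea concrete: every row of a rolling-shutter image is exposed at its own time, offset from the image timestamp by the row's readout delay and the camera-IMU clock offset, and the pose at that instant can be approximated by linearly interpolating between two stamped state estimates. A minimal sketch under these assumptions, with all names and numbers illustrative rather than the paper's actual model:

```python
import numpy as np

def interpolate_pose(t, t0, p0, q0, t1, p1, q1):
    """Pose at time t by linear interpolation between two stamped poses.

    Positions are interpolated linearly; quaternions use a normalized
    lerp, which is accurate for the small rotations between consecutive
    high-rate state estimates. (Illustrative sketch only.)
    """
    lam = (t - t0) / (t1 - t0)
    p = (1.0 - lam) * p0 + lam * p1       # interpolated position
    q = (1.0 - lam) * q0 + lam * q1       # quaternion lerp ...
    return p, q / np.linalg.norm(q)       # ... renormalized to unit length

# A row's exposure time: image timestamp + camera-IMU time offset + readout delay.
t_img, t_offset, line_dt, row = 10.0, 0.002, 1e-4, 50   # illustrative values
t_row = t_img + t_offset + row * line_dt
p_row, q_row = interpolate_pose(
    t_row, 10.0, np.zeros(3), np.array([0.0, 0.0, 0.0, 1.0]),
    10.01, np.array([0.1, 0.0, 0.0]), np.array([0.0, 0.0, 0.05, 0.999]),
)
```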
“…In this system, we perform the observability analysis and show that while the key results of the previous observability analyses (e.g., [8,13,15,16]) are valid (the robot's global position and its orientation around the normal of the plane are unobservable), by constraining visual observations to be on a horizontal plane, the orthogonal translation of the camera with respect to the plane becomes observable. More specifically, we prove that by observing unknown feature points on a horizontal plane, the navigation system has only three unobservable directions corresponding to the global translations parallel to the plane, and the rotation around the gravity vector.…”
In this paper, we address the problem of ego-motion estimation by fusing visual and inertial information. The hardware consists of an inertial measurement unit (IMU) and a monocular camera. The camera provides visual observations in the form of features on a horizontal plane. By incorporating the geometric constraint that the features lie on this plane into the visual and inertial data, we propose a novel closed-form measurement model for this system. Our first contribution in this paper is an observability analysis of the proposed planar-based visual inertial navigation system (VINS). In particular, we prove that the system has only three unobservable states, corresponding to global translations parallel to the plane and rotation around the gravity vector. Hence, compared to general VINS, an advantage of using features on the horizontal plane is that the vertical translation along the normal of the plane becomes observable. As the second contribution, we present a state-space formulation for pose estimation in the analyzed system and solve it via a modified unscented Kalman filter (UKF). Finally, the findings of the theoretical analysis and the 6-DoF motion estimation are validated through simulations as well as experimental data.
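Claims of this kind can be sanity-checked numerically: stack the linearized measurement matrices along a trajectory into a local observability matrix and count the dimension of its nullspace, which should match the number of unobservable directions (three for the planar-feature system above). A hedged sketch of that check, where the toy system is only a placeholder with the same count, not the paper's VINS model:

```python
import numpy as np

def unobservable_dim(Phis, Hs):
    """Nullspace dimension of the stacked local observability matrix.

    Stacks H_k @ Phi_{k-1} @ ... @ Phi_0 over a window of steps; the
    nullspace dimension of the result counts the locally unobservable
    directions of the linearized system.
    """
    n = Phis[0].shape[0]
    rows, Phi_prod = [], np.eye(n)
    for Phi, H in zip(Phis, Hs):
        rows.append(H @ Phi_prod)
        Phi_prod = Phi @ Phi_prod
    M = np.vstack(rows)
    return M.shape[1] - np.linalg.matrix_rank(M)

# Toy 4-state system observing only its first state: 3 unobservable directions,
# the same count as the planar-feature VINS analyzed above.
Phi, H = np.eye(4), np.array([[1.0, 0.0, 0.0, 0.0]])
print(unobservable_dim([Phi] * 5, [H] * 5))   # -> 3
```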
“…The rich representation of a scene captured in an image, together with the accurate short-term estimates by gyroscopes and accelerometers present in a typical IMU, have been acknowledged to complement each other, with great use in airborne [6,20] and automotive [14] navigation. Moreover, with the availability of these sensors in most smartphones, there is great interest and research activity in effective solutions to visual-inertial SLAM.…”
Abstract-The fusion of visual and inertial cues has become popular in robotics due to the complementary nature of the two sensing modalities. While most fusion strategies to date rely on filtering schemes, the visual robotics community has recently turned to non-linear optimization approaches for tasks such as visual Simultaneous Localization And Mapping (SLAM), following the discovery that this comes with significant advantages in quality of performance and computational complexity. Following this trend, we present a novel approach to tightly integrate visual measurements with readings from an Inertial Measurement Unit (IMU) in SLAM. An IMU error term is integrated with the landmark reprojection error in a fully probabilistic manner, resulting in a joint non-linear cost function to be optimized. Employing the powerful concept of 'keyframes', we partially marginalize old states to maintain a bounded-size optimization window, ensuring real-time operation. Comparing against both vision-only and loosely-coupled visual-inertial algorithms, our experiments confirm the benefits of tight fusion in terms of accuracy and robustness.
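The structure of that joint objective can be sketched schematically: weighted landmark reprojection residuals and IMU error terms between consecutive (key)frames are summed into one nonlinear least-squares cost, rather than fused in a filter. A minimal sketch, with residual dimensions and weights illustrative and not taken from the paper:

```python
import numpy as np

def joint_cost(visual_residuals, imu_residuals, W_vis, W_imu):
    """Schematic tightly-coupled visual-inertial objective.

    J(x) = sum_i e_i^T W_vis e_i + sum_k s_k^T W_imu s_k, combining
    landmark reprojection errors e_i with IMU error terms s_k in a
    single nonlinear least-squares problem. (Illustrative sketch, not
    the paper's implementation.)
    """
    J = sum(e @ W_vis @ e for e in visual_residuals)
    J += sum(s @ W_imu @ s for s in imu_residuals)
    return J

# Illustrative call: 2-D reprojection residuals and a 15-D IMU residual
# (orientation, velocity, position, and two bias blocks), unit weights.
e_vis = [np.array([0.5, -0.2]), np.array([0.1, 0.3])]
e_imu = [np.zeros(15)]
print(joint_cost(e_vis, e_imu, np.eye(2), np.eye(15)))
```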