Abstract: In this paper, we study estimator inconsistency in vision-aided inertial navigation systems (VINS) from the standpoint of the system's observability. We postulate that a leading cause of inconsistency is the gain of spurious information along unobservable directions, which results in smaller uncertainties, larger estimation errors, and divergence. We develop an observability-constrained VINS (OC-VINS), which explicitly enforces the unobservable directions of the system, hence preventing spurious information gain…
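The mechanism behind this spurious information gain can be made concrete with the standard EKF covariance update (standard notation, not taken from the paper itself):

\[
K_k = P_{k|k-1} H_k^\top \left(H_k P_{k|k-1} H_k^\top + R_k\right)^{-1}, \qquad
P_{k|k} = \left(I - K_k H_k\right) P_{k|k-1}.
\]

If \(n\) spans a direction that is unobservable for the true nonlinear system, but the linearized Jacobian satisfies \(H_k P_{k|k-1} n \neq 0\), the update reduces \(n^\top P_{k|k} n\): the filter becomes overconfident along \(n\) even though the measurements carry no information about that direction.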
“…The presented VINS observability analyses in [8][9][10][16][19][38] are among the most recent related works, which specifically study observability properties of the INS state variables for motion estimation in unknown environments. For instance, the analyses in [8,16] result in four unobservable directions, corresponding to global translations and global rotation about the gravity vector.…”
Section: VINS Observability Analysis (mentioning)
confidence: 99%
“…9a for the positions along the x and y axes and the heading, the estimation uncertainties are decreasing in some regions. This behavior, known as estimator inconsistency, has been recently studied in [8,10,16] for a VINS. This estimator inconsistency might be explained by considering that the filter employs a linearized state-space model, where the unobservable subspace of the obtained system has a lower dimension than the unobservable subspace of the underlying nonlinear system [16].…”
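For context, the dimension argument in this excerpt can be stated with the local observability matrix of the linearized system (standard notation; the expression below is illustrative and not taken from the quoted source):

\[
\mathcal{M} = \begin{bmatrix} H_1 \\ H_2 \Phi_1 \\ H_3 \Phi_2 \Phi_1 \\ \vdots \end{bmatrix},
\]

whose nullspace is the unobservable subspace of the linearized system. When each \(H_k\) and \(\Phi_k\) is evaluated at a different state estimate, as in a standard EKF, directions that are unobservable for the underlying nonlinear system (e.g., rotation about gravity) generally no longer lie in the nullspace of \(\mathcal{M}\), so the estimator treats them as observable and gains spurious information along them.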
In this paper, we address the problem of ego-motion estimation by fusing visual and inertial information. The hardware consists of an inertial measurement unit (IMU) and a monocular camera. The camera provides visual observations in the form of features on a horizontal plane. By incorporating the geometric constraint that the features lie on this plane into the fusion of visual and inertial data, we propose a novel closed-form measurement model for this system. Our first contribution in this paper is an observability analysis of the proposed planar-based visual-inertial navigation system (VINS). In particular, we prove that the system has only three unobservable directions, corresponding to global translations parallel to the plane and rotation about the gravity vector. Hence, compared to a general VINS, an advantage of using features on the horizontal plane is that the vertical translation along the normal of the plane becomes observable. As the second contribution, we present a state-space formulation for pose estimation in the analyzed system and solve it via a modified unscented Kalman filter (UKF). Finally, the findings of the theoretical analysis and the 6-DoF motion estimation are validated in simulations as well as with experimental data.
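As an illustration of the kind of planar constraint described above, the sketch below intersects a camera bearing ray with the horizontal plane z = 0 and re-projects the resulting point through a pinhole model. The frame conventions, function names, and simple pinhole form are assumptions for the example, not the paper's actual closed-form measurement model.

```python
import numpy as np

def planar_feature_position(R_GC, p_GC, bearing_C):
    """Intersect a camera bearing ray with the horizontal plane z = 0.

    R_GC      : camera orientation (3x3 rotation, camera frame -> global frame).
    p_GC      : camera position in the global frame, shape (3,).
    bearing_C : unit vector toward the feature, expressed in the camera frame.

    Because the feature is constrained to the plane, a single bearing
    measurement fixes its full 3D position; the camera height p_GC[2] sets
    the depth, which is why vertical translation becomes observable here.
    """
    dir_G = R_GC @ bearing_C                  # ray direction in the global frame
    if abs(dir_G[2]) < 1e-9:
        raise ValueError("ray is (nearly) parallel to the plane")
    depth = -p_GC[2] / dir_G[2]               # distance along the ray to z = 0
    return p_GC + depth * dir_G               # feature position on the plane


def pinhole_measurement(R_GC, p_GC, p_Gf, K):
    """Standard pinhole projection of a global point p_Gf into pixel coordinates."""
    p_Cf = R_GC.T @ (p_Gf - p_GC)             # point expressed in the camera frame
    uv = K @ (p_Cf / p_Cf[2])                 # normalize by depth, apply intrinsics
    return uv[:2]
```

In a filter, a projection of this kind would serve as the measurement function whose sigma points a UKF propagates, with the plane constraint entering through the ray-plane intersection.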
“…In this section, we will show the methodology to address this issue by employing the OC-EKF proposed in [10].…”
Section: Observability-Constrained EKF (mentioning)
confidence: 99%
“…This means the estimator gains spurious information along unobservable directions and becomes inconsistent. To address this problem, the OC-EKF [10] enforces (28) by modifying the state transition and measurement Jacobian matrices according to the following two observability constraints:…”
Section: Observability-Constrained EKF (mentioning)
confidence: 99%
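The two constraints referred to in the excerpt above (equation (28) itself is not reproduced here) take the following form in the OC-EKF literature, with N_k a basis of the unobservable subspace, Φ_k the state-transition matrix, and H_k the measurement Jacobian:

\[
\mathbf{N}_{k+1} = \boldsymbol{\Phi}_k \mathbf{N}_k, \qquad \mathbf{H}_k \mathbf{N}_k = \mathbf{0}.
\]

The Jacobians actually used by the filter are modified (for example, by projecting out their components along \(\mathbf{N}_k\)) so that both conditions hold at every step, which prevents the linearized system from appearing observable along directions that the true system cannot observe.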
“…(a) System Unobservable Directions: In [10], it is shown that the inertial navigation system aided by a time-aligned global-shutter camera has four unobservable directions: one corresponding to rotations about the gravity vector, and three to global translations. Specifically, the system's unobservable directions with respect to the IMU pose and feature position,…”
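For reference, these four directions are commonly written as the columns of a matrix of the following form, using the error-state ordering [orientation, gyro bias, velocity, accel bias, position, feature]. Sign conventions and state orderings vary between papers, so this should be read as an illustration rather than as the exact matrix from [10]:

\[
\mathbf{N} =
\begin{bmatrix}
\mathbf{0}_{3} & \mathbf{C}(\bar{q})\,\mathbf{g} \\
\mathbf{0}_{3} & \mathbf{0}_{3\times 1} \\
\mathbf{0}_{3} & -\lfloor \mathbf{v} \times \rfloor \mathbf{g} \\
\mathbf{0}_{3} & \mathbf{0}_{3\times 1} \\
\mathbf{I}_{3} & -\lfloor \mathbf{p} \times \rfloor \mathbf{g} \\
\mathbf{I}_{3} & -\lfloor \mathbf{p}_f \times \rfloor \mathbf{g}
\end{bmatrix},
\]

where the first block of three columns spans the global translations and the last column spans rotation about the gravity vector \(\mathbf{g}\).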
Abstract: In order to develop Vision-aided Inertial Navigation Systems (VINS) on mobile devices, such as cell phones and tablets, one needs to consider two important issues, both due to the commercial-grade underlying hardware: (i) the unknown and varying time offset between the camera and IMU clocks, and (ii) the rolling-shutter effect caused by CMOS sensors. Without appropriately modelling these effects and compensating for them online, the navigation accuracy will significantly degrade. In this work, we introduce a linear-complexity algorithm for fusing inertial measurements with time-misaligned, rolling-shutter images using a highly efficient and precise linear interpolation model. As a result, our algorithm achieves better accuracy and improved speed compared to existing methods. Finally, we validate the superiority of the proposed algorithm through simulations and real-time, online experiments on a cell phone.
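A minimal sketch of the kind of interpolation model described above, assuming a constant per-row readout time and an estimated camera-IMU time offset t_d; the helper names (row_timestamp, interpolate_pose, _exp_so3, _log_so3) and the simple position/rotation interpolation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def _exp_so3(w):
    """Rotation matrix from a rotation vector (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def _log_so3(R):
    """Rotation vector from a rotation matrix (inverse of _exp_so3)."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def row_timestamp(t_image, row, n_rows, t_readout, t_d):
    """Effective capture time of one image row under rolling shutter + time offset."""
    return t_image + t_d + (row / n_rows) * t_readout

def interpolate_pose(t, t0, p0, R0, t1, p1, R1):
    """Linearly interpolate position and orientation between two poses at time t."""
    a = (t - t0) / (t1 - t0)
    p = (1.0 - a) * p0 + a * p1
    R = R0 @ _exp_so3(a * _log_so3(R0.T @ R1))   # interpolate the relative rotation
    return p, R
```

Each feature measurement is then expressed with respect to the interpolated pose at its row's timestamp rather than cloning a full state per row, which is what keeps the complexity linear.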
Robot localization is the process of determining where a mobile robot is located with respect to its environment. Localization is one of the most fundamental competencies required by an autonomous robot, as knowledge of the robot's own location is an essential precursor to making decisions about future actions. In a typical robot localization scenario, a map of the environment is available and the robot is equipped with sensors that observe the environment as well as monitor its own motion. The localization problem then becomes one of estimating the robot position and orientation within the map using information gathered from these sensors. Robot localization techniques need to be able to deal with noisy observations and generate not only an estimate of the robot location but also a measure of the uncertainty of the location estimate. This article provides an introduction to estimation-theoretic solutions to the robot localization problem. It begins by discussing the mathematical models used to describe the robot motion and the observations from the sensors. Two of the most common probabilistic techniques that can be used to combine information from sensors to compute an estimate of the robot location, the extended Kalman filter and the particle filter, are then discussed in detail and illustrated by simple examples. A brief summary of the large body of literature on robot localization is presented next. Appendices that present the essential mathematical background and alternative techniques are provided. The MATLAB code of the localization algorithms is also available.
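To make the discussion concrete, here is a compact sketch of one EKF localization step for a planar robot with a range-bearing sensor and a known landmark map. The unicycle motion model and range-bearing measurement model are the textbook forms, chosen for illustration; the function and variable names are not taken from the article or its MATLAB code.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def ekf_predict(x, P, u, dt, Q):
    """Unicycle motion model: state x = [px, py, theta], control u = [v, omega]."""
    px, py, th = x
    v, w = u
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       wrap(th + w * dt)])
    F = np.array([[1, 0, -v * dt * np.sin(th)],     # Jacobian of the motion model
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0,  1]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, landmark, R):
    """Range-bearing measurement z = [r, phi] to a landmark at a known map position."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    q = dx**2 + dy**2
    z_hat = np.array([np.sqrt(q), wrap(np.arctan2(dy, dx) - x[2])])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0],   # measurement Jacobian
                  [ dy / q,          -dx / q,          -1]])
    y = z - z_hat
    y[1] = wrap(y[1])                                  # keep the bearing residual wrapped
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ y
    x_new[2] = wrap(x_new[2])
    return x_new, (np.eye(3) - K @ H) @ P
```

A particle filter would replace the Gaussian (x, P) with a weighted set of pose samples, propagating each sample through the same motion model and weighting it by the likelihood of the range-bearing measurement.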