2007
DOI: 10.1109/robot.2007.364024
A Multi-State Constraint Kalman Filter for Vision-aided Inertial Navigation

Abstract: In this paper, we present an Extended Kalman Filter (EKF)-based algorithm for real-time vision-aided inertial navigation. The primary contribution of this work is the derivation of a measurement model that is able to express the geometric constraints that arise when a static feature is observed from multiple camera poses. This measurement model does not require including the 3D feature position in the state vector of the EKF and is optimal, up to linearization errors. The vision-aided inertial navigation algor…
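The measurement model described in the abstract avoids putting the 3D feature in the EKF state by projecting the multi-view reprojection residual onto the left null space of the feature Jacobian. A minimal numerical sketch of that projection step follows; the function name and QR-based null-space construction are illustrative assumptions, not the paper's own code.

```python
import numpy as np

def marginalize_feature(r, H_x, H_f):
    """Project the stacked residual r and state Jacobian H_x onto the
    left null space of the feature Jacobian H_f, so the update no
    longer involves the 3D feature position (the core MSCKF idea).

    r   : (n,)   stacked reprojection residual over all observing poses
    H_x : (n, d) Jacobian with respect to the filter state (IMU + pose clones)
    H_f : (n, 3) Jacobian with respect to the feature's 3D position
    """
    # Full QR of H_f: the trailing columns of Q span the left null space.
    Q, _ = np.linalg.qr(H_f, mode='complete')
    A = Q[:, H_f.shape[1]:]          # (n, n-3) basis of the left null space
    return A.T @ r, A.T @ H_x        # r0 = A^T r,  H0 = A^T H_x
```

The projected residual `r0` and Jacobian `H0` can then be fed to a standard EKF update; because `A.T @ H_f` vanishes, the feature position drops out of the linearized measurement equation.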

Cited by 1,376 publications (1,134 citation statements)
References 23 publications
“…However, we do not store OFs in the map. Instead, all OFs are processed and marginalized on-the-fly using the MSC-KF approach [25] (see Sect. 3.2).…”
Section: System State and Propagation Model
confidence: 99%
“…Finally, for OFs, we employ the MSC-KF approach [25] to impose a pose update constraining all the views from which the feature was seen. To accomplish this, we utilize stochastic cloning [29] over a window of m camera poses.…”
Section: Measurement Update Model
confidence: 99%
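The stochastic cloning described in the statement above keeps a sliding window of m camera poses so that a feature seen from several views can constrain all of them in one update. A minimal sketch of such a bounded clone buffer (the `PoseWindow` class and its method names are hypothetical, not from the cited implementations):

```python
from collections import deque

class PoseWindow:
    """Sliding window of stochastically cloned camera poses.

    Each incoming image clones the current pose estimate into a buffer
    bounded at m entries; once full, the oldest clone is marginalized
    out (here simply dropped) to keep the state size constant.
    """
    def __init__(self, m):
        self.clones = deque(maxlen=m)  # deque evicts the oldest clone itself

    def on_image(self, current_pose):
        self.clones.append(current_pose)  # stochastic clone of the pose
        return list(self.clones)          # poses available for a joint update
```

In a real filter each clone would carry its cross-covariance with the rest of the state; this sketch shows only the windowing logic.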
“…Visual-inertial odometry with a translational error in the range of 1% of the distance traveled (similar to [29]) was used to construct the maps. Keypoints were detected using a Difference of Gaussians (DoG) keypoint detector.…”
Section: Methods
confidence: 99%
“…The rich representation of a scene captured in an image, together with the accurate short-term estimates by gyroscopes and accelerometers present in a typical IMU have been acknowledged to complement each other, with great uses in airborne [6,20] and automotive [14] navigation. Moreover, with the availability of these sensors in most smart phones, there is great interest and research activity in effective solutions to visual-inertial SLAM.…”
Section: Introduction
confidence: 99%