2009
DOI: 10.1007/978-3-642-00196-3_59

Fast Relative Pose Calibration for Visual and Inertial Sensors

Abstract: Accurate vision-aided inertial navigation depends on proper calibration of the relative pose of the camera and the inertial measurement unit (IMU). Calibration errors introduce bias in the overall motion estimate, degrading navigation performance, sometimes dramatically. However, existing camera-IMU calibration techniques are difficult, time-consuming and often require additional complex apparatus. In this paper, we formulate the camera-IMU relative pose calibration problem in a filtering framework, and propos…
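To make the filtering formulation concrete, below is a minimal illustrative sketch (our own, not the authors' code) of the kind of measurement model such a filter can use: a known landmark is projected into the camera through the current IMU pose estimate and the unknown camera-to-IMU transform, so that image residuals constrain the calibration states. All frame conventions, function names, and intrinsics here are assumptions.

```python
# Hypothetical sketch (not the paper's implementation): measurement model
# for camera-IMU relative pose calibration in a filtering framework.
# The filter state includes the IMU pose in the world frame plus the
# unknown camera pose in the IMU frame (p_i_c, q_i_c).
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion [w, x, y, z]."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def project_landmark(p_w_i, q_w_i, p_i_c, q_i_c, landmark_w, fx=500.0, fy=500.0):
    """Predict the pixel location of a known world-frame landmark.

    p_w_i, q_w_i : IMU position/orientation in the world frame (filter state)
    p_i_c, q_i_c : camera pose in the IMU frame (the calibration unknowns)
    fx, fy       : assumed pinhole intrinsics
    """
    R_w_i = quat_to_rot(q_w_i)
    R_i_c = quat_to_rot(q_i_c)
    # Landmark expressed in the IMU frame, then in the camera frame.
    l_i = R_w_i.T @ (landmark_w - p_w_i)
    l_c = R_i_c.T @ (l_i - p_i_c)
    # Pinhole projection of the camera-frame point.
    return np.array([fx * l_c[0] / l_c[2], fy * l_c[1] / l_c[2]])
```

In a sigma-point filter, a prediction like this would be evaluated at each sigma point and compared with the detected image locations of the landmarks.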

Cited by 36 publications (22 citation statements); references 10 publications.
“…Because hundreds of features are tracked, and lost, at each time instant, the complexity of the map grows rapidly, and therefore visual features must be organized efficiently to enable rapid localization. Many in the simultaneous localization and mapping (SLAM) community have addressed this issue, for instance (Bosse et al., 2004; Eade and Drummond, 2007a; Guivant and Nebot, 2001; Klein and Murray, 2007; Konolige and Agrawal, 2008; Mouragnon et al., 2006; Nebot and Durrant-Whyte, 1999; Kelly and Sukhatme, 2009; Chum et al., 2009), to mention just a few. In Section 5.2 we describe our own topological representation, based on the notion of "locations" defined by co-visibility.…”
Section: Map Building and Localization (mentioning)
Confidence: 99%
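As a toy illustration of the co-visibility idea in the snippet above (a sketch under our own assumptions, not the cited authors' implementation), keyframes can be linked whenever they share a minimum number of tracked features, and connected components of the resulting graph then serve as candidate "locations":

```python
# Illustrative sketch: group keyframes into "locations" by feature
# co-visibility. observations maps frame_id -> set of feature ids.
from collections import defaultdict

def covisibility_graph(observations, min_shared=15):
    """Link frames that observe at least min_shared common features."""
    graph = defaultdict(set)
    frames = list(observations)
    for i, a in enumerate(frames):
        for b in frames[i + 1:]:
            if len(observations[a] & observations[b]) >= min_shared:
                graph[a].add(b)
                graph[b].add(a)
    return graph

def locations(graph, frames):
    """Connected components of the co-visibility graph = candidate locations."""
    seen, comps = set(), []
    for f in frames:
        if f in seen:
            continue
        stack, comp = [f], set()
        while stack:
            g = stack.pop()
            if g in seen:
                continue
            seen.add(g)
            comp.add(g)
            stack.extend(graph[g])
        comps.append(comp)
    return comps
```

For example, `locations(covisibility_graph(obs), list(obs))` partitions the keyframes into co-visible groups that can index the map for rapid localization.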
“…Apart from requiring fewer parameters when fusing measurements at significantly different rates, such as images and inertial data, this approach allows for accurate estimation of the fixed time delay between camera and IMU. Like other frameworks [9], [10], the calibration procedure requires waving the setup in front of a checkerboard while sufficiently exciting all rotational degrees of freedom, in order to make the camera-IMU displacement well observable. We also experimented with incorporating the stereo extrinsics calibration directly into the unified calibration framework, but observed degraded performance when the result was used in visual-inertial SLAM; a possible explanation is that the setups for calibration and SLAM differ (mostly in scene depth).…”
Section: Calibration (mentioning)
Confidence: 99%
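The time-delay estimation mentioned above can be pictured with a small hypothetical sketch: if the offset t_d between the camera and IMU clocks is kept as a filter state, the predicted measurement simply samples the IMU trajectory at the shifted timestamp. Names and signatures below are illustrative, not from the cited work.

```python
# Hypothetical sketch: the camera-IMU time offset t_d as a filter state.
# imu_stamps/imu_positions form an interpolatable IMU trajectory; project
# stands in for whatever camera measurement model the filter uses.
import numpy as np

def imu_position_at(t, imu_stamps, imu_positions):
    """Linearly interpolate the IMU position at time t (per axis)."""
    return np.array([np.interp(t, imu_stamps, imu_positions[:, k])
                     for k in range(imu_positions.shape[1])])

def predict_image_measurement(t_image, t_d, imu_stamps, imu_positions, project):
    """Predict a camera measurement stamped t_image, assuming the camera
    clock lags the IMU clock by the current delay estimate t_d."""
    p = imu_position_at(t_image + t_d, imu_stamps, imu_positions)
    return project(p)
```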
“…Kelly and Sukhatme present a UKF-based calibration algorithm that can rely on artificial or natural landmarks [8], [4]. The idea is to allow a robot to explore the environment while simultaneously calibrating its sensors.…”
Section: Related Work (mentioning)
Confidence: 99%
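For readers unfamiliar with the unscented Kalman filter (UKF) referenced here, the following is a generic sigma-point propagation sketch using the standard Wan/van der Merwe scaling; it illustrates the mechanism only and is not the cited implementation.

```python
# Generic unscented-transform sketch: propagate a Gaussian (mu, P)
# through a nonlinear function f via sigma points.
import numpy as np

def sigma_points(mu, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate 2n+1 sigma points and their mean/covariance weights."""
    n = mu.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)  # matrix square root
    pts = [mu] + [mu + S[:, i] for i in range(n)] + [mu - S[:, i] for i in range(n)]
    w_m = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w_c = w_m.copy()
    w_m[0] = lam / (n + lam)
    w_c[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return np.array(pts), w_m, w_c

def unscented_propagate(mu, P, f):
    """Push (mu, P) through f and recover the transformed mean/covariance."""
    pts, w_m, w_c = sigma_points(mu, P)
    ys = np.array([f(p) for p in pts])
    y_mean = w_m @ ys
    diff = ys - y_mean
    y_cov = (w_c[:, None] * diff).T @ diff
    return y_mean, y_cov
```

In a UKF-based calibration, f would be the process or measurement model (for example, a landmark projection), letting the filter update the relative pose states without analytic Jacobians.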