In recent years, microelectromechanical system (MEMS) inertial sensors (3D accelerometers and 3D gyroscopes) have become widely available due to their small size and low cost. Inertial sensor measurements are obtained at high sampling rates and can be integrated to obtain position and orientation information. These estimates are accurate on a short time scale, but suffer from integration drift over longer time scales. To overcome this issue, inertial sensors are typically combined with additional sensors and models. In this tutorial we focus on the signal processing aspects of position and orientation estimation using inertial sensors. We discuss different modeling choices and a selected number of important algorithms. The algorithms include optimization-based smoothing and filtering as well as computationally cheaper extended Kalman filter and complementary filter implementations. The quality of their estimates is illustrated using both experimental and simulated data.
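As a hedged illustration of the integration drift mentioned in this abstract (our own sketch, not code from the tutorial; it assumes a scalar-first unit quaternion and simple Euler integration), the snippet below propagates an orientation estimate from gyroscope samples and shows how even a small constant rate error accumulates over time.

```python
import numpy as np

def integrate_gyro(q, gyro, dt):
    """One strapdown time update: propagate a scalar-first unit quaternion
    with a 3D gyroscope sample [rad/s] using Euler integration."""
    wx, wy, wz = gyro
    omega = np.array([[0.0, -wx, -wy, -wz],
                      [ wx, 0.0,  wz, -wy],
                      [ wy, -wz, 0.0,  wx],
                      [ wz,  wy, -wx, 0.0]])
    q = q + 0.5 * dt * omega @ q
    return q / np.linalg.norm(q)

# Illustration: a constant gyroscope error of 0.01 rad/s, integrated for 60 s
# at 100 Hz, accumulates into roughly 0.6 rad of orientation error -- the kind
# of drift that additional sensors and models are used to correct.
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(6000):
    q = integrate_gyro(q, np.array([0.01, 0.0, 0.0]), 0.01)
print(2 * np.arccos(np.clip(q[0], -1.0, 1.0)))   # ~0.6 rad
```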
In this paper a comparison is made between four frequently encountered resampling algorithms for particle filters. A theoretical framework is introduced to be able to understand and explain the differences between the resampling algorithms. This facilitates a comparison of the algorithms with respect to their resampling quality and computational complexity. Using extensive Monte Carlo simulations the theoretical results are verified. It is found that systematic resampling is favourable, both in terms of resampling quality and computational complexity.
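For reference, a minimal sketch of the systematic resampling scheme found favourable here (our own illustrative implementation, not the authors' code): a single uniform offset is drawn and N equally spaced points are traced through the cumulative weight distribution.

```python
import numpy as np

def systematic_resampling(weights, rng=None):
    """Systematic resampling: draw one uniform offset and take N equally
    spaced points through the cumulative weight distribution; return the
    indices of the particles to replicate."""
    rng = rng or np.random.default_rng()
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0            # guard against round-off
    return np.searchsorted(cumulative, positions)

# Example: with one dominant weight, that particle is replicated 4-5 times
w = np.array([0.01, 0.01, 0.01, 0.96, 0.01])
print(systematic_resampling(w / w.sum()))
```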
An optimization-based approach to human body motion capture using inertial sensors. In Proceedings of the 19th IFAC World Congress, 2014, pp. 79-85. ISBN: 978-3-902823-62-
Abstract: In inertial human motion capture, a multitude of body segments are equipped with inertial measurement units, consisting of 3D accelerometers, 3D gyroscopes and 3D magnetometers. Relative position and orientation estimates can be obtained using the inertial data together with a biomechanical model. In this work we present an optimization-based solution to magnetometer-free inertial motion capture. It allows for natural inclusion of biomechanical constraints, for handling of nonlinearities and for using all data in obtaining an estimate. As a proof-of-concept we apply our algorithm to a lower body configuration, illustrating that the estimates are drift-free and match the joint angles from an optical reference system.
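The kind of constrained smoothing problem described can be sketched, in our own notation rather than the paper's, as a single optimization over all time steps with the biomechanical model entering as equality constraints:

```latex
\hat{x}_{1:N} = \arg\min_{x_{1:N}} \;
  \sum_{t=2}^{N} \| x_t - f(x_{t-1}, u_{t-1}) \|^2_{\Sigma_w^{-1}}
  + \sum_{t=1}^{N} \| y_t - h(x_t) \|^2_{\Sigma_e^{-1}}
  \quad \text{subject to} \quad c(x_t) = 0, \quad t = 1, \dots, N,
```

where x_t stacks the poses of all instrumented body segments at time t, u_t denotes the inertial measurements driving the dynamics f, y_t any remaining measurements with model h, and c(x_t) = 0 encodes the biomechanical model, e.g. that adjacent segments stay connected at their joint.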
Abstract-In this work we present an approach to combine measurements from inertial sensors (accelerometers and gyroscopes) with time of arrival measurements from an ultrawideband system for indoor positioning. Our algorithm uses a tightly-coupled sensor fusion approach, where we formulate the problem as a maximum a posteriori problem that is solved using an optimization approach. It is shown to lead to accurate 6D position and orientation estimates when compared to reference data from an independent optical tracking system. To be able to obtain position information from the ultrawideband measurements, it is imperative that accurate estimates of the ultrawideband receivers' positions and their clock offsets are available. Hence, we also present an easy-to-use algorithm to calibrate the ultrawideband system using a maximum likelihood formulation. Throughout this work, the ultrawideband measurements are modeled by a tailored heavy-tailed asymmetric distribution to account for measurement outliers. The heavy-tailed asymmetric distribution works well on experimental data, as shown by analyzing the position estimates obtained using the ultrawideband measurements via a novel multilateration approach.
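As a rough sketch of time-of-arrival multilateration (a simplified model of our own, not the paper's: the receiver positions and clock offsets are assumed already calibrated, there is no unknown transmit time, and a symmetric Huber loss stands in for the heavy-tailed asymmetric distribution), the position can be found by nonlinear least squares:

```python
import numpy as np
from scipy.optimize import least_squares

def multilaterate(receivers, toa, clock_offsets, c=299792458.0):
    """Estimate a 3D transmitter position from time-of-arrival measurements.
    receivers: (M, 3) receiver positions [m]      (assumed calibrated)
    toa: (M,) measured arrival times [s]
    clock_offsets: (M,) receiver clock offsets [s] (assumed calibrated)
    A Huber loss limits the influence of outlier measurements."""
    def residuals(p):
        predicted = np.linalg.norm(receivers - p, axis=1) / c + clock_offsets
        return toa - predicted
    p0 = receivers.mean(axis=0)          # initialize at the receiver centroid
    # f_scale ~ 1 ns (about 30 cm) is an illustrative inlier scale
    return least_squares(residuals, p0, loss="huber", f_scale=1e-9).x
```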
Abstract-In this paper we propose a 6DOF tracking system combining Ultra-Wideband measurements with low-cost MEMS inertial measurements. A tightly coupled system is developed which estimates the position as well as the orientation of the sensor unit while remaining reliable under multipath effects and non-line-of-sight (NLOS) conditions. The experimental results show robust and continuous tracking in a realistic indoor positioning scenario.
This paper is concerned with the problem of estimating the relative translation and orientation of an inertial measurement unit and a camera, which are rigidly connected. The key is to realize that this problem is in fact an instance of a standard problem within the area of system identification, referred to as a gray-box problem. We propose a new algorithm for estimating the relative translation and orientation, which does not require any additional hardware, except a piece of paper with a checkerboard pattern on it. The method is based on a physical model which can also be used in solving, for example, sensor fusion problems. The experimental results show that the method works well in practice, both for perspective and spherical cameras.
The problem of estimating and predicting position and orientation (pose) of a camera is approached by fusing measurements from inertial sensors (accelerometers and rate gyroscopes) and vision. The sensor fusion approach described in this contribution is based on nonlinear filtering of these complementary sensors. This way, accurate and robust pose estimates are available for the primary purpose of augmented reality applications, but with the secondary effect of reducing computation time and improving the performance in vision processing. A real-time implementation of a multi-rate extended Kalman filter is described, using a dynamic model with 22 states, where 12.5 Hz correspondences from vision and 100 Hz inertial measurements are processed. An example where an industrial robot is used to move the sensor unit is presented. The advantage with this configuration is that it provides ground truth for the pose, allowing for objective performance evaluation. The results show that we obtain an absolute accuracy of 2 cm in position and 1° in orientation.
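The multi-rate structure can be illustrated with the standard extended Kalman filter equations (generic textbook form with our own function names, not the paper's 22-state model): the time update runs for every inertial sample, and the vision update only when a correspondence arrives.

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """EKF time update: x <- f(x), P <- F P F' + Q (run at the IMU rate)."""
    return f(x), F @ P @ F.T + Q

def ekf_update(x, P, y, h, H, R):
    """EKF measurement update with measurement y, model h and Jacobian H
    (run only when a vision correspondence is available)."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (y - h(x))
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Multi-rate scheduling as described in the abstract: with 100 Hz inertial
# data and 12.5 Hz vision correspondences, ekf_predict runs for every IMU
# sample and ekf_update roughly for every 8th sample.
```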
Autonomous Simultaneous Localization and Mapping (SLAM) is an important topic in many engineering fields. Since stop-and-go systems are typically slow and full-kinematic systems may lack accuracy and integrity, this paper presents a novel hybrid "continuous stop-and-go" mobile mapping system called Scannect. A 3D terrestrial LiDAR system is integrated with a MEMS IMU and two Microsoft Kinect sensors to map indoor urban environments. The Kinects' depth maps were processed using a new point-to-plane ICP that minimizes the reprojection error of the infrared camera and projector pair in an implicit iterative extended Kalman filter (IEKF). A new formulation of the 5-point visual odometry method is tightly coupled in the implicit IEKF without increasing the dimensions of the state space. The Scannect can map and navigate in areas with textureless walls and provides an effective means for mapping large areas with lots of occlusions. Mapping long corridors (total travel distance of 120 m) took approximately 30 minutes and achieved a Mean Radial Spherical Error of 17 cm before smoothing or global optimization.
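For context, the classical linearized point-to-plane ICP step looks as follows (a textbook small-angle sketch of our own; the paper's variant instead minimizes the infrared camera/projector reprojection error inside an implicit iterated extended Kalman filter):

```python
import numpy as np

def point_to_plane_step(src, dst, normals):
    """One linearized point-to-plane ICP step (small-angle approximation).
    src: (N, 3) source points, dst: (N, 3) matched destination points,
    normals: (N, 3) unit surface normals at the destination points.
    Returns the 6-vector [rx, ry, rz, tx, ty, tz] minimizing
    sum_i ((R p_i + t - q_i) . n_i)^2 after linearizing R about identity."""
    A = np.hstack([np.cross(src, normals), normals])      # (N, 6) Jacobian
    b = -np.einsum("ij,ij->i", src - dst, normals)        # (N,) residuals
    delta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return delta
```

In practice this step is iterated: the estimated rotation and translation are applied to the source points, correspondences are re-established, and the linear problem is solved again until convergence.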