This paper presents visual-inertial datasets collected on board a micro aerial vehicle. The datasets contain synchronized stereo images, IMU measurements and accurate ground truth. The first batch of datasets facilitates the design and evaluation of visual-inertial localization algorithms on real flight data. It was collected in an industrial environment and contains millimeter-accurate position ground truth from a laser tracking system. The second batch of datasets is aimed at precise 3D environment reconstruction and was recorded in a room equipped with a motion capture system. These datasets contain 6D pose ground truth and a detailed 3D scan of the environment. Eleven datasets are provided in total, ranging from slow flights under good visual conditions to dynamic flights with motion blur and poor illumination, enabling researchers to thoroughly test and evaluate their algorithms. All datasets contain raw sensor measurements, spatio-temporally aligned sensor data and ground truth, extrinsic and intrinsic calibrations, and dedicated datasets for custom calibration.
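A minimal sketch of how such a dataset might be consumed, assuming a hypothetical layout with per-sensor CSV files; the file names, column order and helper functions here are illustrative assumptions, not the dataset's documented format:

    import csv

    def load_imu(path):
        # Read IMU rows as (timestamp_ns, gyro_xyz, accel_xyz) tuples.
        samples = []
        with open(path) as f:
            reader = csv.reader(f)
            next(reader)  # skip the header row
            for row in reader:
                t = int(row[0])
                gyro = tuple(float(v) for v in row[1:4])
                accel = tuple(float(v) for v in row[4:7])
                samples.append((t, gyro, accel))
        return samples

    def imu_between(samples, t0, t1):
        # IMU measurements falling between two consecutive image stamps,
        # the grouping a visual-inertial estimator typically needs.
        return [s for s in samples if t0 <= s[0] < t1]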
Robust, accurate pose estimation and mapping in six degrees of freedom in real time is a primary need of mobile robots, in particular flying Micro Aerial Vehicles (MAVs), which still perform their impressive maneuvers mostly in controlled environments. This work presents a visual-inertial sensor unit aimed at effortless deployment on robots in order to equip them with robust real-time Simultaneous Localization and Mapping (SLAM) capabilities, and to facilitate research on this important topic at a low entry barrier. Up to four cameras are interfaced through a modern ARM-FPGA system, along with an Inertial Measurement Unit (IMU) providing high-quality rate gyro and accelerometer measurements, calibrated and hardware-synchronized with the images. This facilitates a tight fusion of visual and inertial cues that leads to a level of robustness and accuracy which is difficult to achieve with purely visual SLAM systems. In addition to raw data, the sensor head provides FPGA-pre-processed data such as visual keypoints, reducing the computational complexity of SLAM algorithms significantly and enabling their deployment on resource-constrained platforms. Sensor selection, hardware and firmware design, as well as intrinsic and extrinsic calibration are addressed in this work. Results from a tightly coupled reference visual-inertial SLAM framework demonstrate the capabilities of the presented system.
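As a hedged illustration of what hardware synchronization buys: with a shared clock, each frame can be stamped at the middle of its exposure, so no software time-offset estimation between camera and IMU is needed. The function below is a generic sketch of that bookkeeping, not the unit's firmware interface:

    def mid_exposure_stamp(trigger_time_ns, exposure_ns):
        # Stamp the frame at mid-exposure, the instant that best
        # represents the captured scene for fusion with IMU samples.
        return trigger_time_ns + exposure_ns // 2

    # e.g. a frame triggered at t = 1_000_000_000 ns with a 5 ms
    # exposure is stamped at 1_002_500_000 ns.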
This paper investigates and demonstrates the potential for very long endurance autonomous aerial sensing and mapping applications with AtlantikSolar, a small-sized, hand-launchable, solar-powered fixed-wing unmanned aerial vehicle. The platform design as well as the on-board state estimation, control and path-planning algorithms are overviewed. A versatile sensor payload integrating a multi-camera sensing system, extended on-board processing and high-bandwidth communication with the ground is developed. Extensive field experiments are presented, including publicly demonstrated field trials for search-and-rescue applications and long-term mapping applications. An endurance analysis shows that AtlantikSolar can provide full-daylight operation and a minimum flight endurance of 8 hours throughout the whole year with its full multi-camera mapping payload. An open dataset with both raw and processed data is released to accompany this paper.
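Such an endurance analysis reduces to an energy balance between battery capacity, airframe and payload draw, and solar input. The sketch below shows only the form of the calculation; every number is a hypothetical placeholder, not AtlantikSolar's actual specification:

    # All values are illustrative placeholders, not real platform data.
    BATTERY_WH = 800.0       # usable battery energy
    P_FLIGHT_W = 60.0        # airframe power draw in level flight
    P_PAYLOAD_W = 20.0       # multi-camera mapping payload draw
    P_SOLAR_W = 30.0         # mean solar input on a worst-case winter day

    net_draw_w = P_FLIGHT_W + P_PAYLOAD_W - P_SOLAR_W
    endurance_h = BATTERY_WH / net_draw_w
    print(f"battery-limited endurance: {endurance_h:.1f} h")  # 16.0 h here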
This paper presents a method for shared control of a vehicle. The driver commands a preferred velocity, which is transformed into a collision-free local motion that respects the actuator constraints and allows for smooth and safe control. Collision-free local motions are achieved with an extension of velocity obstacles that takes into account dynamic constraints and a grid-based map representation. To limit the freedom of the driver, a global guidance trajectory can be included, which specifies the areas where the vehicle is allowed to drive at each time instant. The low computational complexity of the method makes it well suited for multi-agent settings and high update rates; both a centralized and a distributed algorithm are provided that allow for real-time control of tens of vehicles. Extensive experimental results with real robotic wheelchairs at relatively high speeds in tight scenarios are presented.
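A minimal feasibility check in the spirit of velocity obstacles, under simplifying assumptions (2D positions, a single obstacle with constant velocity, no dynamic constraints); this is a generic sketch, not the paper's exact formulation:

    import math

    def collision_free(v, p_self, p_obs, v_obs, radius, horizon, dt=0.1):
        # Reject a candidate velocity if, held constant over the horizon,
        # it brings the vehicle within a safety radius of the obstacle.
        t = 0.0
        while t <= horizon:
            dx = (p_self[0] + v[0] * t) - (p_obs[0] + v_obs[0] * t)
            dy = (p_self[1] + v[1] * t) - (p_obs[1] + v_obs[1] * t)
            if math.hypot(dx, dy) < radius:
                return False
            t += dt
        return True

    def pick_velocity(preferred, candidates, *vo_args):
        # Shared-control idea from the abstract: among collision-free
        # candidates, stay closest to the driver's preferred velocity.
        free = [c for c in candidates if collision_free(c, *vo_args)]
        return min(free,
                   key=lambda c: math.hypot(c[0] - preferred[0],
                                            c[1] - preferred[1]),
                   default=None)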
Real-time dense mapping and pose estimation is essential for a wide range of navigation tasks in mobile robotic applications. We propose an odometry and mapping system that leverages the full photometric information from a stereo-vision system as well as inertial measurements in a probabilistic framework, while running in real time on a single low-power Intel CPU core. Instead of performing mapping and localization on a set of sparse image features, we use the complete dense image intensity information in our navigation system. By incorporating a probabilistic model of the stereo sensor and the IMU, we can robustly estimate the ego-motion as well as a dense 3D model of the environment in real time. The probabilistic formulation of the joint odometry estimation and mapping process enables efficient rejection of temporal outliers in ego-motion estimation as well as spatial outliers in the mapping process. To underline the versatility of the proposed navigation system, we evaluate it in a set of experiments on a multi-rotor system as well as on a quadrupedal walking robot. We tightly integrate our framework into the stabilization loop of the UAV and the mapping framework of the walking robot. The dense framework is shown to exhibit good tracking and mapping performance in terms of accuracy as well as robustness in scenarios with highly dynamic motion patterns, while retaining a relatively small computational footprint. This makes it an ideal candidate for control and navigation tasks in unstructured GPS-denied environments, for a wide range of robotic platforms with power and weight constraints. The proposed framework is released as an open-source ROS package.
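The core of such direct methods is a per-pixel photometric residual between a reference image and the current image warped through depth and a candidate camera motion. The sketch below shows a generic, Huber-weighted version of that residual (nearest-neighbor lookup and 4x4 subsampling for brevity); it illustrates the idea, not the authors' implementation:

    import numpy as np

    def photometric_residuals(I_ref, I_cur, depth, K, T, huber=10.0):
        # Huber-weighted intensity residuals for a candidate pose T (4x4
        # homogeneous transform from reference to current camera frame).
        h, w = I_ref.shape
        fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
        res = []
        for v in range(0, h, 4):
            for u in range(0, w, 4):
                z = depth[v, u]
                if z <= 0:  # no valid stereo depth at this pixel
                    continue
                # Back-project, transform into the current frame, re-project.
                p = T @ np.array([(u - cx) * z / fx, (v - cy) * z / fy, z, 1.0])
                if p[2] <= 0:
                    continue
                u2 = fx * p[0] / p[2] + cx
                v2 = fy * p[1] / p[2] + cy
                if 0 <= u2 < w - 1 and 0 <= v2 < h - 1:
                    r = float(I_ref[v, u]) - float(I_cur[int(v2), int(u2)])
                    # Down-weight large residuals so photometric outliers
                    # (occlusions, lighting changes) do not dominate.
                    wgt = 1.0 if abs(r) <= huber else huber / abs(r)
                    res.append(wgt * r)
        return res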