FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in flight in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s) in flight. While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time, while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting the nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest.
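To make the vehicle-in-the-loop architecture concrete, the sketch below shows the basic render loop the abstract implies: the real vehicle's pose, streamed from motion capture, drives a synthetic camera render each frame. All class and method names here are hypothetical stand-ins for illustration, not the actual FlightGoggles API.

    # Hypothetical vehicle-in-the-loop render loop; the real FlightGoggles
    # interface differs. Real dynamics come from the physical vehicle in
    # flight; only the exteroceptive sensor (a camera) is simulated.
    import time
    from dataclasses import dataclass

    @dataclass
    class Pose:
        position: tuple       # (x, y, z) in meters, from motion capture
        orientation: tuple    # unit quaternion (w, x, y, z)

    class MotionCapture:
        """Stand-in for the motion-capture feed of the real vehicle."""
        def latest_pose(self) -> Pose:
            return Pose((0.0, 0.0, 1.0), (1.0, 0.0, 0.0, 0.0))

    class Renderer:
        """Stand-in for the photogrammetry-based photorealistic renderer."""
        def render_rgb(self, pose: Pose) -> bytes:
            return b"synthetic-image"

    def run(mocap: MotionCapture, renderer: Renderer,
            frames: int, hz: float = 60.0) -> None:
        for _ in range(frames):
            pose = mocap.latest_pose()         # proprioception: measured in motio
            image = renderer.render_rgb(pose)  # exteroception: generated in silico
            # ...hand `image` to the perception stack under test here...
            time.sleep(1.0 / hz)

The key design point is that the physical vehicle never needs a model of its own aerodynamics or motor dynamics in the simulator; those effects enter the loop for free through the measured pose.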
Recent advances in visual-inertial state estimation have allowed quadrotor aircraft to autonomously navigate in unknown environments at operational speeds. In most cases, substantially higher speeds can be achieved by actively designing motion that reduces state estimation error. We are interested in autonomous vehicles running feature-based visual-inertial state estimation algorithms. In particular, we consider a trajectory optimization problem in which the goal is to maximize co-visibility of features, i.e., features are kept visible in the camera view from one keyframe to the next, increasing state estimation accuracy. Our algorithm is developed for autonomous quadrotor aircraft, for which position and yaw trajectories can be tracked separately. We assume that the desired positions of the vehicle are determined a priori, for instance, by a path planner that uses obstacles in the environment to generate a trajectory of positions with free yaw. This paper presents a novel algorithm that determines the yaw trajectory that jointly optimizes aggressiveness and feature co-visibility. The benefit of this algorithm was experimentally verified using a custom-built quadrotor that uses visual-inertial odometry for state estimation. The generated trajectories lead to better state estimation, which contributes to improved trajectory tracking by a state-of-the-art controller during autonomous high-speed flight. Our results show that the root-mean-square trajectory tracking error is improved by almost 70%.
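As a toy illustration of the underlying trade-off (not the paper's actual algorithm), one can score candidate yaw angles at each position waypoint by how many landmarks fall inside the camera's horizontal field of view, while penalizing large yaw changes so the motion stays trackable. All function names and parameters below are invented for the sketch.

    # Toy yaw planner trading feature visibility against yaw rate.
    # Not the paper's method; a greedy per-waypoint score for intuition only.
    import numpy as np

    def visible_count(pos, yaw, landmarks, half_fov=np.radians(45)):
        """Count landmarks within +/- half_fov of the camera's forward axis."""
        d = landmarks - pos                       # vectors camera -> landmarks
        bearings = np.arctan2(d[:, 1], d[:, 0])   # horizontal bearing of each
        err = np.abs(np.angle(np.exp(1j * (bearings - yaw))))  # wrapped diff
        return int(np.sum(err < half_fov))

    def plan_yaw(positions, landmarks,
                 candidates=np.linspace(-np.pi, np.pi, 72), rate_weight=2.0):
        """Pick a yaw per waypoint: reward co-visible landmarks, penalize turns."""
        yaws, prev = [], 0.0
        for pos in positions:
            scores = [visible_count(pos, y, landmarks)
                      - rate_weight * abs(np.angle(np.exp(1j * (y - prev))))
                      for y in candidates]
            prev = candidates[int(np.argmax(scores))]
            yaws.append(prev)
        return np.array(yaws)

The paper's contribution is a principled, jointly optimized version of this trade-off over the whole trajectory; the greedy scoring above only conveys why yaw and co-visibility are coupled.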
The Blackbird unmanned aerial vehicle (UAV) dataset is a large-scale, aggressive indoor flight dataset collected using a custom-built quadrotor platform for use in evaluation of agile perception. Inspired by the potential of future high-speed fully autonomous drone racing, the Blackbird dataset contains over 10 hours of flight data from 168 flights over 17 flight trajectories and 5 environments at velocities up to 7.0 m s⁻¹. Each flight includes sensor data from 120 Hz stereo and downward-facing photorealistic virtual cameras, a 100 Hz IMU, ∼190 Hz motor speed sensors, and 360 Hz millimeter-accurate motion capture ground truth. Camera images for each flight were photorealistically rendered using FlightGoggles [1] across a variety of environments to facilitate easy experimentation with high-performance perception algorithms. The dataset is available for download at http://blackbird-dataset.
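Because the streams run at different rates (120 Hz cameras, 100 Hz IMU, ∼190 Hz motor speeds, 360 Hz ground truth), a first step in working with such a dataset is temporal association. Below is a minimal sketch that pairs each camera timestamp with the nearest motion-capture sample; the function name and pose layout are illustrative assumptions, not the dataset's actual file format.

    # Align 120 Hz camera frames to 360 Hz motion-capture ground truth by
    # nearest timestamp. Layout [x, y, z, qw, qx, qy, qz] is assumed here.
    import numpy as np

    def nearest_ground_truth(cam_t, mocap_t, mocap_poses):
        """For each camera timestamp, return the closest mocap pose in time."""
        idx = np.searchsorted(mocap_t, cam_t)      # insertion points
        idx = np.clip(idx, 1, len(mocap_t) - 1)
        left, right = mocap_t[idx - 1], mocap_t[idx]
        idx -= (cam_t - left) < (right - cam_t)    # step back if left is nearer
        return mocap_poses[idx]

    # Example at the dataset's nominal rates over one second of data.
    cam_t = np.arange(0.0, 1.0, 1 / 120)
    mocap_t = np.arange(0.0, 1.0, 1 / 360)
    mocap_poses = np.random.rand(len(mocap_t), 7)  # placeholder poses
    gt = nearest_ground_truth(cam_t, mocap_t, mocap_poses)  # shape (120, 7)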