This article presents the core technologies and deployment strategies of Team CERBERUS that enabled our winning run in the DARPA Subterranean Challenge finals. CERBERUS is a robotic system-of-systems involving walking and flying robots that offer resilient autonomy, mapping, and navigation capabilities to explore complex underground environments.
Autonomous exploration of subterranean environments constitutes a major frontier for robotic systems, as underground settings present key challenges that can render robot autonomy hard to achieve. This problem has motivated the DARPA Subterranean Challenge, where teams of robots search for objects of interest in various underground environments. In response, we present the CERBERUS system-of-systems, a unified strategy for subterranean exploration using legged and flying robots. Our approach relies on ANYmal quadruped robots as the primary platforms, exploiting their endurance and ability to traverse challenging terrain. For aerial robots, we use both conventional and collision-tolerant multirotors to explore spaces too narrow or otherwise unreachable by ground systems. Anticipating degraded sensing conditions, we developed a complementary multimodal sensor-fusion approach, utilizing camera, LiDAR, and inertial data for resilient robot pose estimation. Individual robot pose estimates are refined by a centralized multi-robot map-optimization approach to improve the reported location accuracy of detected objects of interest in the DARPA-defined coordinate frame. Furthermore, a unified exploration path-planning policy is presented to facilitate the autonomous operation of both legged and aerial robots in complex underground networks. Finally, to enable communication among team agents and the base station, CERBERUS utilizes a ground rover with a high-gain antenna and an optical-fiber connection to the base station, together with wireless “breadcrumb” nodes deployed by the legged robots. We report results from the CERBERUS system-of-systems deployment at the DARPA Subterranean Challenge’s Tunnel and Urban Circuit events, along with the current limitations and the lessons learned for the benefit of the community.
Robust and accurate pose estimation is crucial for many applications in mobile robotics. Extending visual Simultaneous Localization and Mapping (SLAM) with other modalities such as an inertial measurement unit (IMU) can boost robustness and accuracy. However, for tight sensor fusion, accurate time synchronization of the sensors is often crucial. Changing exposure times, internal sensor filtering, multiple clock sources, and unpredictable delays from operating-system scheduling and data transfer can make sensor synchronization challenging. In this paper, we present VersaVIS, an open, versatile multi-camera visual-inertial sensor suite designed as an efficient research platform for easy deployment, integration, and extension in many mobile robotic applications. VersaVIS provides a complete, open-source hardware, firmware, and software bundle to perform time synchronization of multiple cameras with an IMU, featuring exposure compensation, host clock translation, and independent and stereo camera triggering. The sensor suite supports a wide range of cameras and IMUs to match the requirements of the application. The synchronization accuracy of the framework is evaluated in multiple experiments, achieving a timing accuracy of better than 1 ms. Furthermore, the applicability and versatility of the sensor suite are demonstrated in multiple applications including visual-inertial SLAM, multi-camera setups, multimodal mapping, reconstruction, and object-based mapping.
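The exposure-compensation idea can be illustrated with a minimal sketch (a hypothetical helper, not VersaVIS's actual firmware API): since the image content is best associated with the middle of the exposure interval, the compensated timestamp is the trigger time plus half the exposure time, keeping frames with varying auto-exposure aligned with IMU time.

```python
def mid_exposure_timestamp(trigger_time_s: float, exposure_time_s: float) -> float:
    """Exposure-compensated timestamp: associate the image with the
    middle of the exposure interval rather than the trigger instant."""
    return trigger_time_s + 0.5 * exposure_time_s

# Two frames triggered 50 ms apart but with different auto-exposure
# settings still receive correctly spaced mid-exposure timestamps.
t0 = mid_exposure_timestamp(10.000, 0.004)   # 10.002 s
t1 = mid_exposure_timestamp(10.050, 0.012)   # 10.056 s
```

Without this compensation, a change in auto-exposure between consecutive frames would appear as a spurious timing jitter to the visual-inertial estimator.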
We propose PHASER, a correspondence-free global registration method for sensor-centric point clouds that is robust to noise, sparsity, and partial overlap. Our method can seamlessly handle multimodal information and does not rely on keypoint or descriptor preprocessing modules. By exploiting properties of Fourier analysis, PHASER operates directly on the sensor's signal, fusing the spectra of multiple channels and computing the 6-DoF transformation based on correlation. Our registration pipeline starts by finding the most likely rotation r ∈ SO(3), followed by computing the most likely translation t ∈ R³. Both estimates, r and t, are distributed according to probability distributions that take the underlying manifolds into account, i.e., a Bingham and a Gaussian distribution, respectively. This further allows our approach to consider the periodic nature of r and naturally represent its uncertainty. We extensively compare PHASER against several well-known registration algorithms on both simulated datasets and real-world data acquired using different sensor configurations. Our results show that PHASER can globally align point clouds in less than 100 ms with an average accuracy of 2 cm and 0.5°, is resilient against noise, and can handle partial overlap.
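The correlation principle behind PHASER's translation step can be sketched in one dimension with FFT-based phase correlation (a toy illustration, not the actual 6-DoF pipeline): the whitened cross-power spectrum of two signals, transformed back to the spatial domain, peaks at their relative shift.

```python
import numpy as np

def phase_correlation_shift(a: np.ndarray, b: np.ndarray) -> int:
    """Estimate the circular shift k such that b ≈ np.roll(a, k),
    from the peak of the normalized cross-power spectrum."""
    A, B = np.fft.fft(a), np.fft.fft(b)
    cross = np.conj(A) * B
    cross /= np.abs(cross) + 1e-12          # whitening: keep phase only
    corr = np.fft.ifft(cross).real          # correlation over all shifts
    return int(np.argmax(corr))

signal = np.array([0.0, 1.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0])
shifted = np.roll(signal, 3)
print(phase_correlation_shift(signal, shifted))  # 3
```

Because every candidate shift is evaluated at once in the frequency domain, this search is global and needs no correspondences; PHASER applies the same idea on SO(3) for rotation before solving the translation.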
This paper presents the perception, mapping, and planning pipeline implemented on an autonomous race car. It was developed by the 2019 AMZ driverless team for the Formula Student Germany (FSG) 2019 driverless competition, where it won 1st place overall. The presented solution combines early fusion of camera and LiDAR data, a layered mapping approach, and a planning approach that uses Bayesian filtering to achieve high-speed driving on unknown race tracks while creating accurate maps. We benchmark the method against our team's previous solution, which won FSG 2018, and show improved accuracy when driving at the same speeds. Furthermore, the new pipeline makes it possible to reliably raise the maximum driving speed in unknown environments from 3 m/s to 12 m/s while still mapping with an acceptable RMSE of 0.29 m.
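The Bayesian filtering used in the planning layer can be illustrated with a minimal log-odds existence filter for a cone landmark (a generic sketch under assumed evidence weights, not the AMZ team's actual implementation): each detection or missed detection shifts the landmark's log-odds, and a logistic function recovers the probability.

```python
import math

def update_log_odds(l: float, detected: bool,
                    l_hit: float = 0.85, l_miss: float = -0.4) -> float:
    """Bayesian log-odds update for a landmark's existence:
    add evidence on a detection, subtract it on a missed detection."""
    return l + (l_hit if detected else l_miss)

def existence_probability(l: float) -> float:
    """Convert log-odds back to a probability via the logistic function."""
    return 1.0 / (1.0 + math.exp(-l))

# Three consecutive detections of a candidate cone
l = 0.0                          # prior log-odds 0, i.e. p = 0.5
for _ in range(3):
    l = update_log_odds(l, detected=True)
print(round(existence_probability(l), 3))  # 0.928
```

Accumulating evidence in log-odds keeps each update a cheap addition, which matters when the map must be refreshed at high driving speeds.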