In this paper we present a large dataset collected with a variety of mobile mapping sensors on a handheld device carried at typical walking speeds for nearly 2.2 km through New College, Oxford. The dataset includes data from two commercially available devices: a stereoscopic-inertial camera and a multi-beam 3D LiDAR, which also provides inertial measurements. Additionally, we used a tripod-mounted survey-grade LiDAR scanner to capture a detailed millimeter-accurate 3D map of the test location (containing ∼290 million points). Using the map we inferred centimeter-accurate 6 Degree of Freedom (DoF) ground truth for the position of the device for each LiDAR scan, enabling better evaluation of LiDAR and vision localisation, mapping and reconstruction systems. This ground truth is the particular novel contribution of this dataset and we believe that it will enable the systematic evaluation which many similar datasets have lacked. The dataset combines built environments, open spaces and vegetated areas so as to test localisation and mapping systems such as vision-based navigation, visual and LiDAR SLAM, 3D LiDAR reconstruction and appearance-based place recognition. The dataset is available at: ori.ox.ac.uk/datasets/newer-college-dataset
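The centimeter-accurate ground truth described above is typically consumed by comparing an estimated trajectory against it with a metric such as the absolute trajectory error (ATE). A minimal sketch of that computation is below; the function name and the simple mean-offset alignment are illustrative assumptions (a full evaluation would use an SE(3)/Umeyama alignment):

```python
import numpy as np

def absolute_trajectory_error(gt_positions, est_positions):
    """RMSE of position error between a ground-truth and an estimated trajectory.

    Illustrative sketch: aligns the trajectories by removing the mean offset
    only; a rigorous benchmark would solve for a full rigid-body alignment.
    """
    gt = np.asarray(gt_positions, dtype=float)
    est = np.asarray(est_positions, dtype=float)
    # Translation-only alignment: shift the estimate onto the ground truth.
    est_aligned = est - est.mean(axis=0) + gt.mean(axis=0)
    # Per-pose Euclidean position errors, then root-mean-square.
    errors = np.linalg.norm(gt - est_aligned, axis=1)
    return np.sqrt(np.mean(errors ** 2))
```

With per-scan ground truth, this yields one scalar accuracy figure per SLAM or odometry run, making systems directly comparable on the dataset.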
Legged robots, specifically quadrupeds, are becoming increasingly attractive for industrial applications such as inspection. However, to leave the laboratory and become useful to an end user, they require reliability in harsh conditions. From the perspective of state estimation, it is essential to accurately estimate the robot's state despite challenges such as uneven or slippery terrain, textureless and reflective scenes, as well as dynamic camera occlusions. We are motivated to reduce the dependency on foot contact classifications, which fail when slipping, and to reduce position drift during dynamic motions such as trotting. To this end, we present a factor graph optimization method for state estimation which tightly fuses and smooths inertial navigation, leg odometry and visual odometry. The effectiveness of the approach is demonstrated using the ANYmal quadruped robot navigating in a realistic outdoor industrial environment. This experiment included trotting, walking, crossing obstacles and ascending a staircase. The proposed approach decreased the relative position error by up to 55% and the absolute position error by 76% compared to kinematic-inertial odometry.
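The core idea of the factor graph fusion above is that each odometry source contributes relative-motion factors between consecutive poses, and the optimizer finds the pose chain that best satisfies all of them jointly, weighted by each source's noise. A minimal 1-D sketch under assumed noise values (the function name, sigmas and linear least-squares solver are illustrative, not the paper's implementation, which operates on full 6-DoF poses):

```python
import numpy as np

def fuse_odometry(leg_deltas, vis_deltas, sigma_leg=0.05, sigma_vis=0.02):
    """Fuse two 1-D relative-odometry streams over a pose chain.

    Each measurement contributes a factor x_{i+1} - x_i = delta, weighted by
    the inverse variance of its source. A strong prior anchors the first pose.
    """
    n = len(leg_deltas) + 1  # poses x_0 .. x_{n-1}
    rows, rhs, weights = [], [], []
    # Prior factor anchoring x_0 at the origin.
    r = np.zeros(n); r[0] = 1.0
    rows.append(r); rhs.append(0.0); weights.append(1e6)
    # One relative factor per source and per step.
    for deltas, sigma in ((leg_deltas, sigma_leg), (vis_deltas, sigma_vis)):
        for i, d in enumerate(deltas):
            r = np.zeros(n); r[i] = -1.0; r[i + 1] = 1.0
            rows.append(r); rhs.append(d); weights.append(1.0 / sigma ** 2)
    A = np.asarray(rows); b = np.asarray(rhs)
    w = np.sqrt(np.asarray(weights))
    # Weighted linear least squares; real factor graphs solve this nonlinearly.
    x, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
    return x
```

When one source degrades (e.g. leg odometry during slipping), inflating its sigma down-weights its factors, which is the mechanism that makes tightly coupled fusion robust to individual sensor failures.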
Autonomous exploration of subterranean environments constitutes a major frontier for robotic systems, as underground settings present key challenges that can render robot autonomy hard to achieve. This problem has motivated the DARPA Subterranean Challenge, where teams of robots search for objects of interest in various underground environments. In response, we present the CERBERUS system-of-systems as a unified strategy for subterranean exploration using legged and flying robots. Our proposed approach relies on ANYmal quadruped robots as its primary agents, exploiting their endurance and ability to traverse challenging terrain. For aerial robots, we use both conventional and collision-tolerant multirotors to explore spaces too narrow or otherwise unreachable by ground systems. Anticipating degraded sensing conditions, we developed a complementary multimodal sensor-fusion approach, utilizing camera, LiDAR, and inertial data for resilient robot pose estimation. Individual robot pose estimates are refined by a centralized multi-robot map-optimization approach to improve the reported location accuracy of detected objects of interest in the DARPA-defined coordinate frame. Furthermore, a unified exploration path-planning policy is presented to facilitate the autonomous operation of both legged and aerial robots in complex underground networks. Finally, to enable communication among team agents and the base station, CERBERUS utilizes a ground rover with a high-gain antenna and an optical fiber connection to the base station, alongside wireless “breadcrumb” nodes deployed by the legged robots. We report results from the CERBERUS system-of-systems deployment at the DARPA Subterranean Challenge’s Tunnel and Urban Circuit events, along with the current limitations and the lessons learned for the benefit of the community.