This paper introduces ANYmal, a quadrupedal robot that features outstanding mobility and dynamic motion capability. Thanks to novel, compliant joint modules with integrated electronics, the 30 kg, 0.5 m tall robotic dog is torque controllable and very robust against impulsive loads during running or jumping. The presented machine was designed with a focus on outdoor suitability, simple maintenance, and user-friendly handling to enable future operation in real-world scenarios. Performance tests with the joint actuators indicated a torque control bandwidth of more than 70 Hz, high disturbance rejection capability, as well as impact robustness when moving at maximal velocity. A series of experiments demonstrates that ANYmal can execute walking gaits, dynamically trot at moderate speed, and perform special maneuvers to stand up or climb very steep stairs. Detailed measurements reveal that even full-speed running requires less than 280 W, resulting in an autonomy of more than 2 h.
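The stated power draw and runtime imply a minimum usable battery energy, which can be checked with a quick back-of-envelope calculation (the battery capacity below is derived from the abstract's figures, not a quoted specification):

```python
# Back-of-envelope check of the reported autonomy figures.
power_w = 280.0    # reported worst-case locomotion power [W]
runtime_h = 2.0    # reported autonomy [h]

# Minimum usable battery energy implied by these numbers.
energy_wh = power_w * runtime_h
print(energy_wh)   # 560.0 Wh of usable energy
```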
With the introduction of the Microsoft Kinect for Windows v2 (Kinect v2), an exciting new sensor is available to robotics and computer vision researchers. Like the original Kinect, the sensor is capable of acquiring accurate depth images at high rates. This is useful for robot navigation, as dense and robust maps of the environment can be created. Unlike the original Kinect, which relies on structured light, the Kinect v2 is based on the time-of-flight measurement principle and can therefore also be used outdoors in sunlight. In this paper, we evaluate the application of the Kinect v2 depth sensor for mobile robot navigation. The results of calibrating the intrinsic camera parameters are presented, and the minimal range of the depth sensor is examined. We analyze the data quality of the measurements indoors and outdoors, in both overcast and direct-sunlight conditions. To this end, we introduce empirically derived noise models for the Kinect v2 sensor in both axial and lateral directions. The noise models take the measurement distance, the angle of the observed surface, and the sunlight incidence angle into account. These models can be used in post-processing to filter the Kinect v2 depth images for a variety of applications.
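The kind of distance- and angle-dependent noise model described above can be used directly as a measurement filter. The sketch below is illustrative only: the polynomial-in-distance form follows the abstract's description, but the coefficients and the grazing-angle term are placeholder assumptions, not the paper's fitted values.

```python
import math

def axial_noise_std(z, theta, a0=1.0e-3, a1=2.0e-3, a2=2.5e-3):
    """Illustrative axial (along-ray) noise model for a ToF depth camera.

    z     : measured distance [m]
    theta : angle between the ray and the surface normal [rad]
    Coefficients a0..a2 are placeholders, not the paper's fitted values.
    """
    # Noise grows roughly quadratically with distance and increases
    # sharply as the surface is viewed at grazing angles.
    grazing = theta / max(1e-6, (math.pi / 2.0 - theta))
    return a0 + a1 * z + a2 * z ** 2 * (1.0 + grazing ** 2)

def filter_depth(points, max_std=0.01):
    """Discard (distance, angle) measurements whose predicted noise exceeds max_std."""
    return [(z, th) for (z, th) in points if axial_noise_std(z, th) < max_std]
```

In post-processing, each depth pixel's predicted standard deviation can either gate the measurement (as here) or weight it in a downstream mapping pipeline.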
This paper provides a system overview of ANYmal, a quadrupedal robot developed for operation in harsh environments. The 30 kg, 0.5 m tall robotic dog was built in a modular way for simple maintenance and user-friendly handling, while focusing on high mobility and dynamic motion capability. The system is tightly sealed to reach the IP67 standard and protected to survive falls. Rotating lidar sensors in the front and back are used for localization and terrain mapping, and compact force sensors in the feet provide accurate measurements of the contact situations. The variable payload, such as a modular pan-tilt head with a variety of inspection sensors, can be exchanged depending on the application. Thanks to novel, compliant joint modules with integrated electronics, ANYmal is precisely torque controllable and very robust against impulsive loads during running or jumping. In a series of experiments we demonstrate that ANYmal can execute various climbing maneuvers, walking gaits, as well as a dynamic trot and jump. As a special feature, the joints can be fully rotated to switch between X- and O-type kinematic configurations. Detailed measurements reveal a low energy consumption of 280 W during locomotion, which results in an autonomy of more than 2 h.
Mobile robots rely on accurate, real-time mapping with onboard range sensors to achieve autonomous navigation over rough terrain. Existing approaches often rely on absolute localization based on tracking of external geometric or visual features. To circumvent the reliability issues of these approaches, we propose a novel terrain mapping method that is based on proprioceptive localization from kinematic and inertial measurements only. The proposed method incorporates the drift and uncertainties of the state estimation and a noise model of the distance sensor. It yields a probabilistic terrain estimate as a grid-based elevation map, including upper and lower confidence bounds. We demonstrate the effectiveness of our approach with simulated datasets and real-world experiments for real-time terrain mapping with legged robots, and compare the terrain reconstruction to ground-truth reference maps.
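The probabilistic elevation estimate with confidence bounds can be illustrated with a one-dimensional Kalman update per grid cell. This is a minimal sketch of the fusion idea only; the full method described above also handles state-estimation drift and map motion, which are omitted here.

```python
import math

def fuse_cell(h, var, h_meas, var_meas):
    """One-dimensional Kalman update of a single elevation-map cell.

    h, var           : current height estimate and its variance
    h_meas, var_meas : new height measurement and its variance
    Returns the fused estimate, its variance, and ~95% confidence bounds.
    """
    k = var / (var + var_meas)            # Kalman gain
    h_new = h + k * (h_meas - h)
    var_new = (1.0 - k) * var
    bound = 1.96 * math.sqrt(var_new)     # ~95% interval half-width
    return h_new, var_new, (h_new - bound, h_new + bound)
```

Fusing two equally uncertain heights yields their midpoint with half the variance, and the confidence interval shrinks accordingly as more measurements arrive.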
Robots working in natural, urban, and industrial settings need to be able to navigate challenging environments. In this paper, we present a motion planner for perceptive rough-terrain locomotion with quadrupedal robots. The planner finds safe footholds along with collision-free swing-leg motions by leveraging an acquired terrain map. To this end, we present a novel pose optimization approach that enables the robot to climb over significant obstacles. We experimentally validate our approach with the quadrupedal robot ANYmal by autonomously traversing obstacles such as steps, inclines, and stairs. The locomotion planner re-plans the motion at every step to cope with disturbances and dynamic environments. The robot has no prior knowledge of the scene, and all mapping, state estimation, control, and planning are performed in real-time onboard the robot.
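Selecting safe footholds from a terrain map can be sketched as scoring candidate cells on an elevation grid. The window size and cost weights below are illustrative assumptions, not the paper's tuned values, and the sketch ignores kinematic reachability and swing-leg collisions.

```python
def foothold_cost(elev, r, c, default=float("inf")):
    """Score a candidate foothold cell on a 2-D elevation grid (lower is better).

    Combines local height spread (a crude slope proxy) and roughness over a
    3x3 neighborhood; weights are illustrative only.
    """
    rows, cols = len(elev), len(elev[0])
    if not (1 <= r < rows - 1 and 1 <= c < cols - 1):
        return default                       # no full 3x3 neighborhood
    window = [elev[i][j] for i in (r - 1, r, r + 1) for j in (c - 1, c, c + 1)]
    mean = sum(window) / 9.0
    spread = max(window) - min(window)       # crude local slope proxy
    roughness = sum((h - mean) ** 2 for h in window) / 9.0
    return 1.0 * spread + 5.0 * roughness

def best_foothold(elev, candidates):
    """Pick the lowest-cost foothold among candidate (row, col) cells."""
    return min(candidates, key=lambda rc: foothold_cost(elev, *rc))
```

On a flat grid with one bump, the selector avoids the bumped neighborhood and picks a flat cell.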
This paper presents a framework for planning safe and efficient paths for a legged robot in rough and unstructured terrain. The proposed approach makes it possible to exploit the distinctive obstacle negotiation capabilities of legged robots, while keeping the complexity low enough to enable planning over considerable distances in a short time. We compute typical terrain characteristics such as slope, roughness, and steps to build a traversability map. This map is used to assess the costs of individual robot footprints as a function of the robot-specific obstacle negotiation capabilities for steps, gaps, and stairs. Our sampling-based planner employs the RRT* algorithm to optimize path length and safety. The planning framework has a hierarchical architecture to frequently re-plan the path during execution as new terrain is perceived with onboard sensors. Furthermore, a cascaded planning structure makes use of different levels of simplification to allow for fast search in simple environments, while retaining the ability to find complex solutions, such as paths through narrow passages. The proposed navigation planning framework is integrated on the quadrupedal robot StarlETH and extensively tested in simulation as well as on the real platform.
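The traversability-map construction can be sketched as combining per-cell terrain characteristics against robot-specific limits. The limits and the equal weighting below are hypothetical placeholders; the actual framework uses robot-specific cost functions for steps, gaps, and stairs.

```python
def traversability(slope, roughness, step, limits=(0.6, 0.05, 0.2)):
    """Combine terrain characteristics into a traversability value in [0, 1].

    slope [rad], roughness [m], and step height [m] are assumed to be
    precomputed per map cell; `limits` are hypothetical robot-specific
    maxima. 1.0 means easy terrain, 0.0 means untraversable.
    """
    max_slope, max_rough, max_step = limits
    terms = (slope / max_slope, roughness / max_rough, step / max_step)
    if any(t > 1.0 for t in terms):
        return 0.0                      # exceeds the robot's capability
    # Equal weighting of the three normalized characteristics (a sketch,
    # not the paper's exact cost formula).
    return 1.0 - sum(terms) / 3.0
```

A sampling-based planner such as RRT* can then use `1 - traversability` as a per-cell cost and reject samples in cells scored 0.0.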
This paper addresses the local terrain mapping process for an autonomous robot. Building upon an onboard range measurement sensor and an existing robot pose estimation, we formulate a novel elevation mapping method from a robot-centric perspective. This formulation can explicitly handle drift of the robot pose estimation which occurs for many autonomous robots. Our mapping approach fully incorporates the distance sensor measurement uncertainties and the six-dimensional pose covariance of the robot. We introduce a computationally efficient formulation of the map fusion process, which allows for mapping a terrain at high update rates. Finally, our approach is demonstrated on a quadrupedal robot walking over obstacles.
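The incorporation of sensor noise and pose covariance into each cell's height uncertainty can be illustrated with a first-order propagation. The decomposition below (vertical drift plus a pitch lever-arm term) is a simplified sketch of the idea; the full method propagates the complete six-dimensional pose covariance.

```python
def cell_height_variance(sensor_std, pose_std_z, pose_std_pitch, range_xy):
    """First-order variance of a mapped cell height in a robot-centric frame.

    Adds the range sensor's axial noise to the uncertainty induced by the
    robot pose estimate: vertical drift enters directly, while pitch error
    is scaled by the horizontal distance to the measured point (small-angle
    approximation of the Jacobian).
    """
    var = sensor_std ** 2                        # distance sensor noise
    var += pose_std_z ** 2                       # vertical pose drift
    var += (range_xy * pose_std_pitch) ** 2      # pitch error lever arm
    return var
```

Points measured farther from the robot inherit more uncertainty from the orientation estimate, which is why distant cells in the resulting map carry wider confidence intervals.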