Legged robots can adapt their walking posture to navigate confined spaces thanks to their high degrees of freedom; however, this ability has not been exploited in most common multilegged platforms. This paper presents a deformable bounding box abstraction of the robot model, together with accompanying mapping and planning strategies, that enables a legged robot to autonomously change its body shape to navigate confined spaces. Mapping is achieved using robot-centric multielevation maps generated from distance sensors carried by the robot. Path planning is based on the trajectory optimisation algorithm CHOMP, which creates smooth trajectories while avoiding obstacles. The proposed method has been tested in simulation and implemented on the hexapod robot Weaver, which is 33 cm tall and 82 cm wide when walking normally. We demonstrate navigation under 25 cm overhanging obstacles, through 70 cm wide gaps and over 22 cm high obstacles, both in artificial testing spaces and in realistic environments, including a subterranean mining tunnel.
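The core idea of CHOMP-style trajectory optimisation can be illustrated with a minimal 2-D sketch (this is not the paper's implementation; the circular obstacle, cost weights and step sizes are all hypothetical): waypoints descend the gradient of a smoothness cost plus a penalty for entering an obstacle, with the endpoints held fixed.

```python
import numpy as np

def chomp_like(traj, obs_center, obs_radius, n_iters=300, lr=0.01, w_obs=1.0):
    """Gradient-descent sketch of CHOMP-style optimisation in 2-D.
    traj: (N, 2) waypoints; the endpoints stay fixed."""
    traj = traj.copy()
    for _ in range(n_iters):
        # Smoothness term: the discrete Laplacian pulls each waypoint
        # towards the midpoint of its neighbours.
        grad = np.zeros_like(traj)
        grad[1:-1] = 2 * traj[1:-1] - traj[:-2] - traj[2:]
        # Obstacle term: waypoints inside the disc are pushed outwards.
        diff = traj - obs_center
        dist = np.linalg.norm(diff, axis=1, keepdims=True)
        inside = dist < obs_radius
        grad -= w_obs * np.where(inside, diff / np.maximum(dist, 1e-9), 0.0)
        grad[0] = grad[-1] = 0.0          # start and goal are fixed
        traj -= lr * grad
    return traj

# Straight line that initially cuts through the obstacle.
line = np.linspace(np.array([0.0, 0.0]), np.array([1.0, 0.0]), 21)
obstacle, radius = np.array([0.5, 0.05]), 0.2
optimised = chomp_like(line, obstacle, radius)
```

The smoothness gradient keeps the path short and continuous while the obstacle gradient deforms it locally, which is why the resulting trajectories stay smooth rather than detouring sharply.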
Regular inspection of sewer systems is essential to assess their level of degradation and to plan maintenance work. Currently, human inspectors must walk through sewers and use their sense of touch to inspect the roughness of the floor and check for cracks; touch is used because the floor is often covered by (waste) water and biofilm, which makes visual inspection very challenging. In this paper, we demonstrate a robotic inspection system that evaluates concrete deterioration through tactile interaction. We deployed the quadruped robot ANYmal in the sewers of Zurich and commanded it using shared autonomy across several such missions. The inspection itself is realized via a well-defined scratching motion of one of the limbs on the sewer floor. Inertial and force/torque sensors embedded in specially designed feet capture the resulting vibrations, and a pretrained support vector machine (SVM) assesses the state of the concrete from these signals. The classification results are then displayed in a three-dimensional map recorded by the robot for easy visualization and assessment. To train the SVM, we recorded 625 samples with ground-truth labels provided by professional sewer inspectors; we make this data set publicly available. The classifier estimates the deterioration level across three classes with more than 92% accuracy. During the four deployment missions, we covered a total distance of 300 m and acquired 130 inspection samples.
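The classification step can be sketched as follows. This is a toy, binary version (the paper distinguishes three deterioration levels), using synthetic data in place of real vibration features and a linear SVM trained with the Pegasos subgradient method rather than the authors' actual pipeline; the feature names are hypothetical.

```python
import numpy as np

def pegasos_svm(X, y, lam=0.01, n_iters=2000, seed=0):
    """Linear SVM trained with the Pegasos stochastic subgradient method.
    X: (n, d) feature vectors, e.g. RMS amplitude and spectral energy of
    the scratching vibration (hypothetical features); y: labels in {-1, +1}.
    No bias term, for brevity."""
    rng = np.random.default_rng(seed)
    n = len(X)
    w = np.zeros(X.shape[1])
    for t in range(1, n_iters + 1):
        i = int(rng.integers(n))
        eta = 1.0 / (lam * t)            # decaying step size
        if y[i] * (X[i] @ w) < 1:        # margin violated: pull towards sample
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                            # margin satisfied: only shrink
            w = (1 - eta * lam) * w
    return w

# Synthetic two-class data standing in for "sound" vs "deteriorated" concrete.
rng = np.random.default_rng(42)
sound = rng.normal([2.0, 2.0], 1.0, (50, 2))
worn = rng.normal([-2.0, -2.0], 1.0, (50, 2))
X = np.vstack([sound, worn])
y = np.hstack([np.ones(50), -np.ones(50)])
w = pegasos_svm(X, y)
accuracy = np.mean(np.sign(X @ w) == y)
```

A multi-class variant (as in the paper) would typically train one such classifier per class in a one-vs-rest scheme.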
Legged robots are exceedingly versatile and, thanks to their many degrees of freedom, have the potential to navigate complex, confined spaces. However, owing to the computational complexity involved, no online planners exist for perceptive whole-body locomotion in tight spaces. In this paper, we present a new method for perceptive planning for multilegged robots that generates body poses, footholds, and swing trajectories for collision avoidance. Measurements from an onboard depth camera are used to create a three-dimensional map of the terrain around the robot. We randomly sample body poses and then smooth the resulting trajectory while satisfying several constraints, such as robot kinematics and collision avoidance. Footholds and swing trajectories are computed from the terrain, and the robot body pose is optimized to ensure stable locomotion without colliding with the environment. Our method is designed to run online on a real robot and to generate trajectories several meters long. We first tested our algorithm in several simulations of varied confined spaces using the quadrupedal robot ANYmal, and also simulated experiments with the hexapod robot Weaver to demonstrate applicability to different legged robot configurations. We then demonstrated our whole-body planner in several online experiments, both indoors and in realistic scenarios at an emergency rescue training facility. ANYmal, which has a nominal standing height of 80 cm and a width of 59 cm, navigated through several representative disaster areas with openings as small as 60 cm. Three-meter trajectories were replanned with 500 ms update times.
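The sample-then-smooth step can be illustrated in 2-D. This is a drastic simplification of the paper's 6-DoF body-pose planning: the disc obstacle, sampling bounds and averaging-based smoother below are all hypothetical stand-ins.

```python
import numpy as np

def is_free(p):
    """Collision check: free space is everything outside a disc obstacle."""
    return np.linalg.norm(p - np.array([0.5, 0.5])) > 0.2

def sample_free_pose(rng, lo, hi, max_tries=1000):
    """Rejection-sample a collision-free 2-D body position."""
    for _ in range(max_tries):
        p = rng.uniform(lo, hi, size=2)
        if is_free(p):
            return p
    raise RuntimeError("no free pose found")

def smooth(path, n_passes=5):
    """Smooth a waypoint path with repeated weighted moving averages,
    holding both endpoints fixed."""
    path = np.asarray(path, float).copy()
    for _ in range(n_passes):
        path[1:-1] = (path[:-2] + 2.0 * path[1:-1] + path[2:]) / 4.0
    return path

rng = np.random.default_rng(0)
pose = sample_free_pose(rng, 0.0, 1.0)
jagged = np.array([[0, 0], [0.2, 0.3], [0.4, -0.3],
                   [0.6, 0.3], [0.8, -0.3], [1, 0]], float)
smoothed = smooth(jagged)
```

On the real system, the smoothed trajectory must be re-checked against the kinematic and collision constraints after each smoothing pass, which this sketch omits.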
In extreme environments, darkness, airborne obscurants or sensor damage can hinder legged robot navigation based on cameras and lidar, whereas proprioceptive sensing continues to work reliably. In this paper, we propose a purely proprioceptive localization algorithm that fuses geometric and terrain-type information to localize a legged robot within a prior map. First, a terrain classifier computes, from the sensed foot forces, the probability that a foot has stepped on a particular terrain class. Then, a Monte Carlo-based estimator fuses this terrain probability with the geometric information of the foot contact points. Results demonstrate this approach operating online and onboard an ANYmal B300 quadruped robot traversing more than 1.2 km of terrain courses with different geometries and terrain types. The method keeps pose estimation error below 20 cm against a prior map, using only the trained terrain classifier and sensing from the feet, leg joints and IMU.
Continuous robot operation in extreme scenarios such as underground mines or sewers is difficult because exteroceptive sensors may fail due to fog, darkness, dirt or malfunction. To enable autonomous navigation in such situations, we have developed a proprioceptive localization method that exploits the foot contacts made by a quadruped robot to localize against a prior map of the environment, without any camera or LIDAR sensor. The proposed method enables the robot to accurately re-localize itself after making a sequence of contact events over a terrain feature. The method is based on Sequential Monte Carlo and supports both 2.5D and 3D prior map representations. We tested the approach online and onboard the ANYmal quadruped robot in two scenarios: traversal of a custom-built wooden terrain course, and a wall probing and following task. In both scenarios, the robot effectively achieved a localization match and executed a desired preplanned path. The method keeps the localization error down to 10 cm on feature-rich terrain using only foot, kinematic and inertial sensing.
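The Sequential Monte Carlo idea behind both of these haptic localization papers can be sketched on a 1-D toy map. The map contents, class count, classifier accuracy and noise levels below are all hypothetical; the real systems operate on 2.5D/3D maps.

```python
import numpy as np

# Prior 1-D terrain map: one terrain class (0, 1 or 2) per cell.
TERRAIN = np.array([0, 0, 0, 1, 1, 2, 2, 2, 0, 0,
                    1, 1, 1, 2, 0, 0, 0, 1, 2, 2])
P_CORRECT = 0.85   # assumed accuracy of the foot-contact classifier

def smc_step(particles, step, sensed, rng):
    """One Sequential Monte Carlo update: propagate particles by the
    commanded step plus motion noise, weight them by the likelihood of
    the sensed terrain class, and resample."""
    particles = particles + step + rng.normal(0.0, 0.3, len(particles))
    cells = np.clip(particles.astype(int), 0, len(TERRAIN) - 1)
    w = np.where(TERRAIN[cells] == sensed, P_CORRECT, (1 - P_CORRECT) / 2)
    w = w / w.sum()
    return particles[rng.choice(len(particles), len(particles), p=w)]

rng = np.random.default_rng(0)
particles = rng.uniform(0, len(TERRAIN), 500)   # initially lost: uniform prior
true_pos = 2.0
for _ in range(12):
    true_pos += 1.0                              # robot walks one cell forward
    sensed = TERRAIN[int(true_pos)]              # noise-free sensing for the demo
    particles = smc_step(particles, 1.0, sensed, rng)
estimate = particles.mean()
```

Each contact event multiplies the weight of hypotheses consistent with the felt terrain, so the particle cloud collapses onto the true position once the contact sequence becomes distinctive.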
Visual Inertial Odometry (VIO) is one of the most established state estimation methods for mobile platforms. However, when visual tracking fails, VIO algorithms quickly diverge due to rapid error accumulation during inertial data integration. This error is typically modeled as a combination of additive Gaussian noise and a slowly changing bias which evolves as a random walk. In this work, we propose to train a neural network to learn the true bias evolution. We implement and compare two common sequential deep learning architectures: LSTMs and Transformers. Our approach follows from recent learning-based inertial estimators, but, instead of learning a motion model, we target IMU bias explicitly, which allows us to generalize to locomotion patterns unseen in training. We show that our proposed method improves state estimation in visually challenging situations across a wide range of motions by quadrupedal robots, walking humans, and drones. Our experiments show an average 15% reduction in drift rate, with much larger reductions when there is total vision failure. Importantly, we also demonstrate that models trained with one locomotion pattern (human walking) can be applied to another (quadruped robot trotting) without retraining.
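The effect that motivates this work, namely that unmodeled bias makes double-integrated IMU data drift rapidly, can be reproduced in a few lines (all noise magnitudes are hypothetical, and the ground-truth bias plays the role that the learned network approximates):

```python
import numpy as np

def double_integrate(accel, bias_est, dt):
    """Double-integrate bias-corrected accelerometer readings to obtain
    position (1-D, zero initial velocity and position)."""
    v = np.cumsum(accel - bias_est) * dt
    return np.cumsum(v) * dt

rng = np.random.default_rng(1)
dt, n = 0.01, 5000                           # 50 s of data at 100 Hz
bias = np.cumsum(rng.normal(0.0, 1e-4, n))   # random-walk bias (m/s^2)
noise = rng.normal(0.0, 1e-3, n)             # additive white noise
meas = bias + noise                          # the robot is actually at rest

drift_raw = abs(double_integrate(meas, 0.0, dt)[-1])
drift_corrected = abs(double_integrate(meas, bias, dt)[-1])
```

With a perfect bias estimate the residual drift comes only from the white noise and is far smaller, which is why predicting the bias evolution, rather than the full motion, pays off when vision fails.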
Recent studies have shown that haptic sensing can be used effectively for legged robot localization in extreme scenarios where vision sensors might fail, such as mines and sewers. However, existing methods use supervised classification, with training and evaluation executed over explicit terrain classes. This is a significant limitation in real-world applications, where prior labeling and handcrafted classes are often impractical. In this paper, we propose a novel haptic localization system based on a fully unsupervised terrain representation learned solely from the force/torque sensors located at the quadruped robot's feet. Instead of using a detected terrain class for localization, we propose an improved autoencoder architecture that generates a sparse map on the first run and localizes against it during subsequent runs. We compare our approach to a haptic localization system based on supervised terrain classification, showing that the unsupervised method performs comparably or better on the same trajectories while clearly outperforming the robot's onboard proprioceptive odometry estimator. The proposed approach is therefore well suited to routine maintenance applications, increasing the robustness of the platform.
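The map-then-match scheme can be sketched with a tied-weight linear autoencoder standing in for the paper's deep model; the "haptic signatures" below are entirely synthetic, and the dimensions and noise levels are hypothetical.

```python
import numpy as np

def train_linear_ae(X, k=2, lr=0.003, n_iters=800, seed=0):
    """Tied-weight linear autoencoder (encode z = xW, decode x' = zW^T),
    trained by gradient descent on the reconstruction error ||XWW^T - X||^2.
    A linear stand-in for a deep autoencoder."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 0.1, (X.shape[1], k))
    for _ in range(n_iters):
        E = X @ W @ W.T - X                          # reconstruction residual
        W -= lr * 2.0 * (X.T @ E @ W + E.T @ X @ W)  # gradient of the loss
    return W

# Ten locations with distinct synthetic 5-D haptic signatures: two
# informative dimensions plus three near-zero nuisance dimensions.
rng = np.random.default_rng(3)
angles = np.linspace(0.0, 2.0 * np.pi, 10, endpoint=False)
sigs = np.zeros((10, 5))
sigs[:, 0], sigs[:, 1] = 2.0 * np.cos(angles), 2.0 * np.sin(angles)
sigs[:, 2:] = rng.normal(0.0, 0.05, (10, 3))

W = train_linear_ae(sigs)
sparse_map = sigs @ W                        # latent codes stored on run 1

# Run 2: noisy re-observations; localize by nearest latent code.
queries = sigs + rng.normal(0.0, 0.1, sigs.shape)
codes = queries @ W
pred = np.argmin(np.linalg.norm(codes[:, None, :] - sparse_map[None, :, :],
                                axis=2), axis=1)
accuracy = np.mean(pred == np.arange(10))
```

The point of the learned compression is that latent distances remain discriminative between locations while being robust to per-step sensing noise, so no terrain labels are ever needed.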