In order to achieve real autonomy, robots have to be able to navigate in completely unknown environments. Due to the complexity of computer vision algorithms, almost every approach to robotic navigation either relies on prior knowledge of the environment, such as markers or maps obtained from learning methods, or makes strong simplifying assumptions about it (height-map representations, static scenarios). While showing impressive success in certain applications, these approaches limit the potential of legged robots to achieve the remarkable flexibility of humans in more complex environments. In this work, we present a strategy for full 3D vision processing that is able to handle changing, dynamic environments. These are modeled using 3D geometries that are processed in real time by the motion planner of our biped robot Lola to avoid moving obstacles and walk over platforms. In order to allow for a more intuitive development of such systems in the future, we present visualization tools, including two mixed reality applications using an external camera and Microsoft's HoloLens. We validate our system in simulations and experiments with our full-size humanoid robot Lola and release our framework as open source for the benefit of the community.
This paper presents our latest findings in planning a dynamically and kinematically feasible center of mass motion for bipedal walking robots. We use a simplified robot model to incorporate multi-body dynamics and kinematic limits while still meeting hard real-time requirements. The vertical center of mass motion is obtained through interpolation of a quintic spline whose control points are projected onto the kinematically feasible region. Subsequently, the horizontal motion is computed from the multi-body dynamics, which we approximate by solving an overdetermined boundary value problem via spline collocation based on quintic polynomials. The proposed algorithm improves on our previous method, which used a parametric torso height optimization for the vertical and cubic spline collocation for the horizontal components. The novel center of mass motion improves stability, especially when stepping up and down platforms. Moreover, the new method leads to a less complex overall algorithm, since it removes the need for manually tuned parameters and strongly simplifies the incorporation of boundary values. Lastly, the new approach is more efficient, which leads to a significantly reduced total runtime. The proposed method is validated in simulations and experiments on our humanoid robot platform, LOLA.
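The collocation idea in this abstract can be illustrated with a minimal sketch. Assuming a constant-height linear inverted pendulum ZMP model p = x - (z_c/g)·ẍ (a simplification; the paper uses full multi-body dynamics), a single quintic polynomial for x(t) is fit in a least-squares sense to many ZMP collocation points plus heavily weighted boundary conditions, yielding an overdetermined problem like the one described. All numerical values are illustrative assumptions:

```python
import numpy as np

g, z_c = 9.81, 0.85          # gravity, assumed constant CoM height [m]
T = 0.8                      # assumed step duration [s]
t = np.linspace(0.0, T, 50)  # collocation points

# Desired ZMP: illustrative ramp shifting 0 -> 0.2 m over the step
p_des = 0.2 * t / T

# Quintic ansatz x(t) = sum_k a_k t^k; ZMP model: p = x - (z_c/g) * x''
A_pos = np.vander(t, 6, increasing=True)   # rows: [1, t, ..., t^5]
A_acc = np.zeros_like(A_pos)
for k in range(2, 6):
    A_acc[:, k] = k * (k - 1) * t ** (k - 2)
A = A_pos - (z_c / g) * A_acc              # maps coefficients to ZMP values

# Boundary conditions: x(0)=0, x'(0)=0, x(T)=0.2, x'(T)=0
B = np.zeros((4, 6))
B[0] = A_pos[0]
B[1, 1] = 1.0
B[2] = A_pos[-1]
B[3] = [0.0, 1.0, 2*T, 3*T**2, 4*T**3, 5*T**4]
b = np.array([0.0, 0.0, 0.2, 0.0])

# Stack ZMP residuals and weighted boundary rows -> overdetermined least squares
W = 1e3
coeffs, *_ = np.linalg.lstsq(np.vstack([A, W * B]),
                             np.concatenate([p_des, W * b]), rcond=None)
x = A_pos @ coeffs  # resulting horizontal CoM trajectory at the collocation points
```

The heavy weight W makes the boundary values nearly exact while the ZMP is matched in a least-squares sense, mirroring how an overdetermined collocation problem trades off dynamics residuals against boundary constraints.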
Bipedal robots can be better alternatives to other robots in certain applications, but their full potential can only be realized if their entire kinematic range is cleverly exploited. Generating motions that are not only dynamically feasible but also respect the kinematic limits as well as collisions in real time is one of the main challenges towards that goal. We present an approach to generate adaptable torso height trajectories that exploit the full kinematic range in bipedal locomotion. A simplified 2D model approximates the robot's full kinematic model for multiple steps ahead. It is used to optimize the torso height trajectories while taking the kinematics of future motions into account. The method significantly improves the robot's motion not only while walking on uneven terrain, but also during normal walking. Furthermore, we integrated the method into our framework for autonomous walking and validated its real-time capability in experiments.
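The kinematic limit that such a torso height trajectory must respect can be sketched with a planar leg model: with a maximum hip-to-ankle leg length L and horizontal hip-ankle distance d, the hip can be at most sqrt(L² - d²) above the ankle. This is only an illustrative stand-in for the paper's simplified 2D model, and the leg length value is an assumption:

```python
import numpy as np

L_LEG = 0.86  # assumed maximum hip-to-ankle leg length [m] (illustrative)

def max_torso_height(hip_ankle_dist):
    """Kinematic height limit from a planar leg model: the hip can be at
    most sqrt(L^2 - d^2) above the ankle for horizontal distance d.
    Distances beyond the leg length are clamped to keep the sqrt real."""
    d = np.minimum(np.abs(hip_ankle_dist), L_LEG)
    return np.sqrt(L_LEG**2 - d**2)
```

A planner could evaluate this bound at sampled times over the next few steps and project the torso height spline below it, which matches the abstract's idea of optimizing height trajectories against a look-ahead kinematic model.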
Traversing uneven terrain with unexpected changes in ground height still poses a major challenge to walking stabilization of humanoid robots. A common approach to balancing a biped in such situations is the control of the ground reaction forces at the feet. However, existing solutions for this direct force control scheme do not allow changing contact areas to be integrated. Therefore, we propose an explicit formulation of the contact model in task space. Furthermore, the dynamics of the center of mass are not considered in existing force control approaches. In this work, we present a method to realize contact forces by accelerating the center of mass within the force controller. We show the validity of our explicit contact model in simulation and real-world experiments with our humanoid robot LOLA. The integration of center of mass dynamics significantly reduces upper-body inclination angles in a late-contact experiment with a 5.5 cm change in ground height. We consider our contact model a starting point for the future integration of sensor-based contact information.
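The core relation behind realizing contact forces through center of mass acceleration follows from Newton's law for the vertical direction, m·(a + g) = f_z: commanding a CoM acceleration a = f_z/m - g produces the desired total vertical contact force. A minimal sketch, with an assumed robot mass (not LOLA's actual value):

```python
M_ROBOT, G = 60.0, 9.81  # assumed total mass [kg] and gravity [m/s^2]

def com_accel_for_force(f_z_des):
    """Vertical CoM acceleration that realizes a desired total vertical
    contact force f_z_des via m*(a + g) = f_z (illustrative sketch;
    the paper embeds this idea inside a task-space force controller)."""
    return f_z_des / M_ROBOT - G
```

For f_z equal to the robot's weight the commanded acceleration is zero; a desired force below the weight (e.g. during a late contact, to avoid slamming the foot down) yields a downward CoM acceleration.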
The design of humanoid robots naturally requires the simultaneous control of a high number of joints. Moreover, the performance of the overall robot is strongly determined by the low-level control system, as all high-level software, e.g., for locomotion planning and control, is built on top of it. In order to achieve high update rates and high bandwidth for the joint control, an advanced real-time control system architecture is required. However, outdated communication protocols with associated limits on the achievable update rates are still used in today's humanoid robots. Moreover, the performance of the low-level control systems is not analyzed in detail, or the systems rely on specialized hardware, which lacks reliability and persistence. We present a reliable, high-performance control system architecture for humanoid robots based on the EtherCAT technology. To the authors' knowledge, this is the only such system that operates at control rates beyond 2 kHz with input/output latencies below 1 ms. Furthermore, we present a novel learning-based feedforward control strategy to improve joint tracking performance. This improved joint control method and the communication system are evaluated on our humanoid robot LOLA. Our software framework is available online to allow other researchers to benefit from our experiences.
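A common way to realize a learning-based feedforward term for periodic joint trajectories is an iterative learning control (ILC) update: after each cycle, the stored feedforward signal is corrected with the recorded tracking error. The sketch below shows this generic idea only; the gain, the absence of filtering, and the update law itself are illustrative assumptions, not the paper's actual strategy:

```python
import numpy as np

class IterativeFeedforward:
    """Minimal sketch of a learned feedforward signal for one joint.

    After each repetition of a periodic trajectory, update() folds the
    recorded tracking error into the stored feedforward command
    (plain P-type ILC; real implementations typically add low-pass
    filtering and saturation for robustness)."""

    def __init__(self, n_samples, gain=0.5):
        self.u_ff = np.zeros(n_samples)  # feedforward over one cycle
        self.gain = gain                 # illustrative learning gain

    def update(self, error):
        # error[k] = q_des[k] - q_meas[k], recorded over the last cycle
        self.u_ff = self.u_ff + self.gain * np.asarray(error, dtype=float)
        return self.u_ff
```

Repeated cycles with a persistent error drive the feedforward toward the value that cancels it, which is why such schemes improve tracking without raising feedback gains.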
Classic biped walking controllers assume a perfectly flat, rigid surface on which the robot walks. While walking over unknown terrain, robots need to sense and estimate the ground location. Errors in this estimate result in an unexpectedly early or late ground contact of the swing foot. In this paper, we analyze how these errors affect walking stability. Based on simulation results, we propose a strategy that mitigates this effect. We show that if the ground height has an associated uncertainty, an overestimation of its value results in a more stable walk. This overestimation depends on both the sensor data and the robot's dynamics. By using a reduced robot model, our strategy can be integrated into the real-time control to make the robot more robust against perception errors and irregular surfaces.
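The proposed bias can be sketched in one line: shift the estimated ground height upward in proportion to its uncertainty, so an early contact (which the abstract identifies as the more benign case) becomes more likely than a late one. The constant gain here is a placeholder assumption; per the paper, the overestimation would depend on sensor data and the robot's dynamics:

```python
def commanded_ground_height(z_est, sigma, k=1.0):
    """Bias the ground-height estimate z_est upward by k standard
    deviations sigma before handing it to the swing-foot planner
    (illustrative sketch; k constant here, dynamics-dependent in the paper)."""
    return z_est + k * sigma
```

With a 2 cm standard deviation on the height estimate, for example, the swing foot would be targeted 2 cm above the estimated surface, trading a slightly firmer early touchdown for robustness against a destabilizing late contact.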