We propose a general and practical planning framework for generating 3-D collision-free motions that take complex robot dynamics into account. The framework consists of two stages that are applied iteratively. In the first stage, a collision-free path is obtained through efficient geometric and kinematic sampling-based motion planning. In the second stage, the path is transformed into dynamically executable robot trajectories by dedicated dynamic motion generators. In the proposed iterative method, the resulting dynamic trajectories are fed back to the first stage to check for collisions. Depending on the application, temporal or spatial reshaping methods are used to resolve detected collisions: temporal reshaping adjusts the velocity along the path, whereas spatial reshaping deforms the path itself. We demonstrate the effectiveness of the proposed method through examples of a space manipulator with highly nonlinear dynamics and a humanoid robot executing dynamic manipulation and locomotion at the same time.
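The iterative two-stage scheme this abstract describes can be illustrated with a toy 1-D sketch. All names and the crude "overshoot" dynamics below are invented for illustration; they are not the authors' implementation. A geometric path is converted into a dynamic trajectory whose dynamics-induced overshoot may cause collisions; the trajectory is checked, and temporal reshaping (slowing down) is applied until execution is collision-free:

```python
# Toy illustration of the iterative two-stage planning loop (assumed
# structure, not the authors' code): geometric path -> dynamic trajectory
# -> collision check -> temporal reshaping, repeated until collision-free.

def dynamic_trajectory(path, speed):
    # Crude dynamics model: faster motion overshoots each waypoint more.
    return [x + 0.2 * speed for x in path]

def in_collision(traj, obstacle_lo=2.05):
    # Obstacle occupies [2.05, inf) in this 1-D world.
    return any(x >= obstacle_lo for x in traj)

def plan(path, speed=1.0, max_iters=20):
    for _ in range(max_iters):
        traj = dynamic_trajectory(path, speed)   # stage 2: dynamic generator
        if not in_collision(traj):               # feedback to stage 1 checker
            return traj, speed
        speed *= 0.5                             # temporal reshaping: slow down
    return None, speed                           # would need spatial reshaping

traj, speed = plan([0.0, 1.0, 1.9])
```

At full speed the last waypoint overshoots into the obstacle; one halving of the speed suffices here. Spatial reshaping would instead deform the waypoints themselves when slowing down cannot help.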
This paper presents an approach to automatically compute animations for virtual (human-like and robot) characters cooperating to move bulky objects in cluttered environments. The main challenge is to deal with 3D collision avoidance while preserving the believability of the agents' behaviors. To accomplish the coordinated task, a geometric and kinematic decoupling of the system is proposed. This decomposition enables us to plan a collision-free path for a reduced system, then to animate locomotion and grasping behaviors independently, and finally to automatically tune the animation to avoid residual collisions. These three steps are applied consecutively to synthesize an animation. The techniques used, such as probabilistic path planning, locomotion controllers, inverse kinematics, and path planning for closed kinematic chains, are explained, and the way to integrate them into a single scheme is described.
In this work, we propose a landmark-based navigation approach that integrates (1) high-level motion planning capabilities that take into account the positions and visibility of the landmarks and (2) a stack of feasible visual servoing tasks based on footprints to follow. The path planner computes a collision-free path that considers sensory, geometric, and kinematic constraints that are specific to humanoid robots. Based on recent results in movement neuroscience suggesting that most humans exhibit nonholonomic constraints when walking in open spaces, the humanoid steering behavior is modeled as a differential-drive wheeled robot (DDR). The obtained paths are composed of geometric primitives that are the shortest paths in free space. The footprints around the path and the positions of the landmarks to which the gaze must be directed are used within a stack-of-tasks (SoT) framework to compute the whole-body motion of the humanoid. We present experiments that verify the effectiveness of the proposed strategy on the HRP-2 platform.
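The differential-drive (unicycle) abstraction used to model the humanoid's steering behavior can be sketched as follows. The Euler integration step and parameter values are illustrative assumptions, not the paper's planner:

```python
import math

# Minimal differential-drive (unicycle) kinematic model, as assumed by the
# nonholonomic steering abstraction: the robot moves only along its heading.
# step() is a simple Euler integration for illustration.

def step(x, y, theta, v, omega, dt):
    # v: forward velocity, omega: turning rate; lateral motion is impossible,
    # which is the nonholonomic constraint.
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# Drive straight along +x for 1 s at 1 m/s.
x, y, th = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, th = step(x, y, th, v=1.0, omega=0.0, dt=0.01)
```

Shortest-distance DDR paths in free space are concatenations of such straight segments and turns, which is why the planner's output reduces to a small set of geometric primitives.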
Aiming at building versatile humanoid systems, we present in this paper the real-time implementation of behaviors that integrate walking and vision to achieve general functionalities. This paper describes how real-time (or high-bandwidth) cognitive processes can be obtained by combining vision with walking. The central point of our methodology is to use appropriate models to reduce the complexity of the search space. We describe the models introduced in the different blocks of the system and their relationships: walking pattern generation, self-localization and map building, real-time reactive vision behaviors, and planning.
This paper proposes a novel visual servoing approach to control the dynamic walk of a humanoid robot. Online visual information is given by an on-board camera and is used to drive the robot towards a specific goal. Our work is built upon a recent reactive pattern generator that makes use of model predictive control (MPC) to modify footsteps, center-of-mass, and center-of-pressure trajectories to track a reference velocity. The contribution of the paper is to formulate the MPC problem considering visual feedback. We compare our approach with a scheme decoupling visual servoing and walking gait generation. Such a decoupled scheme first computes a reference velocity from visual servoing and then feeds that reference velocity to the pattern generator as input. Our MPC-based approach avoids a number of limitations that appear in decoupled methods. In particular, visual constraints can be introduced directly inside the locomotion controller, and camera motions do not have to be accounted for separately. Both approaches are compared numerically and validated in simulation. Our MPC method shows faster convergence.
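The idea of folding the tracking error directly into the MPC problem, rather than first converting it into a reference velocity, can be illustrated with a schematic 1-D receding-horizon example. The dynamics, horizon, and cost weights below are invented for the sketch and are far simpler than the paper's walking pattern generator:

```python
import numpy as np

# Schematic 1-D receding-horizon controller: the (visual) tracking error
# enters the MPC cost directly instead of being pre-converted to a
# reference velocity. Toy dynamics: x_{k+1} = x_k + dt * u_k.

def mpc_step(x0, target, N=5, dt=0.1, reg=0.01):
    # Minimize sum_k (x_k - target)^2 + reg * ||u||^2 over the horizon.
    A = dt * np.tril(np.ones((N, N)))        # maps control sequence to positions
    b = (target - x0) * np.ones(N)
    u = np.linalg.solve(A.T @ A + reg * np.eye(N), A.T @ b)
    return u[0]                              # apply only the first control

# Closed loop: re-solve at every step with the updated state.
x = 0.0
for _ in range(50):
    x += 0.1 * mpc_step(x, target=1.0)
```

Because the error term sits inside the optimization, constraints on it (e.g. field-of-view limits in the paper's setting) could be imposed directly on the decision variables, which is not possible once the error has been collapsed into a single reference velocity.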