Abstract-In this paper, we present a system that enables humanoid robots to imitate complex whole-body motions of humans in real time. In our approach, we use a compact human model and consider the positions of the end-effectors as well as the center of mass as the most important aspects to imitate. Our system actively balances the center of mass over the support polygon to avoid falls of the robot that would occur under direct imitation. For every point in time, our approach generates a statically stable pose. In doing so, we do not constrain the configurations to double support; instead, we allow the support mode to change according to the motions to imitate. To achieve safe imitation, we retarget the robot's feet if necessary and find statically stable configurations by inverse kinematics. We present experiments using human data captured with an Xsens MVN motion capture system. The results show that a Nao humanoid is able to reliably imitate complex whole-body motions in real time, including extended periods in single support mode, in which the robot has to balance on one foot.
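Below is a minimal sketch, in Python, of the static-stability test described in this abstract: a pose counts as stable if the ground projection of the center of mass (CoM) lies inside the support polygon, i.e., the convex hull of the foot contact points. The function names, the margin parameter, and the use of scipy's ConvexHull are illustrative assumptions, not the system's actual implementation.

```python
import numpy as np
from scipy.spatial import ConvexHull

def support_polygon(contact_points_2d):
    """Convex hull of the foot contact points projected onto the ground.

    Expects an (N, 2) array; returns the hull vertices in
    counter-clockwise order.
    """
    hull = ConvexHull(contact_points_2d)
    return contact_points_2d[hull.vertices]

def com_is_stable(com_2d, polygon, margin=0.0):
    """True if the CoM projection lies inside the convex support polygon.

    A positive margin shrinks the polygon, adding a safety buffer against
    modeling errors. The test checks that the point lies on the inner side
    of every polygon edge (vertices assumed counter-clockwise).
    """
    n = len(polygon)
    for i in range(n):
        a, b = polygon[i], polygon[(i + 1) % n]
        edge = b - a
        normal = np.array([-edge[1], edge[0]])   # inward normal for CCW order
        normal /= np.linalg.norm(normal)
        if np.dot(com_2d - a, normal) < margin:
            return False
    return True
```

During single support, the polygon shrinks to the outline of one foot, which is why the approach described above retargets the feet and re-solves inverse kinematics before committing to a pose.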
Abstract-Humanoid service robots performing complex object manipulation tasks need to plan whole-body motions that satisfy a variety of constraints: the robot must keep its balance, self-collisions and collisions with obstacles in the environment must be avoided, and, if applicable, the trajectory of the end-effector must follow the constrained motion of a manipulated object in Cartesian space. These constraints and the high number of degrees of freedom make whole-body motion planning for humanoids a challenging problem. In this paper, we present an approach to whole-body motion planning with a focus on the manipulation of articulated objects such as doors and drawers. Our approach is based on rapidly-exploring random trees (RRTs) in combination with inverse kinematics and considers all required constraints during the search. Models of articulated objects thereby generate hand poses for sampled configurations along the trajectory of the object handle. We thoroughly evaluated our planning system and present experiments with a Nao humanoid opening a drawer, opening a door, and picking up an object. The experiments demonstrate the ability of our framework to generate solutions to complex planning problems and show that these plans can be reliably executed even on a low-cost humanoid platform.
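To make the search concrete, the following sketch outlines such a constrained RRT loop in Python. The inverse-kinematics projection, the validity test (balance, self-collisions, obstacles), and the articulated-object model are passed in as callables; all of these names are hypothetical stand-ins, not the authors' actual interfaces.

```python
import numpy as np

def rrt_constrained(q_start, handle_pose_for, ik_project, is_valid,
                    goal_reached, n_iters=5000, step=0.05, n_joints=5):
    """Hedged sketch: RRT over joint space with IK projection onto the
    handle trajectory of an articulated object (e.g., a door or drawer)."""
    nodes = [np.asarray(q_start, dtype=float)]
    parents = [-1]                                 # index of each node's parent
    for _ in range(n_iters):
        q_rand = np.random.uniform(-np.pi, np.pi, n_joints)
        i_near = int(np.argmin([np.linalg.norm(q - q_rand) for q in nodes]))
        q_near = nodes[i_near]
        direction = q_rand - q_near
        q_new = q_near + step * direction / (np.linalg.norm(direction) + 1e-9)
        # Project the step onto the constraint manifold: the object model
        # yields the nearest handle pose, and IK pulls the hand onto it.
        q_proj = ik_project(q_new, handle_pose_for(q_new))
        if q_proj is None or not is_valid(q_proj):
            continue                               # unstable, colliding, or IK failed
        nodes.append(q_proj)
        parents.append(i_near)
        if goal_reached(q_proj):
            # Walk back through the parent indices to extract the plan.
            path, i = [], len(nodes) - 1
            while i != -1:
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None
```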
As autonomous service robots become more affordable and thus available to the general public, there is a growing need for user-friendly interfaces to control them. Currently available control modalities typically expect users to express their intentions through touch, speech, or gesture commands. While most users can meet this requirement, paralyzed users may not be able to use such systems. In this paper, we present a novel framework that allows these users to interact with a robotic service assistant in a closed-loop fashion, using only thoughts. The brain-computer interface (BCI) system is composed of several interacting components: non-invasive neuronal signal recording and decoding, high-level task planning, motion and manipulation planning, as well as environment perception. In various experiments, we demonstrate its applicability and robustness in real-world scenarios, considering fetch-and-carry tasks and tasks involving human-robot interaction. As our results demonstrate, our system is capable of adapting to frequent changes in the environment and reliably completes given tasks within a reasonable amount of time. Combined with high-level planning and autonomous robotic systems, interesting new perspectives open up for non-invasive BCI-based human-robot interaction.
Although the neurological impairments of Parkinson's disease (PD) patients are well known to be accompanied by motor control deficits, e.g., tremor, rigidity, and reduced movement, little is known about the motor control parameters affected by the disease. In this paper, we therefore present a novel approach to human motion analysis using motor control strategies with joint weight parameterization. We record the motions of healthy subjects and PD patients performing a hand coordination task with the whole-body Xsens MVN motion capture system. For our motion strategy analysis, we then follow a two-step approach. First, we reduce complexity by mapping the recorded human motions to a simplified kinematic model of the upper body. Second, we reproduce the recorded motions using a Jacobian weighted damped least squares controller with adaptive joint weights. We developed a method to iteratively learn the joint weights of the controller with the mapped human joint trajectories as reference input. Finally, we use the learned joint weights for a quantitative comparison between the motion control strategies of healthy subjects and PD patients. Contrary to what clinical experience would suggest, we found that the joint weights are almost evenly distributed along the arm in the PD group. In contrast, the proximal joint weights of the healthy subjects are notably larger than the distal ones.
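For reference, one step of a weighted damped least squares controller of the kind named above can be written compactly. The sketch below assumes a diagonal weight matrix W in which a larger weight penalizes motion of that joint, so the learned weights directly express how much each joint contributes to the strategy; the function name and damping value are illustrative, and the learning of the weights themselves is not shown.

```python
import numpy as np

def weighted_dls_step(J, err, joint_weights, damping=0.1):
    """One step of a Jacobian weighted damped least squares controller.

    Minimizes ||J dq - err||^2 + damping^2 * dq^T W dq, whose closed-form
    solution is
        dq = W^-1 J^T (J W^-1 J^T + damping^2 I)^-1 err,
    so joints with a large weight in the diagonal matrix W move less.
    """
    W_inv = np.diag(1.0 / np.asarray(joint_weights, dtype=float))
    JWJ = J @ W_inv @ J.T
    dq = W_inv @ J.T @ np.linalg.solve(
        JWJ + damping**2 * np.eye(J.shape[0]), err)
    return dq
```

Setting all weights equal recovers ordinary damped least squares; an uneven weight profile, as found for the healthy subjects, shifts motion away from the heavily weighted proximal joints toward the distal ones.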
As autonomous service robots become more affordable and thus available to the general public, there is a growing need for user-friendly interfaces to control these systems. Control interfaces typically become more complicated as the complexity of the robotic tasks and the environment increases. Traditional control modalities such as touch, speech, or gesture commands are not necessarily suited for all users. While non-expert users can make the effort to familiarize themselves with a robotic system, paralyzed users may not be capable of controlling such systems even though they need robotic assistance most. In this paper, we present a novel framework that allows these users to interact with a robotic service assistant in a closed-loop fashion, using only thoughts. The system is composed of several interacting components: non-invasive neuronal signal recording and co-adaptive deep learning, which together form the brain-computer interface (BCI), high-level task planning based on referring expressions, navigation and manipulation planning, as well as environmental perception. We extensively evaluate the BCI in various tasks, determine the performance of the goal formulation user interface, and investigate its intuitiveness in a user study. Furthermore, we demonstrate the applicability and robustness of the system in real-world scenarios, considering fetch-and-carry tasks and tasks involving human-robot interaction. As our results show, the system is capable of adapting to frequent changes in the environment and reliably accomplishes given tasks within a reasonable amount of time. Combined with high-level planning using referring expressions and autonomous robotic systems, interesting new perspectives open up for non-invasive BCI-based human-robot interaction.
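The closed-loop interaction described here can be summarized as a simple control flow: record and decode a neuronal signal, resolve the decoded intent into a goal via a referring expression, plan, execute, and let the user observe the outcome before issuing the next command. The skeleton below is a hypothetical illustration of that loop; the component interfaces (decoder, goal_formulator, planner, robot) stand in for the system's modules and do not reflect its actual API.

```python
def closed_loop_bci(decoder, goal_formulator, planner, robot):
    """Hypothetical skeleton of the closed-loop BCI interaction."""
    while True:
        window = decoder.record()              # non-invasive signal window
        intent = decoder.decode(window)        # co-adaptively trained decoder
        if intent == "stop":
            break
        # Resolve the intent against the perceived scene, e.g., a referring
        # expression such as "the cup on the left table".
        goal = goal_formulator.resolve(intent, robot.perceive())
        for action in planner.plan(goal):      # high-level task plan
            robot.execute(action)              # navigation / manipulation
        # The user observes the outcome, closing the loop for the next command.
```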