In this paper we describe a learning framework for a central pattern generator (CPG)-based biped locomotion controller using a policy gradient method. Our goals in this study are to achieve CPG-based biped walking with a 3D hardware humanoid and to develop an efficient learning algorithm for the CPG by reducing the dimensionality of the state space used for learning. We demonstrate in numerical simulations that an appropriate feedback controller can be acquired within a few thousand trials, and that the controller obtained in simulation achieves stable walking with a physical robot in the real world. We evaluate walking velocity and stability in both numerical simulations and hardware experiments. The results suggest that the learning algorithm is capable of adapting to environmental changes. Furthermore, we present an online learning scheme with an initial policy for a hardware robot to improve the controller within 200 iterations.
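The core idea of policy gradient learning for a feedback controller can be illustrated with a minimal REINFORCE-style sketch. This is not the paper's algorithm: the dynamics, reward, dimensions, and all names below are hypothetical stand-ins (a toy linear plant in place of the CPG-coupled robot), shown only to make the learning loop concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a linear feedback policy maps a sensory state
# (e.g., torso pitch/roll) to a modulation signal for the CPG.
STATE_DIM, ACTION_DIM = 2, 2
theta = np.zeros((ACTION_DIM, STATE_DIM))  # policy parameters
sigma = 0.1                                # exploration noise std

def rollout(theta, T=100):
    """Run one toy trial; return states, actions, and rewards."""
    states, actions, rewards = [], [], []
    s = rng.normal(size=STATE_DIM)
    for _ in range(T):
        mu = theta @ s
        a = mu + sigma * rng.normal(size=ACTION_DIM)  # Gaussian policy
        rewards.append(-np.sum(s**2))  # toy reward: stay near upright
        states.append(s)
        actions.append(a)
        # Toy stable linear dynamics standing in for the robot + CPG
        s = 0.9 * s + 0.1 * a + 0.01 * rng.normal(size=STATE_DIM)
    return states, actions, rewards

def reinforce_update(theta, alpha=1e-4):
    """One REINFORCE step: episodic return times the log-policy gradient."""
    states, actions, rewards = rollout(theta)
    G = sum(rewards)  # episodic return (no baseline, for brevity)
    grad = np.zeros_like(theta)
    for s, a in zip(states, actions):
        mu = theta @ s
        # Gradient of log N(a; mu, sigma^2 I) with respect to theta
        grad += np.outer((a - mu) / sigma**2, s)
    return theta + alpha * G * grad

for _ in range(30):
    theta = reinforce_update(theta)
```

A real implementation would add a baseline to reduce gradient variance and use per-step returns rather than the raw episodic sum, but the update rule has the same shape.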
In this study, we propose a multiuser myoelectric interface that can easily adapt to novel users. When a user performs different motions (e.g., grasping and pinching), different electromyography (EMG) signals are measured. When different users perform the same motion (e.g., grasping), different EMG signals are also measured. Designing a myoelectric interface that can be used by multiple users to perform multiple motions is therefore difficult. To cope with this problem, we propose a bilinear model for EMG signals that is composed of two linear factors: 1) user dependent and 2) motion dependent. By decomposing the EMG signals into these two factors, the extracted motion-dependent factors can be used as user-independent features. We can construct a motion classifier on the extracted feature space to develop the multiuser interface. For novel users, the proposed adaptation method estimates the user-dependent factor through only a few interactions. The bilinear EMG model with the estimated user-dependent factor can then extract the user-independent features from the novel user's data. We applied our proposed method to a recognition task of five hand gestures for robotic hand control using four-channel EMG signals measured from the subjects' forearms. Our method achieved 73% accuracy, which was statistically significantly different from the accuracy of standard non-multiuser interfaces (two-sample t-test at a significance level of 1%).
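The bilinear decomposition idea can be sketched in a few lines. The sketch below is an illustration under simplifying assumptions, not the paper's estimation procedure: it treats an EMG feature matrix as an exact rank-1 product of a user-dependent and a motion-dependent factor and recovers the factors with an SVD; the dimensions and synthetic data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical bilinear model: an EMG feature matrix Y (channels x features)
# is modeled as the outer product of a user-dependent factor u and a
# motion-dependent factor m:  Y ~= outer(u, m)  (rank-1 case, for clarity).
CHANNELS, FEATURES = 4, 6

def decompose(Y):
    """Extract (u, m) from Y via the leading singular pair (rank-1 fit)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    u = U[:, 0] * np.sqrt(s[0])
    m = Vt[0] * np.sqrt(s[0])
    return u, m

# Synthetic data: two users perform the SAME motion, so the observed
# matrices differ (different u) but share the motion factor m.
m_true = rng.normal(size=FEATURES)
u_user1 = rng.normal(size=CHANNELS)
u_user2 = rng.normal(size=CHANNELS)
Y1 = np.outer(u_user1, m_true)
Y2 = np.outer(u_user2, m_true)

_, m1 = decompose(Y1)
_, m2 = decompose(Y2)

# The recovered motion-dependent factors agree up to sign and scale,
# so their absolute cosine similarity is ~1: a user-independent feature.
cos = abs(m1 @ m2) / (np.linalg.norm(m1) * np.linalg.norm(m2))
```

In this idealized rank-1 case the motion factor is identical across users, which is the property that lets a classifier trained on motion-dependent factors generalize to a novel user once that user's factor has been estimated.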
Abstract—This study addresses robotic clothing assistance, which remains an open problem in robotics even though it is one of the basic and important assistive activities in the daily lives of elderly and disabled people. Clothing assistance is a challenging problem because the robot must interact both with non-rigid clothes, which are generally represented in a high-dimensional space, and with an assisted person whose posture can vary during the assistance. The robot must therefore manage two difficulties to perform the task: 1) handling non-rigid materials and 2) adapting the assisting movements to the assisted person's posture. To overcome these difficulties, we propose reinforcement learning in which the cloth's state is represented in low-dimensional topology coordinates and the reward is defined in those coordinates. With our experimental system for T-shirt clothing assistance, which includes an anthropomorphic dual-arm robot and a soft mannequin, we demonstrate that the robot quickly learns a suitable arm motion for putting the mannequin's head into a T-shirt.
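The benefit of defining the reward in a low-dimensional cloth representation can be conveyed with a toy policy-search sketch. This is not the authors' method: the "topology coordinate" below is a hypothetical scalar stand-in (how far the collar has passed over the head), the motion parameterization is invented, and simple hill climbing replaces the actual reinforcement learning algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical parameterization: the arm motion is defined by a few
# via-point parameters; the reward lives in a 1-D cloth coordinate.
N_PARAMS = 3
target = np.array([0.5, -0.2, 0.8])  # toy optimum, unknown to the learner

def topology_coordinate(params):
    """Toy stand-in for the low-dimensional cloth state after executing
    the motion `params` (1.0 would mean the shirt is fully over the head)."""
    return 1.0 - np.linalg.norm(params - target)

def reward(params):
    # The reward is defined directly in the low-dimensional coordinate,
    # not on the full high-dimensional cloth configuration.
    return topology_coordinate(params)

# Simple hill-climbing policy search over the motion parameters
params = np.zeros(N_PARAMS)
best = reward(params)
for _ in range(300):
    cand = params + 0.1 * rng.normal(size=N_PARAMS)
    r = reward(cand)
    if r > best:
        params, best = cand, r
```

Because each trial is scored by a single low-dimensional quantity rather than the full cloth state, the search space the learner must evaluate stays small, which mirrors why the dimensionality reduction makes the real learning problem tractable.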