Abstract - This overview presents computational algorithms for generating 3D object grasps with autonomous multi-fingered robotic hands. Robotic grasping has been an active research subject for decades, and a great deal of effort has been spent on grasp synthesis algorithms. Existing surveys focus on the mechanics of grasping and finger-object contact interactions (Bicchi and Kumar (2000) [12]) or on robot hand design and control (Al-Gallaf et al. (1993) [70]). Robot grasp synthesis algorithms were reviewed in Shimoga (1996) [71], but since then important progress has been made toward applying learning techniques to the grasping problem. This overview focuses on analytical as well as empirical grasp synthesis approaches.
This paper deals with motion planning for robots manipulating movable objects among obstacles. We propose a general manipulation planning approach capable of addressing continuous sets for modeling both the possible grasps and the stable placements of the movable object, rather than the discrete sets generally assumed by previous approaches. The proposed algorithm relies on a topological property that characterizes the existence of solutions in the subspace of configurations where the robot grasps the object placed at a stable position. It allows us to devise a manipulation planner that captures in a probabilistic roadmap the connectivity of sub-dimensional manifolds of the composite configuration space. Experiments conducted with the planner in simulated environments demonstrate its efficacy in solving complex manipulation problems.
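The roadmap construction step can be illustrated with a minimal sketch in a plain 2D configuration space. Note that the paper's planner works in the composite robot-object space and on its sub-dimensional manifolds; the disk obstacle, sample counts, and function names below are illustrative assumptions, not the authors' implementation:

```python
import math
import random

def collision_free(p, q, obstacle=((0.5, 0.5), 0.15), steps=20):
    """Check a straight segment p-q against a single disk obstacle."""
    (cx, cy), r = obstacle
    for i in range(steps + 1):
        t = i / steps
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        if math.hypot(x - cx, y - cy) < r:
            return False
    return True

def build_prm(n_samples=150, k=8, seed=0):
    """Sample configurations in the unit square; connect each node to its
    k nearest neighbors whenever the connecting segment is collision-free."""
    rng = random.Random(seed)
    nodes = [(rng.random(), rng.random()) for _ in range(n_samples)]
    edges = {i: set() for i in range(n_samples)}
    for i, p in enumerate(nodes):
        near = sorted(range(n_samples), key=lambda j: math.dist(p, nodes[j]))[1:k + 1]
        for j in near:
            if collision_free(p, nodes[j]):
                edges[i].add(j)
                edges[j].add(i)
    return nodes, edges

nodes, edges = build_prm()
```

A query would then connect the start and goal configurations to the roadmap and run a graph search over `edges`.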
Upper-limb impairment after stroke is caused by weakness, loss of individual joint control, spasticity, and abnormal synergies. Upper-limb movement frequently involves abnormal, stereotyped, and fixed synergies, likely related to the increased use of sub-cortical networks following the stroke. The flexible coordination of the shoulder and elbow joints is also disrupted. New methods for motor learning, based on the stimulation of activity-dependent neural plasticity, have been developed. These include robots that can adaptively assist active movements and generate many movement repetitions. However, most of these robots only control the movement of the hand in space. The aim of the present text is to analyze the potential of robotic exoskeletons to specifically rehabilitate joint motion, and particularly inter-joint coordination. First, a review of studies on upper-limb coordination in stroke patients is presented and the potential for recovery of coordination is examined. Second, issues relating to the mechanical design of exoskeletons and the transmission of constraints between the robotic and human limbs are discussed. The third section considers the development of different methods to control exoskeletons: existing rehabilitation devices and approaches to the control and rehabilitation of joint coordination are reviewed, along with the preliminary clinical results available. Finally, perspectives and future strategies for the design of control mechanisms for rehabilitation exoskeletons are discussed.
Background: Hand synergies have been extensively studied over the last few decades, and the objectives of such research are numerous. In neuroscience, the aim is to improve the understanding of motor control and of its ability to reduce control dimensionality. In applied fields such as robotics, the aim is to build biomimetic hand structures; in prosthetics, it is to design better-performing underactuated replacement hands. Nevertheless, most of the synergy schemes identified to date have been obtained from grasping experiments performed with a single (generally dominant) hand on objects placed in a given position and orientation in space. Aiming to identify more generic synergies, we conducted similar experiments on postural synergy identification during bimanual manipulation of various objects, in order to eliminate factors due to the extrinsic spatial position of the objects. Methods: Ten healthy naive subjects were asked to perform a selected "grasp-give-receive" task with both hands using 9 objects. Subjects wore a CyberGlove® on each hand, allowing measurement of the joint posture (15 degrees of freedom) of each hand. Postural synergies were then evaluated through Principal Component Analysis (PCA). Matches between the identified principal components and the human hand joints were analyzed using the correlation matrix. Finally, statistical analysis was performed on the data to evaluate the effect of specific variables on the hand synergies: object shape, hand side (i.e., laterality), and role (giving or receiving hand). Results: The results on PCs are consistent with previous literature showing that a few principal components may be sufficient to describe a large variety of grasps. Nevertheless, some simple and strong correlations between PCs and clearly identified sets of hand joints were obtained in this study.
In addition, these groupings of DoFs correspond to well-defined anatomo-functional finger joints organized by muscle groups. Moreover, despite our protocol encouraging symmetric grasping, some right-left side differences were observed. Conclusion: The set of synergies identified here should be more representative of hand synergies in general, since they are based on the motion of both hands. Preliminary results, which warrant further investigation, also highlight the influence of hand dominance and side. Thanks to their strong correlation with anatomo-functional joints, these synergies could be used to design underactuated robotic hands.
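The PCA pipeline described above can be sketched as follows, assuming NumPy and a synthetic stand-in for the glove recordings (200 postures of a 15-DoF hand driven by two latent synergies; all sizes and names are illustrative, not the study's data):

```python
import numpy as np

def extract_synergies(postures, n_components=2):
    """PCA on joint-angle postures: rows are grasps, columns are joint DoFs.
    Returns the leading principal components and their explained-variance ratios."""
    centered = postures - postures.mean(axis=0)
    # SVD of the centered data: rows of vt are the principal directions
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    return vt[:n_components], explained[:n_components]

# Synthetic example: postures generated from 2 latent synergies plus small noise
rng = np.random.default_rng(0)
synergies = rng.standard_normal((2, 15))
activations = rng.standard_normal((200, 2))
postures = activations @ synergies + 0.01 * rng.standard_normal((200, 15))
pcs, explained = extract_synergies(postures)
```

With data that is genuinely low-dimensional, the first few components capture nearly all of the variance, which is the effect the abstract describes for real grasping postures.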
The aim of this article was to explore how an upper-limb exoskeleton can be programmed to impose specific joint coordination patterns during rehabilitation. Based on a rationale that emphasizes the importance of the quality of movement coordination in the motor relearning process, a robot controller was developed with the aim of reproducing the individual corrections imposed by a physical therapist on a hemiparetic patient during pointing movements. The approach exploits a description of the joint synergies using Principal Component Analysis (PCA) on joint velocities. This mathematical tool is used both to characterize the patient's movements, with or without the assistance of a physical therapist, and to program the exoskeleton during active-assisted exercises. An original feature of this controller is that the hand trajectory is not imposed on the patient: only the coordination law is modified. Experiments with hemiparetic patients using this new active-assisted mode were conducted. The results obtained demonstrate that the desired inter-joint coordination was successfully enforced, without significantly modifying the trajectory of the end point.
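One way to sketch the idea of modifying only the coordination law is a least-squares projection of measured joint velocities onto a desired synergy subspace. This is a schematic illustration, not the controller from the article; the 4-DoF arm and the synergy vector are invented for the example:

```python
import numpy as np

def enforce_coordination(qdot, B):
    """Project measured joint velocities qdot onto the span of the desired
    synergy vectors (columns of B), removing out-of-synergy components."""
    # Least-squares projection: B (B^T B)^-1 B^T qdot
    coeffs, *_ = np.linalg.lstsq(B, qdot, rcond=None)
    return B @ coeffs

# Hypothetical 4-DoF arm with one desired shoulder-elbow coupling pattern
B = np.array([[1.0], [0.5], [0.0], [0.0]])   # illustrative synergy vector
qdot = np.array([0.8, 0.1, 0.3, -0.2])       # measured joint velocities
qdot_ref = enforce_coordination(qdot, B)
```

The projected velocity `qdot_ref` follows the desired coordination pattern while staying as close as possible (in the least-squares sense) to what the patient actually produced.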
Abstract - In this paper, we propose a new method for the motion planning problem of rigid-object dexterous manipulation with a multi-fingered robotic hand, under a quasi-static movement assumption. The method computes object and finger trajectories as well as the finger relocation sequence. Its specificity is a special structuring of the search space that allows paths to be searched directly in the particular subspace GSn, the subspace of all grasps that can be achieved with n grasping fingers. Solving the dexterous manipulation planning problem is based on the exploration of this subspace. The proposed approach captures the connectivity of GSn in a graph structure; the answer to a manipulation planning query is then given by searching for a path in the computed graph. Simulation experiments were conducted on different dexterous manipulation task examples to validate the proposed method.
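The graph capture of grasp-subspace connectivity can be caricatured with a discrete toy, in which a grasp is a set of finger-contact assignments and an edge corresponds to relocating a single finger. The actual planner works with continuous object and finger trajectories in GSn; this reduction to discrete contacts is purely an assumption for illustration:

```python
from collections import deque

def relocation_sequence(grasps, start, goal):
    """Breadth-first search over a grasp graph: nodes are grasps
    (frozensets of (finger, contact) pairs) and two grasps are adjacent
    when they differ by relocating exactly one finger."""
    def adjacent(g1, g2):
        diff = g1 ^ g2          # symmetric difference of assignments
        if len(diff) != 2:
            return False
        (f1, _), (f2, _) = diff
        return f1 == f2         # the two differing pairs involve the same finger

    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for g in grasps:
            if g not in seen and adjacent(path[-1], g):
                seen.add(g)
                queue.append(path + [g])
    return None

# Toy example: two fingers, four contact sites a-d
g0 = frozenset({(1, "a"), (2, "b")})
g1 = frozenset({(1, "c"), (2, "b")})
g2 = frozenset({(1, "c"), (2, "d")})
path = relocation_sequence([g0, g1, g2], g0, g2)
```

The query answer is the sequence of grasps, each reachable from the previous one by moving a single finger while the others maintain the hold.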
This paper proposes a novel strategy for grasping unknown 3D objects in accordance with their corresponding task. We define the handle, or natural grasping component, of an object as the part that humans choose to pick the object up by. When humans reach out to grasp an object, it is generally with the aim of accomplishing a task; the chosen grasp is thus closely related to the object's task. Our approach learns to identify object handles by imitating humans. In this paper, a new sufficient condition for computing force-closure grasps on the obtained handle is also proposed. Several experiments were conducted to test the ability of the algorithm to generalize to new objects; they also show the adaptability of our strategy to the hand kinematics.
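The paper's sufficient condition applies to grasps on the identified handle and is not reproduced here. As background, the classic planar two-finger condition, in the spirit of Nguyen's result (a frictional antipodal grasp achieves force closure when the segment joining the contacts lies inside both friction cones), can be sketched as:

```python
import math

def two_finger_force_closure(p1, n1, p2, n2, mu):
    """Sufficient 2D test: the grasp with contacts p1, p2, inward unit
    normals n1, n2, and friction coefficient mu is force-closure if the
    segment joining the contacts lies strictly inside both friction cones."""
    half_angle = math.atan(mu)
    def inside_cone(p_from, p_to, normal):
        dx, dy = p_to[0] - p_from[0], p_to[1] - p_from[1]
        norm = math.hypot(dx, dy)
        cos_a = (dx * normal[0] + dy * normal[1]) / norm
        cos_a = max(-1.0, min(1.0, cos_a))
        return math.acos(cos_a) < half_angle
    return inside_cone(p1, p2, n1) and inside_cone(p2, p1, n2)

# Antipodal grasp on a unit disk: the contact line is aligned with both normals
ok = two_finger_force_closure((-1.0, 0.0), (1.0, 0.0), (1.0, 0.0), (-1.0, 0.0), 0.3)
# Misaligned contacts: the line leaves the friction cone, so closure fails
bad = two_finger_force_closure((-1.0, 0.0), (1.0, 0.0), (1.0, 0.8), (-1.0, 0.0), 0.3)
```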
Abstract - A grasp is the beginning of any manipulation task, so an autonomous robot should be able to grasp objects it sees for the first time, and it must hold them appropriately in order to perform the task successfully. This paper considers the problem of grasping unknown objects in the same manner as humans. Based on the idea that the human brain represents objects as volumetric primitives in order to recognize them, the presented algorithm predicts a grasp as a function of the assembly of the object's parts. Starting from a complete 3D model of the object, a segmentation step decomposes it into single parts, and each part is fitted with a simple geometric model. A learning step is finally needed to find the object component that humans would choose to grasp.
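The step of fitting a simple geometric model to each segmented part can be illustrated with an algebraic least-squares sphere fit. The choice of primitive and the NumPy implementation below are illustrative assumptions, not the paper's fitting procedure:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit. Each point p on a sphere with
    center c and radius r satisfies |p|^2 = 2 c.p + (r^2 - |c|^2), which is
    linear in c and in the scalar d = r^2 - |c|^2."""
    A = np.hstack([2 * points, np.ones((len(points), 1))])
    b = (points**2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# Synthetic part: points sampled on a sphere of center (1, 2, 3), radius 2
theta = np.linspace(0.1, np.pi - 0.1, 10)
phi = np.linspace(0, 2 * np.pi, 20, endpoint=False)
T, P = np.meshgrid(theta, phi)
pts = np.stack([
    1 + 2 * np.sin(T) * np.cos(P),
    2 + 2 * np.sin(T) * np.sin(P),
    3 + 2 * np.cos(T),
], axis=-1).reshape(-1, 3)
center, radius = fit_sphere(pts)
```

In a full pipeline each segmented part would be fitted against several candidate primitives (sphere, cylinder, box, ...) and the best-fitting one retained.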