This paper summarizes recent activities carried out for the development of an innovative anthropomorphic robotic hand, the DEXMART Hand. The main goal of this research is to address the problems that affect current robotic hands by introducing design solutions aimed at simplification and cost reduction while enhancing robustness and performance where possible. While certain aspects of the DEXMART Hand development have been presented in previous papers, this paper is the first to give a comprehensive description of the final hand version and of its use to replicate human-like grasping. Particular emphasis is placed on the kinematics of the fingers and the thumb, the wrist architecture, the dimensioning of the actuation system, and the final implementation of the position, force, and tactile sensors. The paper also describes how these solutions have been integrated into the mechanical structure of the hand to enable precise force and displacement control of the whole system. Another important issue is the lack of suitable control tools, which severely limits the development of robotic hand applications. To address this issue, a new method for observing human hand behavior during interaction with common everyday objects by means of a 3D computer vision system is presented, together with a strategy for mapping human hand postures to the robotic hand. A simple control strategy based on postural synergies is used to reduce the complexity of the grasp planning problem. As a preliminary evaluation of the DEXMART Hand's capabilities, this approach is adopted to simplify and speed up the transfer of human actions to the robotic hand, showing its effectiveness in reproducing human-like grasping.
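The abstract does not detail how the postural synergies are obtained; a common choice in the grasping literature is principal component analysis (PCA) over recorded human hand postures. The Python sketch below illustrates that generic approach, not the DEXMART-specific implementation; the joint count, sample data, and function names are all hypothetical.

```python
import numpy as np

def extract_synergies(postures, n_synergies=2):
    """PCA over recorded hand postures.

    postures: (n_samples, n_joints) array of joint angles.
    Returns the mean posture and the first n_synergies principal
    directions, which form the postural-synergy basis.
    """
    mean = postures.mean(axis=0)
    # Rows of vt are the principal directions of the centered data.
    _, _, vt = np.linalg.svd(postures - mean, full_matrices=False)
    return mean, vt[:n_synergies]

def synergy_to_joints(sigma, mean, synergies):
    """Map a low-dimensional synergy activation back to joint angles."""
    return mean + sigma @ synergies

# Hypothetical data: 200 recorded postures of a 20-DoF hand.
rng = np.random.default_rng(0)
recorded = rng.normal(size=(200, 20))
mean, synergies = extract_synergies(recorded, n_synergies=2)

# A grasp is then commanded with just two synergy coefficients.
joint_targets = synergy_to_joints(np.array([0.8, -0.3]), mean, synergies)
```

Commanding the hand in a two- or three-dimensional synergy space rather than in the full joint space is what reduces the complexity of grasp planning.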
In this work, we propose a framework for cross-modal visuo-tactile object recognition. By cross-modal visuo-tactile object recognition, we mean that the recognition algorithm is trained only on visual data yet is able to recognize objects using only tactile perception. The proposed framework consists of three main elements. The first is a unified representation of visual and tactile data that is suitable for cross-modal perception. The second is a set of features that encode the chosen representation for classification. The third is a supervised learning algorithm that takes advantage of the chosen descriptor. To evaluate our approach, we performed experiments with 15 objects common in domestic and industrial environments. Moreover, we compared the performance of the proposed framework with that of 10 human subjects in a simple cross-modal recognition task.
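The abstract does not specify the shared representation, the descriptor, or the classifier. The sketch below illustrates the general cross-modal pattern under assumed components: point clouds as the unified representation, a pairwise-distance histogram as the descriptor, and an SVM as the learner. The data, object counts, and parameter values are synthetic stand-ins.

```python
import numpy as np
from sklearn.svm import SVC

def shape_descriptor(points, bins=32, max_dist=0.3):
    """Histogram of pairwise point distances.

    Works on a dense visual point cloud or a sparse set of tactile
    contact points alike, so both modalities share one representation.
    (This particular descriptor is an assumption, not the paper's.)
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices_from(d, k=1)]          # unique pairs only
    hist, _ = np.histogram(d, bins=bins, range=(0.0, max_dist))
    return hist / max(hist.sum(), 1)             # normalize for scale

# Synthetic stand-ins: dense visual clouds for training,
# sparse tactile contact sets for testing.
rng = np.random.default_rng(1)
visual = [rng.normal(scale=0.05, size=(500, 3)) for _ in range(30)]
visual_labels = rng.integers(0, 3, size=30)
tactile = [rng.normal(scale=0.05, size=(40, 3)) for _ in range(10)]
tactile_labels = rng.integers(0, 3, size=10)

# Train on vision only, then recognize from touch alone.
clf = SVC(kernel="rbf")
clf.fit([shape_descriptor(c) for c in visual], visual_labels)
accuracy = clf.score([shape_descriptor(c) for c in tactile], tactile_labels)
```

The key constraint the sketch preserves is that the classifier never sees tactile data during training; recognition from touch succeeds only to the extent that the shared descriptor is modality-invariant.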