The human hand is a complex, highly articulated system that has long inspired the design of humanoid robotic and prosthetic hands. Understanding its functionality is crucial for designing and efficiently controlling such anthropomorphic hands and for transferring human versatility and dexterity to them. Although research in this area has made significant advances, synthesizing grasp configurations from observed human grasping data remains an unsolved and challenging task. In this work we derive a novel, constrained autoencoder model that encodes human grasping data in a compact representation: the grasp type is captured in a three-dimensional latent space, while the object size enters as an explicit parameter constraint, allowing the direct synthesis of object-specific grasps. We train the model on 2250 grasps performed by 15 subjects on 35 diverse objects from the KIT and YCB object sets. Our evaluation shows that the synthesized grasp configurations are human-like and succeed with high probability under pose uncertainty.
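The described architecture (a grasp-type latent code plus an explicit object-size conditioning input to the decoder) can be sketched as below. This is a minimal, untrained NumPy illustration, not the authors' implementation: the 19-DoF hand dimension, the hidden width, and the random weights are all assumptions; only the 3-D latent space and the scalar object-size constraint come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

N_JOINTS = 19   # hand joint angles (assumed dimensionality)
LATENT = 3      # grasp-type latent dimension (from the abstract)
HIDDEN = 32     # hidden layer width (assumed)

def relu(x):
    return np.maximum(x, 0.0)

class ConstrainedAE:
    """Sketch of an autoencoder whose decoder is conditioned on object size."""

    def __init__(self):
        # Randomly initialised weights; a real model would be trained
        # on the recorded human grasps.
        self.We1 = rng.normal(0, 0.1, (N_JOINTS, HIDDEN))
        self.We2 = rng.normal(0, 0.1, (HIDDEN, LATENT))
        # The decoder input concatenates the latent code with the
        # object size, making size an explicit parameter constraint.
        self.Wd1 = rng.normal(0, 0.1, (LATENT + 1, HIDDEN))
        self.Wd2 = rng.normal(0, 0.1, (HIDDEN, N_JOINTS))

    def encode(self, q):
        # Map a batch of hand configurations to 3-D grasp-type codes.
        return relu(q @ self.We1) @ self.We2

    def decode(self, z, obj_size):
        # Concatenate the latent code with the object size, then
        # reconstruct a hand configuration for that object.
        zc = np.concatenate([z, np.atleast_2d(obj_size).T], axis=1)
        return relu(zc @ self.Wd1) @ self.Wd2

model = ConstrainedAE()
q = rng.normal(size=(4, N_JOINTS))                  # batch of 4 grasp configs
z = model.encode(q)                                  # shape (4, 3)
q_hat = model.decode(z, np.array([0.05, 0.08, 0.10, 0.12]))  # sizes in metres
print(z.shape, q_hat.shape)                          # (4, 3) (4, 19)
```

At synthesis time, one would sample or interpolate a latent grasp-type code and pass the target object's size to the decoder to obtain an object-specific hand configuration.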