Humans intuitively exploit the shape of an object and environmental constraints to achieve stable grasps and perform dexterous manipulation, and a vast range of kinematic strategies can be observed in doing so. In this work, we formulate the hypothesis that this ability can be described in terms of a synergistic behavior in the generation of hand postures, i.e., using a reduced set of commonly used kinematic patterns. This is in analogy with previous studies showing such behavior in different tasks, such as grasping. We investigated this hypothesis in experiments performed by six subjects, who were asked to grasp objects from a flat surface. We quantitatively characterized hand posture behavior from a kinematic perspective, i.e., the hand joint angles, both in pre-shaping and during interaction with the environment. To determine the role of tactile feedback, we repeated the same experiments with subjects wearing a rigid shell on the fingertips to reduce cutaneous afferent inputs. Results show the persistence of at least two postural synergies in all the considered experimental conditions and phases. Tactile impairment does not significantly alter the first two synergies, and contact with the environment changes only the higher-order principal components. Moreover, the first synergy found in our analysis matches well the first grasping synergy quantified in previous work. The present study is motivated by the interest in learning from the human example, extracting lessons that can be applied to robot design and control. We therefore conclude with a discussion of the implications of our findings for robotics.
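To make the synergy-extraction idea above concrete, the following is a minimal sketch of computing postural synergies from recorded joint angles via principal component analysis. The array shape, function name, and the two-synergy truncation are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch: extracting postural synergies from hand joint angles via PCA.
# Assumes `postures` is an (n_samples x n_joints) array of recorded joint
# angles; dimensions are illustrative, not the study's actual protocol.
import numpy as np

def extract_synergies(postures: np.ndarray, n_synergies: int = 2):
    """Return the first principal components (synergies), the fraction of
    posture variance each explains, and the mean posture."""
    mean = postures.mean(axis=0)
    centered = postures - mean
    # SVD of the centered data: rows of vt are the principal axes.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variance = s**2 / np.sum(s**2)
    return vt[:n_synergies], variance[:n_synergies], mean

# A grasp posture can then be approximated from a few synergy weights:
# posture ≈ mean + w1 * synergy1 + w2 * synergy2
```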
In recent years, wearability has become a fundamental requirement for effective and lightweight design of human-robot interfaces. Among the different application fields, robotic tele-operation represents an ideal scenario that can largely benefit from the wearable paradigm, both to reduce constraints on the workspace of the human operator (acting as the master) and to enable intuitive, simplified information exchange within the tele-operation system. This simplification is particularly important when interacting with synergy-inspired robotic devices, i.e., devices endowed with a reduced number of control inputs and sensors, with the goal of keeping control and communication between humans and robots simple. In this work, we present an integrated approach for augmented tele-operation in which wearable hand/arm pose under-sensing and haptic feedback devices are combined with teleimpedance techniques for simplified yet effective real-time position and stiffness control of a synergy-inspired robotic manipulator. The slave robot consists of a KUKA lightweight robotic arm equipped with the Pisa/IIT SoftHand, both controlled in impedance to perform a drilling task, an illustrative example of a dynamic task with environmental constraints. Experimental results on ten healthy subjects suggest that the proposed integrated interface enables the master to appropriately regulate the stiffness and pose of the robotic hand-arm system through the perception of interaction forces and vision, contributing to successful and intuitive execution of the remote task. The achieved performance is compared with that of reduced versions of the integrated system, in which either teleimpedance control or wearable feedback is excluded.
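As a rough illustration of the teleimpedance idea, the sketch below maps an operator-derived stiffness index to the stiffness of a Cartesian impedance law. The gain range, damping rule, and variable names are assumptions for illustration, not the controller used in the paper.

```python
# Minimal sketch of a teleimpedance-style command. Assumes the master side
# supplies a desired end-effector position `x_des` (from wearable pose
# sensing) and a scalar stiffness index `alpha` in [0, 1] (e.g., derived
# from muscle cocontraction). All gains are illustrative assumptions.
import numpy as np

K_MIN, K_MAX = 200.0, 2000.0  # N/m, assumed Cartesian stiffness range

def impedance_force(x, x_dot, x_des, alpha, mass=5.0):
    """Desired Cartesian force: F = K(alpha) * (x_des - x) - D * x_dot."""
    k = K_MIN + alpha * (K_MAX - K_MIN)   # operator-modulated stiffness
    d = 1.4 * np.sqrt(k * mass)           # stiffness-consistent damping
    return k * (x_des - x) - d * x_dot
```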
Despite the importance of softness, there is no evidence of wearable haptic systems able to deliver controllable softness cues. Here, we present the Wearable Fabric Yielding Display (W-FYD), a fabric-based display for multi-cue delivery that can be worn on a user's finger and enables, for the first time, both active and passive softness exploration. It can also induce a sliding effect under the finger pad. A given stiffness profile is obtained by modulating the stretching state of the fabric through two motors, and a lifting mechanism places the fabric in contact with the user's finger pad to enable passive softness rendering. In this paper, we describe the architecture of W-FYD and a thorough characterization of its stiffness workspace, frequency response, and softness rendering capabilities. We also computed the device's just noticeable difference (JND) in both active and passive exploratory conditions, for linear and non-linear stiffness rendering as well as for sliding direction perception, and considered the effect of device weight. Furthermore, we report participants' performance and subjective evaluations in sliding-direction detection and softness discrimination tasks. Finally, applications of W-FYD in tactile augmented reality for open palpation are discussed, opening interesting perspectives in many fields of human-machine interaction.
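As an illustration of how a target stiffness might be commanded on a fabric display of this kind, the sketch below inverts a monotonic stretch-to-stiffness calibration curve by interpolation. The calibration values are made-up placeholders, not W-FYD data.

```python
# Minimal sketch: render a target stiffness on a fabric display by inverting
# a measured calibration curve (motor stretch angle -> fabric stiffness).
# The calibration points below are placeholders, not W-FYD measurements.
import numpy as np

cal_angle = np.array([0.0, 10.0, 20.0, 30.0, 40.0])      # motor angle, deg
cal_stiffness = np.array([0.2, 0.5, 1.1, 2.0, 3.5])      # N/mm, monotonic

def angle_for_stiffness(k_target: float) -> float:
    """Interpolate the motor angle that realizes the requested stiffness."""
    k = np.clip(k_target, cal_stiffness[0], cal_stiffness[-1])
    return float(np.interp(k, cal_stiffness, cal_angle))
```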
Myoelectric artificial limbs can significantly advance the state of the art in prosthetics, since they can be used to control mechatronic devices through muscular activity in a way that mimics how subjects activated their muscles before limb loss. However, surveys indicate that dissatisfaction with the functionality of terminal devices underlies the widespread abandonment of prostheses. We believe that one key factor in improving the acceptability of prosthetic devices is attaining human likeness of prosthesis movements, a goal also pursued by research on social and human-robot interaction. Therefore, to reduce early abandonment of terminal devices, we propose that controllers should be designed to ensure effective task accomplishment in a natural fashion. In this work, we analyzed and compared the performance of three types of myoelectric controller algorithms based on surface electromyography for controlling an underactuated, multi-degree-of-freedom prosthetic hand, the SoftHand Pro. The goal of the present study was to identify the myoelectric algorithm that best mimics native hand movements. As a preliminary step, we quantified the repeatability of the SoftHand Pro finger movements and identified the electromyographic recording sites with the highest signal-to-noise ratio in able-bodied individuals from two pairs of muscles: flexor digitorum superficialis/extensor digitorum communis and flexor carpi radialis/extensor carpi ulnaris. Able-bodied volunteers were then asked to execute reach-to-grasp movements while electromyography signals were recorded from flexor digitorum superficialis/extensor digitorum communis, the muscle pair identified as combining a high signal-to-noise ratio with intuitive control. Subsequently, we tested three myoelectric controllers that mapped electromyography signals to the position of the SoftHand Pro. We found that a differential electromyography-to-position mapping ensured the highest coherence with native hand movements. Our results represent a first step toward more effective and intuitive control of myoelectric hand prostheses.
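To illustrate the distinction between mappings, the sketch below contrasts a proportional controller with a differential (velocity-style) controller of the kind the study favors. Signal names, gains, and rate limits are illustrative assumptions; inputs are assumed to be rectified, low-pass-filtered EMG envelopes normalized to [0, 1].

```python
# Minimal sketch contrasting two EMG-to-position mappings. `emg_flex` and
# `emg_ext` are assumed to be rectified, low-pass-filtered EMG envelopes
# in [0, 1]; gain and time step are illustrative assumptions.
def proportional_position(emg_flex: float) -> float:
    """Hand closure commanded directly by flexor activation."""
    return min(max(emg_flex, 0.0), 1.0)

def differential_position(pos: float, emg_flex: float, emg_ext: float,
                          gain: float = 0.8, dt: float = 0.01) -> float:
    """Hand closure velocity driven by the flexor-extensor difference,
    integrated over time; balanced cocontraction holds the posture."""
    pos += gain * (emg_flex - emg_ext) * dt
    return min(max(pos, 0.0), 1.0)
```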
Palpation is an essential step in several open surgical procedures for locating arteries through arterial pulse detection. In this context, surgical simulation would ideally provide realistic haptic sensations to the operator. This paper presents a proof-of-concept implementation of tactile augmented reality for open-surgery training. The system integrates a wearable tactile device into an augmented physical simulator that tracks artery reproductions and the user's finger in real time and provides pulse feedback during palpation. Preliminary qualitative tests showed a general consensus among surgeons regarding the realism of the arterial pulse feedback and the usefulness of tactile augmented reality in open-surgery simulators.
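As a sketch of how pulse feedback could be gated by tracking, the example below renders a periodic pulse whose amplitude grows as the tracked fingertip approaches an artery reproduction. The thresholds, pulse rate, and half-rectified waveform are assumptions, not the simulator's actual rendering.

```python
# Minimal sketch: gate pulse feedback by tracked finger-artery distance.
# Reach distance, pulse rate, and waveform shape are assumed values.
import math

def pulse_amplitude(t: float, dist_mm: float,
                    bpm: float = 70.0, reach_mm: float = 10.0) -> float:
    """Return pulse display amplitude in [0, 1] at time t (seconds)."""
    if dist_mm > reach_mm:
        return 0.0                                 # finger too far: no pulse
    proximity = 1.0 - dist_mm / reach_mm           # stronger when closer
    phase = math.sin(2.0 * math.pi * (bpm / 60.0) * t)
    return proximity * max(0.0, phase)             # half-rectified pulse wave
```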
Haptic devices have high potential for delivering tailored training to novices. These devices can simulate forces associated with real-world tasks or provide guidance forces that convey task-completion and learning strategies. It has been shown, however, that providing both task forces and guidance forces simultaneously through the same haptic interface can lead to novices depending on the guidance, being unable to demonstrate skill transfer, or learning the wrong task altogether. This work presents a novel solution whereby task forces are relayed via a kinesthetic haptic interface, while guidance forces are spatially separated and delivered through a cutaneous skin-stretch modality. We explore different methods of delivering cutaneous guidance to subjects in a dynamic trajectory-following task. We then compare cutaneous guidance with traditional kinesthetic guidance for spatially separated assistance, and further investigate the effect of placing the cutaneous guidance ipsilateral versus contralateral to the task-force device. The efficacy of each guidance condition is compared by examining subject error and movement smoothness. Results show that cutaneous guidance can be as effective as kinesthetic guidance, making it a practical and cost-effective alternative for spatially separated assistance.
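As an illustration of error-based cutaneous guidance, the sketch below maps the signed lateral deviation from a reference trajectory to a saturated skin-stretch command whose sign indicates the corrective direction. The gain and saturation limit are illustrative assumptions, not the study's parameters.

```python
# Minimal sketch: map trajectory-following error to a skin-stretch command.
# Gain and saturation are assumed values for illustration.
def guidance_command(lateral_error_mm: float,
                     gain: float = 0.5, max_stretch_mm: float = 2.0) -> float:
    """Return tactor displacement; the sign encodes the direction the
    user should move to reduce the tracking error."""
    cmd = gain * lateral_error_mm
    return max(-max_stretch_mm, min(max_stretch_mm, cmd))
```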
It is known that high-frequency tactile information conveys useful cues for discriminating contact properties important for manipulation, such as first contact and roughness. Despite this, no practical system implementing a modality-matching paradigm has so far been developed to convey this information to users of upper-limb prostheses. The main obstacle to such an implementation is the presence of unwanted vibrations generated by the artificial limb's mechanics, which are unrelated to any haptic exploration task. In this work, we describe the design of a digital system that records accelerations from the fingers of an artificial hand and reproduces them on the user's skin through voice-coil actuators. Particular attention has been devoted to the design of the filter needed to cancel the vibrations measured by the sensors that do not convey information about meaningful contact events. The performance of the newly designed filter is compared with the state of the art. Exploratory experiments with prosthesis users identified applications where this kind of feedback could enhance sensory-motor performance. Results show that the proposed system improves the perception of object-salient features such as first-contact events, roughness, and shape.
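As a simplified stand-in for the filtering stage described above, the sketch below band-passes raw fingertip accelerations before voice-coil playback, keeping the band most relevant to contact and texture cues. The cutoffs and filter order are assumptions; the paper's actual filter is designed specifically around the prosthesis' own vibration signature.

```python
# Minimal sketch: condition fingertip accelerations for vibrotactile playback
# with a Butterworth band-pass that removes low-frequency motion and
# drive-train components. Cutoffs and order are assumed values.
from scipy.signal import butter, sosfilt

def make_contact_filter(fs: float, lo: float = 50.0, hi: float = 400.0):
    """Return a band-pass filter as second-order sections."""
    return butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")

def filter_acceleration(accel, sos):
    """Filter a raw fingertip acceleration stream before playback."""
    return sosfilt(sos, accel)
```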