A novel method of tactile communication for human-robot and robot-robot collaborative teams is developed for the purpose of adaptive grasp control of dexterous robotic hands. Neural networks are applied to the problem of classifying, in real time, the direction in which objects slide against different tactile fingertip sensors. This ability to classify the direction in which an object slides within a dexterous robotic hand was used for adaptive grasp synergy control to afford context-dependent robotic reflexes in response to the direction of grasped-object slip. Case studies with robot-robot and human-robot collaborative teams successfully demonstrated the feasibility of the approach: when object slip in the direction of gravity (toward the ground) was detected, the dexterous hand increased its grasp force to prevent dropping the object. When a human or robot applied an upward force to cause the grasped object to slip upward, the dexterous hand was programmed to release the object into the hand of the other team member. This method of adaptive grasp control using slip-direction detection can improve the efficiency of human-robot and robot-robot teams.

I. INTRODUCTION

Shortly after the advent of the artificial robotic hand, preventing grasped objects from being inadvertently dropped became a priority [1]. This is a particularly important problem in the area of prosthetic hands, as limb-absent people do not have a direct sense of the grasp force applied
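The slip-direction reflex described above pairs a classifier with a simple action map. The abstract does not specify the network or features used, so the following is only a minimal sketch under assumed conditions: synthetic shear-drift "windows" stand in for real fingertip signals, and a small scikit-learn network maps downward slip to a grasp-tightening reflex and upward slip to a release reflex.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical synthetic data: each sample is a short window of fingertip
# shear-signal features; class 0 = downward slip (gravity), class 1 = upward slip.
def make_windows(n, direction):
    base = 1.0 if direction else -1.0  # sign of the mean shear drift
    return base * np.linspace(0.2, 1.0, 8) + 0.1 * rng.standard_normal((n, 8))

X = np.vstack([make_windows(200, 0), make_windows(200, 1)])
y = np.array([0] * 200 + [1] * 200)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X, y)

# Context-dependent reflex map: downward slip -> tighten, upward slip -> release.
actions = {0: "tighten_grasp", 1: "release_object"}
window = make_windows(1, 0)  # a new downward-slip window
print(actions[int(clf.predict(window)[0])])
```

In a real system the classifier would run on a sliding window of streaming sensor data, and the action map would drive grasp synergy setpoints rather than print a label.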
Loss of tactile sensations is a major roadblock preventing upper limb-absent people from multitasking or using the full dexterity of their prosthetic hands. With current myoelectric prosthetic hands, limb-absent people can only control one grasp function at a time even though modern artificial hands are mechanically capable of individual control of all five digits. In this paper, we investigated whether people could precisely control the grip forces applied to two different objects grasped simultaneously with a dexterous artificial hand. Toward that end, we developed a novel multichannel wearable soft robotic armband to convey artificial sensations of touch to the robotic hand users. Multiple channels of haptic feedback enabled subjects to successfully grasp and transport two objects simultaneously with the dexterous artificial hand without breaking or dropping them, even when their vision of both objects was obstructed. Simultaneous transport of the objects provided a significant time savings to perform the deliveries in comparison to a one-at-a-time approach. This paper demonstrated that subjects were able to integrate multiple channels of haptic feedback into their motor control strategies to perform a complex simultaneous object grasp control task with an artificial limb, which could serve as a paradigm shift in the way prosthetic hands are operated.
The haptic sense relies upon a plurality of receptors and pathways to produce a complex perceptual experience of contact, pressure, taps, vibrations, and flutters. This complexity has yet to be reproduced in the haptic feedback interfaces used by people controlling a dexterous robotic hand, whether for limb absence or teleoperation. The goal of the present bimodal haptic armband is to convey both low-frequency pressure changes and high-frequency vibrations from a dexterous robotic hand to a human's upper arm, so as to guide their control of the artificial limb. To that end, we design and manufacture four novel soft robotic armbands combining inflatable air chambers and vibrotactile stimulators, and we develop control systems for both pathways. We conduct a series of benchtop tests to determine the pneumatic and vibrotactile performance and to select from competing designs and materials. We test two of the resulting bimodal haptic armbands on human subjects and confirm their ability to use both aspects of this haptic information. Arguing that dexterous artificial hands are presently not used to their fullest capability because of the dearth of haptic information available to users, this work aims to achieve a more realistic tactile experience for a fluent, more natural usage of robotic artificial hands.
Multifunctional flexible tactile sensors could be useful to improve the control of prosthetic hands. To that end, highly stretchable liquid metal tactile sensors (LMSs) were designed, manufactured via photolithography, and incorporated into the fingertips of a prosthetic hand. Three novel contributions were made with the LMSs. First, individual fingertips were used to distinguish between different speeds of sliding contact with different surfaces. Second, differences in surface textures were reliably detected during sliding contact. Third, the capacity for hierarchical tactile sensor integration was demonstrated by using four LMS signals simultaneously to distinguish between ten complex multi-textured surfaces. Four different machine learning algorithms were compared for their classification capabilities: K-nearest neighbor (KNN), support vector machine (SVM), random forest (RF), and neural network (NN). Time-frequency features of the LMS signals were extracted to train and test the machine learning algorithms. The NN generally performed best at the speed and texture detection with a single finger and achieved 99.2 ± 0.8% accuracy in distinguishing between ten different multi-textured surfaces using four LMSs from four fingers simultaneously. The capability for hierarchical multi-finger tactile sensation integration could be useful to provide a higher level of intelligence for artificial hands.
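The four-algorithm comparison described above can be sketched with scikit-learn. The actual time-frequency features and data are not given in the abstract, so the feature matrix below is a hypothetical stand-in: three well-separated synthetic texture classes in place of real LMS spectro-temporal features.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Hypothetical stand-in for time-frequency features extracted from fingertip
# LMS channels during sliding contact over three different textures.
n_per_class, n_features = 60, 12
X = np.vstack([
    c + 0.3 * rng.standard_normal((n_per_class, n_features))
    for c in range(3)
])
y = np.repeat(np.arange(3), n_per_class)

# The four classifier families compared in the study, with default-style settings.
models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "NN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()  # 5-fold cross-validation
    print(f"{name}: {acc:.2f}")
```

With real sensor data, the feature extraction step (e.g., short-time spectral features) and hyperparameter tuning would dominate the differences between these four families.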
Cervical disc implants are conventional surgical treatments for patients with degenerative disc disease, such as cervical myelopathy and radiculopathy. However, the surgeon still must determine the candidacy of cervical disc implants mainly from the findings of diagnostic imaging studies, which can sometimes lead to complications and implant failure. To help address these problems, a new approach was developed to enable surgeons to preview the post-operative effects of an artificial disc implant in a patient-specific fashion prior to surgery. To that end, a robotic replica of a person’s spine was 3D printed, modified to include an artificial disc implant, and outfitted with a soft magnetic sensor array. The aims of this study are threefold: first, to evaluate the potential of a soft magnetic sensor array to detect the location and amplitude of applied loads; second, to use the soft magnetic sensor array in a 3D printed human spine replica to distinguish between five different robotically actuated postures; and third, to compare the efficacy of four different machine learning algorithms to classify the loads, amplitudes, and postures obtained from the first and second aims. Benchtop experiments showed that the soft magnetic sensor array was capable of precisely detecting the location and amplitude of forces, which were successfully classified by four different machine learning algorithms that were compared for their capabilities: Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Random Forest (RF), and Artificial Neural Network (ANN). In particular, the RF and ANN algorithms were able to classify locations of loads applied 3.25 mm apart with 98.39% ± 1.50% and 98.05% ± 1.56% accuracies, respectively. Furthermore, the ANN had an accuracy of 94.46% ± 2.84% in classifying the location at which a 10 g load was applied. The artificial disc-implanted spine replica was subjected to flexion and extension by a robotic arm.
Five different postures of the spine were successfully classified with 100% ± 0.0% accuracy with the ANN using the soft magnetic sensor array. All results indicated that the magnetic sensor array has promising potential to generate data prior to invasive surgeries that could be utilized to preoperatively assess the suitability of a particular intervention for specific patients and to potentially assist the postoperative care of people with cervical disc implants.
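Accuracies reported as mean ± standard deviation, as in the posture-classification results above, are typically obtained from repeated cross-validation. The study's actual features and protocol are not given, so this is only a sketch under assumed conditions: synthetic readings stand in for the magnetic sensor array, and repeated stratified 5-fold cross-validation yields the mean ± std of an ANN's accuracy over five posture classes.

```python
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

# Hypothetical stand-in for soft magnetic sensor array readings recorded
# under five robotically actuated spine postures.
n_per_class, n_features = 40, 9
X = np.vstack([
    c + 0.25 * rng.standard_normal((n_per_class, n_features))
    for c in range(5)
])
y = np.repeat(np.arange(5), n_per_class)

# Repeated stratified 5-fold CV: 5 folds x 3 repeats = 15 accuracy estimates.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
scores = cross_val_score(ann, X, y, cv=cv)
print(f"accuracy: {100 * scores.mean():.2f}% ± {100 * scores.std():.2f}%")
```

Stratification keeps the class balance identical in every fold, which matters when, as here, each posture contributes equally many samples.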