Novel high-resolution pressure-sensor arrays allow pressure readings to be treated as standard images, so that computer vision methods such as Convolutional Neural Networks (CNNs) can be used to identify contacted objects. In this paper, a high-resolution tactile sensor has been attached to a robotic end-effector to identify contacted objects. Two CNN-based approaches have been employed to classify the pressure images: a transfer learning approach using a CNN pre-trained on an RGB-image dataset, and a custom-made CNN (TactNet) trained from scratch on tactile information. The transfer learning approach can be carried out either by retraining the classification layers of the network or by replacing these layers with an SVM. Overall, 11 configurations based on these methods have been tested: 8 transfer-learning-based and 3 TactNet-based. Moreover, a study of the performance of the methods and a comparative discussion with the current state of the art in tactile object recognition are presented.
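To illustrate the two transfer-learning variants, the minimal sketch below freezes a CNN pre-trained on RGB images and either retrains its classification layers or feeds the penultimate-layer features to an SVM. It assumes PyTorch/torchvision and scikit-learn; the VGG-16 backbone and NUM_CLASSES value are illustrative stand-ins, not the paper's exact configuration.

# Hypothetical sketch: transfer learning on tactile (pressure) images.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

NUM_CLASSES = 22  # illustrative; set to the number of object classes

# Variant 1: retrain only the classification layers of a pre-trained CNN.
cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in cnn.features.parameters():
    p.requires_grad = False          # freeze the convolutional feature extractor
cnn.classifier[6] = nn.Linear(4096, NUM_CLASSES)  # new trainable output layer

# Variant 2: use the frozen CNN as a feature extractor and classify with an SVM.
def extract_features(images: torch.Tensor) -> torch.Tensor:
    """Return the 4096-d activations of the penultimate fully connected layer."""
    with torch.no_grad():
        x = cnn.features(images)
        x = cnn.avgpool(x).flatten(1)
        x = cnn.classifier[:-1](x)   # stop before the output layer
    return x

svm = SVC(kernel="linear")
# svm.fit(extract_features(train_images).numpy(), train_labels)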
The use of tactile perception can help first-response robotic teams in disaster scenarios, where visibility is often reduced by dust, mud, or smoke, by allowing them to distinguish human limbs from other objects with similar shapes. Here, the integration of the tactile sensor in adaptive grippers is evaluated by measuring the performance of an object recognition task based on deep convolutional neural networks (DCNNs) with a flexible sensor mounted on adaptive grippers. A total of 15 classes with 50 tactile images each were trained, including human body parts and common environment objects, on semi-rigid and flexible adaptive grippers based on the fin-ray effect. The classifier was compared against the rigid configuration and a support vector machine (SVM) classifier. Finally, a two-level output network has been proposed to provide both object-type recognition and human/non-human classification. Sensors in adaptive grippers register a higher number of non-null tactels (up to 37% more) with a lower mean pressure value (up to 72% less) than a rigid sensor, giving the softer grip needed in physical human–robot interaction (pHRI). A semi-rigid implementation with a 95.13% object recognition rate was chosen, even though human/non-human classification gave better results (98.78%) with a rigid sensor.
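A two-level output network can be realised as a shared feature extractor with two classification heads, one for the 15 object classes and one for the binary human/non-human decision. The sketch below is a minimal PyTorch illustration under that assumption; the layer sizes and input resolution are invented for the example, not taken from the paper.

# Hypothetical sketch of a two-level output network for tactile images.
import torch
import torch.nn as nn

class TwoLevelNet(nn.Module):
    def __init__(self, num_objects: int = 15):
        super().__init__()
        self.backbone = nn.Sequential(      # shared feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.object_head = nn.Linear(32 * 16, num_objects)  # object-type output
        self.human_head = nn.Linear(32 * 16, 2)             # human / non-human

    def forward(self, x):
        feats = self.backbone(x)
        return self.object_head(feats), self.human_head(feats)

net = TwoLevelNet()
logits_obj, logits_hum = net(torch.randn(1, 1, 28, 28))  # one tactile image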
In this paper, a novel method of active tactile perception based on 3D neural networks and a high-resolution tactile sensor installed on a robot gripper is presented. A haptic exploratory procedure based on robotic palpation is performed to obtain pressure images at different grasping forces, providing information not only about the external shape of the object but also about its internal features. The gripper consists of two underactuated fingers with a tactile sensor array on the thumb. A new representation of tactile information as 3D tactile tensors is described: during a squeeze-and-release process, the pressure images read from the tactile sensor are concatenated into a tensor that captures how the pressure matrices vary with the grasping force. These tensors are used to feed a 3D Convolutional Neural Network (3D CNN) called 3D TactNet, which is able to classify the grasped object through active interaction. Results show that the 3D CNN performs better, providing higher recognition rates with less training data.
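The sketch below shows the idea of feeding such a tactile tensor to a 3D CNN: pressure frames recorded at increasing grasping forces are stacked along a depth axis and convolved with 3D kernels. It is a minimal illustration assuming PyTorch; the layer sizes, sensor resolution, and frame count are assumptions, not the published 3D TactNet architecture.

# Hypothetical sketch of a 3D CNN over a stacked tactile tensor.
import torch
import torch.nn as nn

class TactNet3D(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(2), nn.Flatten(),
        )
        self.classifier = nn.Linear(16 * 8, num_classes)

    def forward(self, x):          # x: (batch, 1, frames, rows, cols)
        return self.classifier(self.features(x))

# e.g. 10 pressure frames of a 28x50 sensor, stacked along the depth axis
tensor = torch.stack([torch.randn(28, 50) for _ in range(10)])  # (10, 28, 50)
logits = TactNet3D(num_classes=24)(tensor[None, None])  # add batch + channel dims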
Recent advances in the field of intelligent robotic manipulation pursue providing robotic hands with touch sensitivity. Haptic perception encompasses the sensing modalities encountered in the sense of touch (e.g., tactile and kinesthetic sensations). This letter focuses on multimodal object recognition and proposes analytical and data-driven methodologies to fuse tactile- and kinesthetic-based classification results. The procedure is as follows: a three-finger actuated gripper with an integrated high-resolution tactile sensor performs squeeze-and-release Exploratory Procedures (EPs). The tactile images and the kinesthetic information acquired from angular sensors on the finger joints constitute the time-series datasets of interest. Each temporal dataset is fed to a Long Short-Term Memory (LSTM) neural network, which is trained to classify in-hand objects. The LSTMs provide an estimate of the posterior probability of each object given the corresponding measurements, which, after fusion, allows the object to be estimated through Bayesian and neural inference approaches. A 36-class experiment is carried out to evaluate and compare the performance of the fused, tactile, and kinesthetic perception systems. The results show that the Bayesian-based classifier improves object recognition capabilities and outperforms the neural-based approach.
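One common Bayesian rule for fusing the two per-modality posteriors multiplies them and divides by the class prior, assuming the tactile and kinesthetic measurements are conditionally independent given the object. The sketch below illustrates that rule; it is an assumption-labelled example, and the paper's exact fusion scheme may differ.

# Hypothetical sketch: Bayesian fusion of LSTM posteriors over object classes.
import numpy as np

def bayesian_fusion(p_tactile, p_kinesthetic, prior=None):
    """Fuse two posterior vectors defined over the same set of object classes."""
    p_tactile = np.asarray(p_tactile)
    p_kinesthetic = np.asarray(p_kinesthetic)
    if prior is None:                      # assume a uniform prior over classes
        prior = np.full_like(p_tactile, 1.0 / len(p_tactile))
    fused = p_tactile * p_kinesthetic / prior   # conditional-independence rule
    return fused / fused.sum()                  # renormalise to a distribution

# e.g. posteriors produced by the two LSTMs for a 36-class problem
p_t = np.random.dirichlet(np.ones(36))
p_k = np.random.dirichlet(np.ones(36))
print(bayesian_fusion(p_t, p_k).argmax())       # fused class estimate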
The objective of this paper is to develop and evaluate a directional vibrotactile feedback interface as a guidance tool for postural adjustments during work. In contrast to existing active and wearable systems such as exoskeletons, we aim to create a lightweight and intuitive interface capable of guiding its wearers towards more ergonomic and healthy working conditions. To achieve this, a vibrotactile device called ErgoTac is employed to develop three different feedback modalities that provide directional guidance at the body segments towards a desired pose. In addition, an evaluation is made to find the most suitable, comfortable, and intuitive feedback modality for the user. These modalities are first compared experimentally on fifteen subjects wearing eight ErgoTac devices to achieve targeted arm and torso configurations. The most effective directional feedback modality is then evaluated on five subjects in a set of experiments in which an ergonomic optimisation module provides the optimised body posture while performing heavy lifting or forceful exertion tasks. The results yield strong evidence of the usefulness and intuitiveness of one of the developed modalities in providing guidance towards ergonomic working conditions by minimising the effect of an external load on the body joints. We believe that the integration of such low-cost devices in workplaces can help address the well-known and complex problem of work-related musculoskeletal disorders.
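One way such directional guidance can work is to vibrate the motor whose placement on the body segment best matches the required correction direction, with intensity growing with the postural error. The sketch below illustrates that idea only; the motor layout, pose representation, and interface are invented for the example and are not the actual ErgoTac API.

# Hypothetical sketch of a directional vibrotactile cue.
import numpy as np

MOTORS = {                      # assumed unit vectors of motor placements
    "front": np.array([1.0, 0.0]),
    "back":  np.array([-1.0, 0.0]),
    "left":  np.array([0.0, 1.0]),
    "right": np.array([0.0, -1.0]),
}

def directional_cue(current_pose, target_pose, deadband=0.05):
    """Return (motor, intensity) guiding the segment towards target_pose."""
    error = np.asarray(target_pose) - np.asarray(current_pose)
    if np.linalg.norm(error) < deadband:
        return None, 0.0                         # target reached, no vibration
    direction = error / np.linalg.norm(error)
    motor = max(MOTORS, key=lambda m: MOTORS[m] @ direction)  # best-aligned motor
    intensity = min(1.0, np.linalg.norm(error))  # stronger cue when far away
    return motor, intensity

print(directional_cue([0.2, 0.1], [0.6, -0.3]))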
A new robotic system for Search And Rescue (SAR) operations is presented, based on the automatic placement of a wristband on the victim's arm, which may provide identification, beaconing, and remote sensor readings for continuous health monitoring. This paper focuses on the development of the automatic target localization and device placement using an unmanned aerial manipulator. The automatic wrist detection and localization system uses an RGB-D camera and a region-based convolutional neural network (Faster R-CNN). A lightweight parallel delta manipulator with a large workspace has been built, and a new wristband design in the form of a passive detachable gripper is presented, which, upon contact, automatically attaches to the human while disengaging from the manipulator. A new trajectory planning method has been used to minimize the torques caused by the external contact forces, which produce attitude perturbations. Experiments have been carried out to evaluate the machine learning method for detection and localization and to assess the performance of the trajectory planning method. The results show that the VGG-16-based network provides a detection accuracy of 67.99%. Moreover, simulation experiments show that the new trajectories minimize the perturbations to the aerial platform.
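The sketch below shows the general shape of Faster R-CNN inference on a colour frame using torchvision. The pre-trained COCO model and ResNet-50 backbone are stand-ins for illustration; the paper trains on its own wrist dataset with a VGG-16 backbone.

# Hypothetical sketch: object detection with a torchvision Faster R-CNN.
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

image = torch.rand(3, 480, 640)         # stand-in for an RGB-D colour frame
with torch.no_grad():
    detections = model([image])[0]      # dict with boxes, labels, scores

for box, score in zip(detections["boxes"], detections["scores"]):
    if score > 0.5:                     # keep confident detections only
        print(box.tolist(), float(score))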