“…They used a deep model to learn to manipulate glass marbles from raw tactile inputs toward desired target positions. Sun et al. [60] designed a tactile sensor that measures surface texture, force, and thermal information to enhance tactile sensing performance. Similarly, using image features of deformation and machine learning methods, Baimukashev et al. [61] presented an optical tactile sensor that can detect shear force, tension, and pressure.…”
Section: Extrinsic Sensor For Robotic Hands
“…The tactile perception of robots is gradually developing from simple force perception to multimodal perception [60], and from a small contact area to a larger coverage range.…”
Section: Extrinsic Sensor For Robotic Hands
Sensory perception for dexterous robotic hands is an active research area in robotics that has seen recent progress. Effective dexterous manipulation requires robotic hands to accurately feed back their state or perceive the surrounding environment. This article reviews the state of the art of sensory perception for dexterous robotic manipulation. Two types of sensors, namely intrinsic and extrinsic sensors, are introduced according to their function and layout in robotic hands. These sensors provide a robotic hand with rich information, including its posture, the contact information of objects, and the physical information of the environment. A comprehensive analysis of perception methods, covering planning-level, control-level, and learning-level perception, is then presented. The information obtained from sensory perception helps robotic hands make decisions effectively. Previous reviews mainly focus on the design of tactile sensors, whereas we analyze and discuss the relationship among sensing, perception, and dexterous manipulation. Some potential research topics on sensory perception are also summarized and discussed.
“…Considering space limitations, the supporting structure is shared with the palm. Unlike our previous sensors (Fang et al., 2019; Sun et al., 2019), the material and design are optimized to suit a soft hand. First, the mixing ratio is adjusted to decrease the elastomer hardness, since a softer elastomer deforms to a greater degree.…”
Purpose
The purpose of this paper is to present a novel tactile sensor and a visual-tactile recognition framework to reduce the uncertainty of the visual recognition of transparent objects.
Design/methodology/approach
A multitask learning model is used to recognize intuitive appearance attributes, other than texture, in the visual mode. The tactile mode adopts a novel vision-based tactile sensor with a level-regional feature extraction network (LRFE-Net) recognition framework to acquire high-resolution texture information and temperature information. Finally, the attribute results of the two modes are integrated according to integration rules (a fusion sketch follows this abstract).
Findings
The recognition accuracy of attributes, such as style, handle, transparency and temperature, is near 100%, and the texture recognition accuracy is 98.75%. The experimental results demonstrate that the proposed framework with a vision-based tactile sensor can improve attribute recognition.
Originality/value
Transparency and visual differences make the texture of transparent glass hard to recognize. Vision-based tactile sensors can improve the texture recognition effect and acquire additional attributes. Integrating visual and tactile information is beneficial to acquiring complete attribute features.
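To make the integration step above concrete, here is a minimal Python sketch of one plausible decision-level fusion rule. It assumes each mode outputs per-attribute class probabilities and that texture and temperature are delegated to the tactile mode; the attribute names and the delegation rule are illustrative assumptions, not the paper's actual integration rules.

```python
# Minimal sketch of decision-level visual-tactile attribute fusion.
# Assumptions (not from the paper): each mode outputs per-attribute
# class probabilities, and fusion picks, per attribute, the prediction
# from the mode assumed more reliable for that attribute.

from typing import Dict

# Illustrative reliability priors: texture and temperature come from
# the tactile mode; appearance attributes come from the visual mode.
TACTILE_ATTRIBUTES = {"texture", "temperature"}
VISUAL_ATTRIBUTES = {"style", "handle", "transparency"}

def fuse_attributes(visual: Dict[str, Dict[str, float]],
                    tactile: Dict[str, Dict[str, float]]) -> Dict[str, str]:
    """Fuse per-attribute class probabilities from the two modes.

    visual / tactile map attribute name -> {class label: probability}.
    Returns attribute name -> fused class label.
    """
    fused = {}
    for attr in VISUAL_ATTRIBUTES | TACTILE_ATTRIBUTES:
        if attr in TACTILE_ATTRIBUTES and attr in tactile:
            scores = tactile[attr]   # trust touch for texture/temperature
        elif attr in visual:
            scores = visual[attr]    # trust vision for appearance
        else:
            continue                 # attribute not observed by either mode
        fused[attr] = max(scores, key=scores.get)
    return fused

if __name__ == "__main__":
    visual_pred = {"style": {"cup": 0.9, "bottle": 0.1},
                   "transparency": {"transparent": 0.95, "opaque": 0.05}}
    tactile_pred = {"texture": {"ribbed": 0.8, "smooth": 0.2},
                    "temperature": {"cold": 0.7, "warm": 0.3}}
    print(fuse_attributes(visual_pred, tactile_pred))
```

More elaborate rules (e.g., confidence-weighted averaging when both modes observe the same attribute) fit the same interface; the point is that fusion happens at the attribute-decision level rather than at the feature level.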
“…For robotic operation, vision-based sensing technology has been widely applied to various tasks, such as object detection [6], object tracking [7], object grasping [8], and navigation [9]. Additionally, robots with haptic sensors (e.g., accelerometers, gyroscopes, thermochromic-based tactile sensors [10], the GelSight sensor [11], etc.) can perform touch-related tasks, such as texture recognition [12] and grasping objects of different shapes [13] and hardness [14].…”
Existing psychophysical studies have revealed that cross-modal visual-tactile perception is common for humans performing daily activities. However, it is still challenging to build an algorithmic mapping from one modality space to another, namely cross-modal visual-tactile data translation/generation, which is potentially important for robotic operation. In this paper, we propose a deep-learning-based approach for cross-modal visual-tactile data generation by leveraging the framework of generative adversarial networks (GANs). Our approach takes the visual image of a material surface as the visual data, and the accelerometer signal induced by a pen-sliding movement on the surface as the tactile data. We adopt the conditional-GAN (cGAN) structure together with a residue-fusion (RF) module, and train the model with additional feature-matching (FM) and perceptual losses to achieve cross-modal data generation. The experimental results show that including the RF module and the FM and perceptual losses significantly improves cross-modal data generation performance, in terms of both the classification accuracy on the generated data and the visual similarity between the ground-truth and generated data.
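As a rough illustration of the training objective described above, the following PyTorch sketch combines a conditional adversarial loss with feature-matching and perceptual terms, using a residual skip connection as a stand-in for the residue-fusion module. All module architectures, tensor shapes, and loss weights are assumptions for illustration; the paper's actual RF design and perceptual network (typically a pretrained feature extractor) may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidueFusionGenerator(nn.Module):
    """Toy image-to-tactile generator. The decoder adds the matching-scale
    encoder feature back in (a residual skip), standing in for the RF module."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU())
        self.up1 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU())
        self.up2 = nn.ConvTranspose2d(32, 1, 4, 2, 1)

    def forward(self, img):
        e1 = self.enc1(img)              # 32 x H/2 x W/2
        e2 = self.enc2(e1)               # 64 x H/4 x W/4
        d1 = self.up1(e2) + e1           # "residue fusion": residual feature add
        return torch.tanh(self.up2(d1))  # 1-channel tactile map in [-1, 1]

class PatchDiscriminator(nn.Module):
    """Conditional discriminator over (image, tactile) pairs; also exposes
    intermediate features for the feature-matching (FM) loss."""
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(4, 32, 4, 2, 1), nn.LeakyReLU(0.2)),
            nn.Sequential(nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2)),
            nn.Conv2d(64, 1, 4, 1, 1),
        ])

    def forward(self, img, tactile):
        h, feats = torch.cat([img, tactile], dim=1), []
        for block in self.blocks:
            h = block(h)
            feats.append(h)
        return feats[-1], feats[:-1]     # patch logits, intermediate features

# Frozen random conv stack as a stand-in perceptual feature extractor
# (an assumption; a pretrained network is the usual choice).
perc_net = nn.Sequential(nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
                         nn.Conv2d(16, 32, 3, 2, 1)).eval()
for p in perc_net.parameters():
    p.requires_grad_(False)

def generator_loss(G, D, img, real_tac, lam_fm=10.0, lam_p=10.0):
    """Adversarial + feature-matching + perceptual objective for G.
    Loss weights lam_fm / lam_p are illustrative, not the paper's values."""
    fake_tac = G(img)
    logits_fake, feats_fake = D(img, fake_tac)
    _, feats_real = D(img, real_tac)
    adv = F.binary_cross_entropy_with_logits(
        logits_fake, torch.ones_like(logits_fake))
    fm = sum(F.l1_loss(f, r.detach()) for f, r in zip(feats_fake, feats_real))
    perc = F.l1_loss(perc_net(fake_tac), perc_net(real_tac))
    return adv + lam_fm * fm + lam_p * perc

if __name__ == "__main__":
    G, D = ResidueFusionGenerator(), PatchDiscriminator()
    img = torch.randn(2, 3, 64, 64)      # visual surface image
    tac = torch.randn(2, 1, 64, 64)      # tactile signal rendered as a 2-D map
    generator_loss(G, D, img, tac).backward()  # G step; D is trained alternately
```

In a full training loop, the discriminator would be updated with the standard real/fake binary cross-entropy on its patch logits, alternating with generator steps; the sketch shows only the generator objective that combines the three loss terms named in the abstract.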