Touch sensing can help robots understand their surrounding environment and, in particular, the objects they interact with. To this end, roboticists have, over the last few decades, developed several tactile sensing solutions, extensively reported in the literature. Research into interpreting the conveyed tactile information has also attracted increasing attention in recent years. However, a comprehensive study on this topic is yet to be reported. In an effort to collect and summarize the major scientific achievements in the area, this survey extensively reviews current trends in robot tactile perception of object properties. Available tactile sensing technologies are briefly presented before an extensive review of tactile recognition of object properties. The properties targeted by this review are shape, surface material, and object pose. The role of touch sensing in combination with other sensing sources is also discussed. Finally, open issues are identified and future directions for applying tactile sensing to different tasks are suggested.
Object surface properties are among the most important pieces of information a robot requires to interact effectively with an unknown environment. This paper presents a novel haptic exploration strategy for recognizing the physical properties of unknown object surfaces using an intelligent finger. The developed finger identifies the contact location, the normal and tangential forces, and the vibrations generated by the contact in real time. In the proposed strategy, the finger gently slides along the surface with a short stroke while increasing and then decreasing the sliding velocity. By applying a dynamic friction model to describe this contact, rich and accurate physical surface properties can be identified within a single stroke. This allows different surface materials to be easily distinguished even when they have very similar textures. Several supervised learning algorithms have been applied and compared for surface recognition based on the obtained surface properties. The naïve Bayes classifier was found to outperform the radial basis function network and the k-NN method, achieving an overall classification accuracy of 88.5% in distinguishing twelve surface materials.
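For illustration, the final classification stage could be sketched as below. The feature set (friction-model coefficients plus a vibration statistic), the data shapes, and the use of scikit-learn's GaussianNB are assumptions made for this sketch, not the authors' released code.

```python
# Minimal sketch of the surface-classification stage, assuming that
# friction-model parameters have already been extracted from each
# sliding stroke. Features and data below are illustrative placeholders.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per stroke, columns =
# [coulomb_coeff, viscous_coeff, stiction_coeff, vibration_rms]
X = np.random.rand(120, 4)          # placeholder for measured features
y = np.repeat(np.arange(12), 10)    # 12 surface materials, 10 strokes each

clf = GaussianNB()                  # naive Bayes, as in the paper
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```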
Robotic teleoperation in cluttered environments is attracting increasing attention for its potential in hazardous scenarios, disaster response, and telemaintenance. Although haptic feedback has been proven effective in such applications, commercially available grounded haptic interfaces still show significant limitations in terms of workspace, safety, transparency, and encumbrance. For this reason, we present a novel robotic teleoperation system with wearable haptic feedback for telemanipulation in cluttered environments. The slave system is composed of a soft robotic hand attached to a 6-axis force sensor, which is fixed to a 6-degree-of-freedom robotic arm. The master system is composed of two wearable vibrotactile armbands and a Leap Motion controller. The armbands are worn on the upper arm and forearm, and convey information about collisions on the robotic arm and hand, respectively. The position of the manipulator and the grasping configuration of the robotic hand are controlled by the user's hand pose as tracked by the Leap Motion. To validate our approach, we carried out a human-subject telemanipulation experiment in a cluttered scenario. Twelve participants were asked to teleoperate the robot to grasp an object hidden among debris of various shapes and stiffnesses. Haptic feedback provided by our wearable devices significantly improved performance in the considered telemanipulation tasks. All subjects but one preferred the conditions with wearable haptic feedback.
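As a rough illustration of how collision information might be rendered on the armbands, the sketch below maps a measured contact force to a normalized vibration command. The dead-band and saturation thresholds and the function force_to_vibration are illustrative assumptions, not the system's documented interface.

```python
# Minimal sketch of one plausible feedback mapping, assuming collision
# forces measured by the wrist force sensor are rendered as vibration
# amplitude on an armband. Thresholds and scaling are illustrative.
import numpy as np

F_MIN, F_MAX = 0.5, 15.0   # assumed dead-band and saturation forces [N]

def force_to_vibration(force_xyz):
    """Map a 3-axis contact force [N] to a normalized motor command in [0, 1]."""
    magnitude = np.linalg.norm(force_xyz)
    if magnitude < F_MIN:  # ignore sensor noise below the dead-band
        return 0.0
    return min((magnitude - F_MIN) / (F_MAX - F_MIN), 1.0)

print(force_to_vibration([3.0, -1.0, 0.5]))  # approximately 0.19
```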
We present a wearable skin stretch device for the forearm. It is composed of four cylindrical end effectors, evenly distributed around the user's forearm, which can generate independent skin stretch stimuli at the palmar, dorsal, ulnar, and radial sides of the arm. When the four end effectors rotate in the same direction, the device provides cutaneous stimuli indicating a desired pronation/supination of the forearm. When two opposite end effectors rotate in different directions, it provides cutaneous stimuli indicating a desired translation of the forearm. To evaluate the effectiveness of the device in providing navigation information, we carried out two haptic navigation experiments. In the first, subjects were asked to translate and rotate the forearm toward a target position and orientation, respectively. In the second, subjects were asked to control a 6-DoF robotic manipulator to grasp and lift a target object. Haptic feedback provided by our wearable device improved performance in both experiments compared with providing no haptic feedback. Moreover, it performed similarly to sensory substitution via visual feedback, without overloading the visual channel.
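The stimulus logic described above can be illustrated with a small sketch. The end-effector names and the {-1, 0, +1} rotation commands are assumptions made for illustration, not the device's actual control interface.

```python
# Minimal sketch of the cueing logic: four end effectors (palmar, dorsal,
# ulnar, radial), each taking a rotation command in {-1, 0, +1}.

def rotation_cue(direction):
    """All four end effectors rotate together to cue pronation/supination."""
    s = 1 if direction == "pronation" else -1
    return {"palmar": s, "dorsal": s, "ulnar": s, "radial": s}

def translation_cue(axis, sign):
    """Two opposite end effectors rotate in opposite directions to cue a
    translation along the given axis; the other pair stays idle."""
    if axis == "palmar-dorsal":
        return {"palmar": sign, "dorsal": -sign, "ulnar": 0, "radial": 0}
    return {"palmar": 0, "dorsal": 0, "ulnar": sign, "radial": -sign}

print(rotation_cue("pronation"))
print(translation_cue("ulnar-radial", +1))
```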
Robot grasping and manipulation rely mainly on two types of sensory data: vision and tactile sensing. Localisation and recognition of the object are typically done through vision alone, while tactile sensors are commonly used for grasp control. Vision performs reliably in uncluttered environments, but its performance may deteriorate when the object is occluded, which is often the case during a manipulation task, when the object is in hand and the robot fingers stand between the camera and the object. This paper presents a method that uses the robot's sense of touch to refine the knowledge of a manipulated object's pose from an initial estimate provided by vision. The objective is to find a transformation of the object's location that is coherent with the current proprioceptive and tactile sensory data. The method was tested with different object geometries, and applications are proposed in which it can improve the overall performance of a robotic system. Experimental results show an improvement of around 70% in the estimate of the object's location compared to using vision alone.
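The underlying idea, adjusting the vision-based pose estimate until it is consistent with the tactile contacts, can be illustrated with a toy optimization. The spherical object model, the contact data, and the SciPy optimizer below are assumptions for this sketch, not the paper's actual formulation.

```python
# Toy sketch of tactile pose refinement: find a rigid offset that makes
# fingertip contact points lie on the object's surface model. A sphere
# stands in for the object; real systems would use a full shape model.
import numpy as np
from scipy.optimize import minimize

RADIUS = 0.05  # assumed object model: a 5 cm sphere at the estimated pose

def surface_error(offset, contacts, centre):
    """Sum of squared distances from contact points to the shifted surface."""
    d = np.linalg.norm(contacts - (centre + offset), axis=1) - RADIUS
    return np.sum(d ** 2)

centre_vision = np.array([0.40, 0.00, 0.10])     # initial estimate from vision
contacts = np.array([[0.46, 0.01, 0.11],         # fingertip contacts from
                     [0.37, 0.04, 0.09],         # touch + proprioception
                     [0.41, -0.03, 0.14]])

res = minimize(surface_error, x0=np.zeros(3), args=(contacts, centre_vision))
print("refined object centre:", centre_vision + res.x)
```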