Abstract-This paper explores the connection between sensor-based perception and exploration in the context of haptic object identification. The proposed approach combines (i) object recognition from tactile appearance with (ii) purposeful haptic exploration of unknown objects to extract appearance information. The recognition component brings to bear computer vision techniques by viewing tactile sensor readings as images. We present a bag-of-features framework that uses several tactile image descriptors, some adapted from the vision domain, others novel, to estimate a probability distribution over object identity as an unknown object is explored. Haptic exploration is treated as a search problem in a continuous space to take advantage of sampling-based motion planning to explore the unknown object and construct its tactile appearance. Simulation experiments of a robot arm equipped with a haptic sensor at the end-effector provide promising validation, indicating high accuracy in identifying complex shapes from tactile information gathered during exploration. The proposed approach is also validated by using readings from actual tactile sensors to recognize real objects.
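To make the bag-of-features pipeline concrete, the sketch below shows the generic idea of quantizing local descriptors against a learned codebook and summarizing an image as a normalized codeword histogram. This is an illustrative toy, not the paper's implementation: the descriptors are placeholder vectors, and the naive k-means codebook stands in for whatever clustering the authors actually used.

```python
import numpy as np

def build_codebook(descriptors, k, iters=20, seed=0):
    """Learn a k-word codebook via naive k-means over local descriptors."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), size=k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest center, then recompute means.
        labels = np.argmin(((descriptors[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def bof_histogram(descriptors, centers):
    """Quantize descriptors and return a normalized bag-of-features histogram."""
    labels = np.argmin(((descriptors[:, None] - centers) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

Such histograms can then be compared across tactile images (e.g. with a chi-squared distance or a probabilistic classifier) to maintain a distribution over object identity as exploration proceeds.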
Abstract-We describe a general methodology for tracking 3-dimensional objects in monocular and stereo video that makes use of GPU-accelerated filtering and rendering in combination with machine learning techniques. The method operates on targets consisting of kinematic chains with known geometry. The tracked target is divided into one or more areas of consistent appearance. The appearance of each area is represented by a classifier trained to assign a class-conditional probability to image feature vectors. A search is then performed on the configuration space of the target to find the maximum likelihood configuration. In the search, candidate hypotheses are evaluated by rendering a 3D model of the target object and measuring its consistency with the class probability map. The method is demonstrated for tool tracking on videos from two surgical domains, as well as in a human hand-tracking task.
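The core scoring step in this kind of render-and-compare search can be sketched as follows: a candidate configuration is rendered to a silhouette, and its consistency with the per-pixel class probability map is measured as a log-likelihood. This is a simplified stand-in for the paper's GPU-accelerated evaluation, with a binary foreground mask assumed in place of the full rendered model.

```python
import numpy as np

def hypothesis_log_likelihood(prob_map, rendered_mask, eps=1e-9):
    """Score a candidate target configuration.

    prob_map      -- per-pixel probability of belonging to the target class
    rendered_mask -- boolean silhouette of the 3D model in this configuration

    Returns the sum of per-pixel log probabilities, treating the rendered
    silhouette as the predicted foreground/background labeling.
    """
    p = np.where(rendered_mask, prob_map, 1.0 - prob_map)
    return np.log(p + eps).sum()
```

A search over the configuration space then keeps the hypothesis whose rendering maximizes this score.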
Humans can localize lumps in soft tissue using the distributed tactile feedback and processing afforded by the fingers and brain. This task becomes extremely difficult when the fingers are not in direct contact with the tissue, such as in laparoscopic or robot-assisted procedures. Tactile sensors have been proposed to characterize and detect lumps in robot-assisted palpation. In this work, we compare the performance of a capacitive tactile sensor with that of the human finger. We evaluate the response of the sensor as it pertains to robot-assisted palpation and compare the sensor performance to that of human subjects performing an equivalent task on the same set of artificial tissue models. Furthermore, we investigate the effects of various tissue parameters (lump size, lump depth, and surrounding tissue stiffness) on the performance of both the human finger and the tactile sensor. Using signal detection theory to determine lump detection thresholds, we find that the tactile sensor outperforms the human finger in a palpation task.
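The standard signal-detection-theory sensitivity measure used in comparisons like this is d', the difference of z-transformed hit and false-alarm rates. The snippet below is a generic textbook computation, not the paper's analysis code; the example rates are hypothetical.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' = z(H) - z(F) from signal detection theory.

    Rates must lie strictly between 0 and 1 (in practice, extreme rates
    are corrected, e.g. by replacing 0 and 1 with 1/(2N) and 1 - 1/(2N)).
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(false_alarm_rate)
```

A higher d' for the tactile sensor than for the human subjects, at matched tissue parameters, is what "outperforms" means in this framework.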
Abstract-We present a method for performing object recognition using multiple images acquired from a tactile sensor. The method relies on using the tactile sensor as an imaging device, and builds an object representation based on mosaics of tactile measurements. We then describe an algorithm that is able to recognize an object using a small number of tactile sensor readings. Our approach makes extensive use of sequential state estimation techniques from the mobile robotics literature, whereby we view the object recognition problem as one of estimating a consistent location within a set of object maps. We examine and test approaches based on both traditional particle filtering and histogram filtering. We demonstrate both the mapping and recognition/localization techniques on a set of raised letter shapes using real tactile sensor data.
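The histogram-filter variant of this localization view can be sketched as a discrete Bayes update over candidate map locations: each new tactile reading supplies a likelihood per location, and the belief is reweighted and renormalized. This is a minimal generic sketch, not the authors' implementation, and it omits the motion-prediction step between readings.

```python
import numpy as np

def histogram_filter_update(belief, likelihoods):
    """One Bayes measurement update for a discrete belief over map locations.

    belief      -- prior probability per candidate location (sums to 1)
    likelihoods -- p(observed tactile reading | location) for each location
    """
    posterior = belief * likelihoods
    return posterior / posterior.sum()
```

Running such a filter in parallel over each object's map, the object whose filter retains the most probability mass (a consistent location explaining all readings) is the recognition result.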