2019 International Conference on Robotics and Automation (ICRA) 2019
DOI: 10.1109/icra.2019.8794285
Deep Visuo-Tactile Learning: Estimation of Tactile Properties from Images

Abstract: Estimation of tactile properties from vision, such as slipperiness or roughness, is important to effectively interact with the environment. These tactile properties help us decide which actions we should choose and how to perform them. E.g., we can drive slower if we see that we have bad traction or grasp tighter if an item looks slippery. We believe that this ability also helps robots to enhance their understanding of the environment, and thus enables them to tailor their actions to the situation at hand. We …
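The abstract describes estimating tactile properties (e.g., slipperiness, roughness) from images with a deep network. As a loose, untrained illustration of the image → feature → property pipeline — the filter shapes, the global pooling, and the two-score head are all hypothetical and not the paper's actual architecture — a minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d_valid(img, kernel):
    """Naive valid 2-D cross-correlation (illustrative, not optimized)."""
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical grayscale surface image and random (untrained) filters.
img = rng.standard_normal((32, 32))
filters = rng.standard_normal((8, 5, 5))

# One scalar per filter via global max-pooling over the response map.
feats = np.array([conv2d_valid(img, k).max() for k in filters])

# Linear head mapping features to two tactile scores,
# squashed to (0, 1) with a sigmoid: [slipperiness, roughness].
W_head = rng.standard_normal((2, 8)) * 0.1
pred = 1 / (1 + np.exp(-(W_head @ feats)))
```

In a trained system the filters and head weights would be learned from paired image/tactile-sensor data rather than drawn at random.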

Cited by 52 publications (29 citation statements). References 29 publications.
“…In ref. [113], estimation of tactile properties from visual perception is modelled; the data are collected using a webcam and a uSkin tactile sensor mounted on the Sawyer robot's end-effector. The purpose of this modality is to make the robot more aware of the contact conditions of the object while grasping.…”
Section: Multi-sensor Control
confidence: 99%
“…Three major challenges in creating a clear elastomer are the long curing time of about six to seven hours [48], quality consistency [49], and the formation of air bubbles within the gel, which requires a vacuum pump for degassing [48]–[51]. Beyond these fabrication challenges, according to [52], the GelSight has an impressive spatial resolution (30–100 microns), but the elastomer is easily damaged during grasping and thus requires frequent … [42], [45].…”
[Figure caption fragment: (h) GelSlim (2018): a compact design with a slanted internal mirror and a fabric-covered gel skin; white LEDs only.]
Section: A Retrographic Sensor
confidence: 99%
“…The maximum covariance analysis was used to pair the learned features to learn the cross‐modal visual–tactile shared representation. Takahashi and Tan [128] and Gandarias et al [129] recently studied how to obtain tactile features from optical images based on self‐encoding networks and CNNs and how to use them to enhance the tactile perception capability. Falco et al [130] established an active exploration framework to realize cross‐modal visual–tactile object recognition.…”
Section: Embodied Tactile Learning
confidence: 99%
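The statement above mentions maximum covariance analysis (MCA) for pairing learned visual and tactile features. MCA takes the SVD of the cross-covariance matrix between the two feature sets; the leading singular vector pairs give the most strongly co-varying visual/tactile directions. A minimal numpy sketch on toy paired features — the data, dimensions, and variable names are illustrative, not from the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired features: 200 samples, 64-d visual, 16-d tactile.
# The tactile features are built to correlate with part of the visual ones.
V = rng.standard_normal((200, 64))
T = 0.5 * V[:, :16] + 0.1 * rng.standard_normal((200, 16))

# Center both modalities, then form the cross-covariance matrix.
Vc = V - V.mean(axis=0)
Tc = T - T.mean(axis=0)
C = Vc.T @ Tc / (len(V) - 1)          # shape (64, 16)

# SVD of the cross-covariance: columns of U and rows of Wt are the
# paired directions of maximal covariance, ordered by singular value.
U, s, Wt = np.linalg.svd(C, full_matrices=False)

k = 4                                  # keep the top-k shared modes
shared_v = Vc @ U[:, :k]               # visual projection onto shared modes
shared_t = Tc @ Wt[:k].T               # tactile projection onto shared modes
```

On this toy data the leading projected pair `shared_v[:, 0]` and `shared_t[:, 0]` should be strongly correlated, which is the shared visual–tactile representation the cited survey refers to.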