Robotics: Science and Systems XIX 2023
DOI: 10.15607/rss.2023.xix.018
Self-Supervised Visuo-Tactile Pretraining to Locate and Follow Garment Features

Justin Kerr,
Huang Huang,
Albert Wilcox
et al.

Abstract: Humans make extensive use of vision and touch as complementary senses, with vision providing global information about the scene and touch measuring local information during manipulation without suffering from occlusions. While prior work demonstrates the efficacy of tactile sensing for precise manipulation of deformables, it typically relies on supervised, human-labeled datasets. We propose Self-Supervised Visuo-Tactile Pretraining (SSVTP), a framework for learning multi-task visuo-tactile representations in a…
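To make the pretraining idea concrete, below is a minimal, hypothetical sketch of contrastive visuo-tactile pretraining of the kind the abstract describes: paired camera crops and tactile readings collected by the robot itself are embedded into a shared space and aligned with a symmetric InfoNCE loss. The encoder architectures, loss choice, and all names (`PairedEncoder`, `info_nce`) are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PairedEncoder(nn.Module):
    """Embeds a visual crop and a tactile reading into a shared latent space."""
    def __init__(self, embed_dim=128):
        super().__init__()
        # Small CNNs stand in for whatever backbones the paper actually uses.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, embed_dim))
        self.tactile = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, embed_dim))

    def forward(self, img, touch):
        # L2-normalized embeddings so cosine similarity drives the loss.
        zi = F.normalize(self.vision(img), dim=-1)
        zt = F.normalize(self.tactile(touch), dim=-1)
        return zi, zt

def info_nce(zi, zt, temperature=0.07):
    """Symmetric InfoNCE: matching visuo-tactile pairs are positives,
    all other pairings within the batch serve as negatives."""
    logits = zi @ zt.t() / temperature
    targets = torch.arange(zi.size(0), device=zi.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage: (camera crop, tactile image) pairs are gathered automatically when
# the robot touches the scene, so no human labels are required.
model = PairedEncoder()
imgs = torch.randn(8, 3, 64, 64)     # visual crops at touch locations
touches = torch.randn(8, 3, 64, 64)  # corresponding tactile sensor images
zi, zt = model(imgs, touches)
loss = info_nce(zi, zt)
loss.backward()
```

The self-supervision comes entirely from the pairing: because each tactile reading is collected at a known location in the camera frame, the matching image crop is free, which is what removes the need for the human-labeled datasets mentioned in the abstract.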

Cited by 6 publications
References 39 publications