2021
DOI: 10.48550/arxiv.2105.11283
Preprint
Coarse-to-Fine for Sim-to-Real: Sub-Millimetre Precision Across Wide Task Spaces

Abstract: When training control policies for robot manipulation via deep learning, sim-to-real transfer can help satisfy the large data requirements. In this paper, we study the problem of zero-shot sim-to-real when the task requires both highly precise control, with sub-millimetre error tolerance, and full workspace generalisation. Our framework involves a coarse-to-fine controller, where trajectories initially begin with classical motion planning based on pose estimation, and transition to an end-to-end controller whic…
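The coarse-to-fine scheme described in the abstract can be sketched as a simple phase switch: a classical planner drives the arm toward a pose estimate of the target, and a learned end-to-end policy takes over for the final precise phase. This is a minimal illustrative sketch only; the class, function names, and the switching threshold are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a coarse-to-fine control loop: classical motion
# planning toward an estimated target pose, then a learned policy for the
# final high-precision phase. All names and thresholds are illustrative.

from dataclasses import dataclass


@dataclass
class Pose:
    x: float
    y: float
    z: float


def distance(a: Pose, b: Pose) -> float:
    """Euclidean distance between two end-effector positions (metres)."""
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2) ** 0.5


def coarse_to_fine_control(current: Pose, target_estimate: Pose,
                           fine_radius: float = 0.05) -> str:
    """Return which controller should act given the current pose.

    Coarse phase: follow a classically planned trajectory based on pose
    estimation. Fine phase: hand over to an end-to-end learned controller
    once within `fine_radius` metres of the estimated target.
    """
    if distance(current, target_estimate) > fine_radius:
        return "coarse: follow planned trajectory from pose estimation"
    return "fine: query end-to-end learned policy for precise control"
```

Far from the target the planner acts; near it, the learned policy does, so the learned component only has to generalise over a small local region rather than the full workspace.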

Cited by 1 publication (1 citation statement)
References 24 publications (42 reference statements)
“…A wide range of works have focused on algorithmic development for end-to-end learning of vision-based object manipulation skills (Agrawal et al., 2016; Kalashnikov et al., 2018; Srinivas et al., 2018; Zhu et al., 2018; Jayaraman et al., 2018; Rafailov et al., 2021). Some works on learned visuomotor control use eye-in-hand cameras for tasks such as grasping (Song et al., 2020) and insertion (Puang et al., 2020; Luo et al., 2021; Valassakis et al., 2021), and others which pre-date end-to-end visuomotor learning use both eye-in-hand and third-person cameras for visual servoing (Flandin et al., 2000; Lippiello et al., 2005). Very few works consider the design of camera placements (Zaky et al., 2020) or conduct any controlled comparisons on different combinations of visual perspectives (Zhan et al., 2020; Mandlekar et al., 2021; Wu et al., 2021).…”
Section: Related Work
confidence: 99%