2023
DOI: 10.1126/scirobotics.adc9244
Visual dexterity: In-hand reorientation of novel and complex object shapes

Tao Chen,
Megha Tippur,
Siyang Wu
et al.

Abstract: In-hand object reorientation is necessary for performing many dexterous manipulation tasks, such as tool use in less structured environments, which remain beyond the reach of current robots. Prior works built reorientation systems assuming one or many of the following conditions: reorienting only specific objects with simple shapes, limited range of reorientation, slow or quasi-static manipulation, simulation-only results, the need for specialized and costly sensor suites, and other constraints that make the s…

Cited by 9 publications (8 citation statements); references 53 publications.
“…RL algorithms have been successfully demonstrated for in-hand manipulation [9][10][11][21][22][23] tasks. One key difference between these and our work is that in in-hand manipulation, the object typically starts in close proximity to the robot, whereas in our domain, we must solve the additional problem of reaching and making contact with the object.…”
Section: B. Reinforcement Learning for Contact-Rich Tasks (mentioning)
confidence: 99%
“…Like the recent approaches for contact-rich tasks such as in-hand manipulation [9][10][11] and locomotion [12,13], we use a simulator to train a policy and then transfer it to the real world. In this scheme, there are two primary challenges that we need to address for non-prehensile manipulation: exploration and the sim-to-real gap.…”
Section: Introduction (mentioning)
confidence: 99%
“…Secondly, imitation can be used to transfer knowledge between policies, also known as policy distillation (PD) [18]. A specific formulation of this framework has recently proven useful in the context of robot learning [1], [2], wherein a teacher policy is trained from privileged, low-dimensional observations and then distilled into a visual student policy. The distillation from teacher to student is performed using DAgger [19].…”
Section: Integrating Imitation with Reinforcement Learning (mentioning)
confidence: 99%
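The teacher-student distillation described in the statement above (a privileged teacher trained on low-dimensional state, distilled into a visual student via DAgger) can be sketched in a toy form. Everything below is a hypothetical stand-in, not the cited papers' actual setup: the teacher and student are linear policies, a random linear "encoder" plays the role of a camera, and a least-squares refit stands in for supervised gradient steps. The core DAgger idea is visible nonetheless: roll out the *student*, label the visited observations with the *teacher's* actions, aggregate the dataset, and refit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: the privileged teacher sees the low-dimensional
# state directly; the student sees only a noisy high-dimensional "image"
# produced by a fixed random linear encoder.
STATE_DIM, OBS_DIM, ACT_DIM = 4, 32, 2
ENCODER = rng.normal(size=(OBS_DIM, STATE_DIM))    # state -> "visual" obs
TEACHER_W = rng.normal(size=(ACT_DIM, STATE_DIM))  # expert linear policy


def teacher_action(state):
    return TEACHER_W @ state


def observe(state):
    return ENCODER @ state + 0.01 * rng.normal(size=OBS_DIM)


def rollout(student_w, horizon=20):
    """Roll out the STUDENT, but record the TEACHER's action at every
    visited observation -- these pairs are the DAgger labels."""
    state = rng.normal(size=STATE_DIM)
    obs_buf, act_buf = [], []
    for _ in range(horizon):
        obs = observe(state)
        obs_buf.append(obs)
        act_buf.append(teacher_action(state))   # label comes from the teacher
        action = student_w @ obs                # but the student drives the rollout
        state = 0.9 * state + 0.1 * np.tanh(action).sum() * np.ones(STATE_DIM)
    return np.array(obs_buf), np.array(act_buf)


# DAgger loop: aggregate data across successive student rollouts, then
# refit the student by least squares (a stand-in for supervised training).
student_w = np.zeros((ACT_DIM, OBS_DIM))
all_obs = np.empty((0, OBS_DIM))
all_act = np.empty((0, ACT_DIM))
for _ in range(5):
    obs, act = rollout(student_w)
    all_obs = np.vstack([all_obs, obs])
    all_act = np.vstack([all_act, act])
    student_w = np.linalg.lstsq(all_obs, all_act, rcond=None)[0].T

# On a fresh state, the distilled student should now imitate the teacher.
state = rng.normal(size=STATE_DIM)
err = np.linalg.norm(student_w @ observe(state) - teacher_action(state))
print(f"imitation error: {err:.3f}")
```

The key design point, echoed in the quoted passage, is that the labels are always queried from the privileged teacher while the visitation distribution comes from the student's own rollouts, which is what distinguishes DAgger from plain behavior cloning on teacher trajectories.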
“…observations pushes computational and sample complexities beyond currently feasible limits. To train visual policies more efficiently, policy distillation (PD) has recently been used to transfer knowledge from policies trained on low-dimensional states to high-dimensional visual observations [1], [2]. However, differences between the characteristics of these observation spaces and resulting optimal behaviors have not yet been considered.…”
Section: Introduction (mentioning)
confidence: 99%