2022
DOI: 10.48550/arxiv.2204.02320
Preprint

Learning Generalizable Dexterous Manipulation from Human Grasp Affordance

Abstract (excerpt): …geometric representation learning objective using our demonstrations. By experimenting with relocating diverse objects in simulation, we show that our approach outperforms baselines by a large margin when manipulating novel objects. We also ablate the importance of 3D object representation learning for manipulation. We include videos, code, and additional information on the project website: https://kristery.github.io/ILAD/.

Cited by 3 publications (12 citation statements). References 40 publications.
“…For dexterous object manipulation, using natural hand poses has a significant impact on the success of natural object grasping. Several previous studies have conducted dexterous object manipulation with hand pose estimation and DRL [32,41]. Yueh-Hua Wu et al. [32] used GraspCVAE and DRL to manipulate objects with an ADROIT robot hand. GraspCVAE is based on a variational autoencoder that estimates the natural hand pose from the object affordance.…”
Section: Discussion
confidence: 99%
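The pose-from-affordance idea in the statement above can be illustrated with a minimal conditional-VAE decoder pass. This is a toy sketch with made-up layer sizes and weights, not the actual GraspCVAE architecture: a sampled latent grasp code is concatenated with an object affordance feature and decoded into a hand-pose vector.

```python
import numpy as np

def cvae_decode(z, affordance_feat, W1, b1, W2, b2):
    """Decode a latent grasp code, conditioned on an object affordance
    feature, into a hand-pose vector (toy two-layer decoder)."""
    h = np.tanh(np.concatenate([z, affordance_feat]) @ W1 + b1)
    return h @ W2 + b2  # predicted hand joint angles

rng = np.random.default_rng(0)
latent_dim, cond_dim, hidden, pose_dim = 8, 16, 32, 30  # e.g. a 30-DoF hand
W1 = rng.normal(size=(latent_dim + cond_dim, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, pose_dim))
b2 = np.zeros(pose_dim)

z = rng.normal(size=latent_dim)    # sampled latent grasp code
aff = rng.normal(size=cond_dim)    # object affordance embedding
pose = cvae_decode(z, aff, W1, b1, W2, b2)
print(pose.shape)  # (30,)
```

Conditioning the decoder on the affordance feature is what lets one model produce object-appropriate grasps rather than a single average pose.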
“…DAPG [1] exploits human demonstrations collected in virtual reality to improve the sample efficiency of DRL in a high-dimensional 30-DoF hand space. Following the same human-demonstration strategy, ILAD [2], GRAFF [3], and DexVIP [4] extend the demonstrations to larger-scale trajectories or grasping poses collected from human interaction with real everyday objects. GRAFF [3] leverages contacts from hand-object interactions so that agents learn to approach objects more effectively.…”
Section: Introduction
confidence: 99%
“…DexVIP [4] learns human pose priors from videos and injects these priors into DRL through auxiliary reward functions that favor robot poses similar to the human poses in the videos. ILAD [2] trains a generator to synthesize grasping trajectories from large-scale demonstrations instead of using human demonstrations directly.…”
Section: Introduction
confidence: 99%
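The auxiliary-reward idea described in the statement above can be sketched as a task reward plus a bonus that grows as the robot's hand pose approaches a human pose prior. The weighting and distance function here are illustrative choices, not DexVIP's actual formulation:

```python
import numpy as np

def shaped_reward(task_reward, robot_pose, human_pose_prior, w=0.1):
    """Task reward plus an auxiliary bonus favoring robot poses
    close to a human pose prior (toy reward-shaping sketch)."""
    pose_error = np.linalg.norm(robot_pose - human_pose_prior)
    return task_reward + w * np.exp(-pose_error)

prior = np.zeros(30)                            # e.g. a 30-DoF human grasp pose
near = shaped_reward(1.0, np.full(30, 0.01), prior)
far = shaped_reward(1.0, np.full(30, 1.0), prior)
print(near > far)  # True: closer poses earn a larger bonus
```

Because the bonus is dense in the pose error, it gives the policy a learning signal even on timesteps where the sparse task reward is zero.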