2022
DOI: 10.48550/arxiv.2207.05053
Preprint

Learning Continuous Grasping Function with a Dexterous Hand from Human Demonstrations

Cited by 1 publication (1 citation statement)
References: 0 publications
“…However, less attention has been paid to hand-object interaction representations for high-level generalization with dexterous hands. For point cloud inputs, features such as position, distance, and/or a global feature vector extracted by PointNet [5] describe the relationship between hands and objects [6], [7], [8]; for RGB image inputs, a global feature vector extracted from a CNN such as ResNet [9] is used to capture the relation [10]. Though able to capture the geometry of objects, global features typically have difficulty generalizing to new data.…”
Section: Introduction
Confidence: 99%
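
As a rough illustration of the global-feature pipeline the excerpt describes, the sketch below strips the classification head from a torchvision ResNet-18 so that its pooled 512-dimensional output serves as a single global feature vector for an RGB image. This is a minimal sketch under assumptions: the backbone depth, pretrained weights, and input size are illustrative choices, not details taken from the cited papers.

```python
import torch
import torchvision.models as models

# Load an ImageNet-pretrained ResNet-18 backbone (illustrative choice;
# the cited work may use a different CNN or training setup).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
# Replace the classifier head with an identity so the forward pass
# returns the globally pooled 512-d feature vector instead of logits.
backbone.fc = torch.nn.Identity()
backbone.eval()

# Placeholder input: one RGB image at an assumed 224x224 resolution.
image = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    global_feature = backbone(image)

print(global_feature.shape)  # torch.Size([1, 512])
```

Because the entire image is collapsed into one pooled vector, local hand-object contact geometry is averaged away, which is consistent with the excerpt's point that such global features tend to generalize poorly to new objects.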