2021
DOI: 10.1109/lra.2021.3062560
An Affordance Keypoint Detection Network for Robot Manipulation

Cited by 30 publications (26 citation statements)
References: 35 publications
“…Likewise, identifying a means to translate the 2D grasp representation to a full 3D grasp pose would remove the need for a top-down grasp and permit richer manipulation from more varied viewpoints. Recent work on affordances and keypoints (Xu et al, 2021) indicates that keypoints should work well for recovering SE(3) grasp frames. Lastly, introducing grasp quality neural networks (Mahler et al, 2017; Morrison et al, 2019) would further resolve the what question for cluttered scenarios.…”
Section: Discussion (mentioning)
confidence: 99%
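The suggestion above, that keypoints should suffice to recover SE(3) grasp frames, can be made concrete with a small sketch. The snippet below is a hypothetical illustration and not the method of Xu et al. (2021): it assumes two predicted keypoints in pixel coordinates (a grasp center and a second point along the grasp axis), a depth map, and camera intrinsics K, and assembles a grasp pose via back-projection and Gram-Schmidt orthonormalization.

```python
# Minimal sketch (assumed inputs, not the cited method): recover an SE(3)
# grasp frame from two 2D keypoints plus depth and camera intrinsics.
import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth z into camera coordinates."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def grasp_frame_from_keypoints(center_px, axis_px, depth_map, K):
    """Build a 4x4 grasp pose from a grasp-center keypoint and a keypoint
    lying along the grasp axis. Assumes both keypoints have valid depth and
    that the grasp axis is not parallel to the camera z-axis."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    p_c = backproject(center_px[0], center_px[1],
                      depth_map[center_px[1], center_px[0]], fx, fy, cx, cy)
    p_a = backproject(axis_px[0], axis_px[1],
                      depth_map[axis_px[1], axis_px[0]], fx, fy, cx, cy)

    x_axis = p_a - p_c                      # grasp axis from the keypoint pair
    x_axis /= np.linalg.norm(x_axis)
    approach = np.array([0.0, 0.0, 1.0])    # assumed nominal approach (camera z)
    z_axis = approach - approach.dot(x_axis) * x_axis   # Gram-Schmidt step
    z_axis /= np.linalg.norm(z_axis)
    y_axis = np.cross(z_axis, x_axis)       # completes a right-handed frame

    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2] = x_axis, y_axis, z_axis
    T[:3, 3] = p_c
    return T
```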
“…That improves the relationship between grasping and objects, though it cannot generate diverse grasp candidates automatically. On the basis of traditional pixel-wise part segmentation, [9] introduced an extra keypoint detection module whose predictions consist of position, direction, and extent, guiding a more stable grasp pose. However, the problem with these pixel-based part affordance methods is that 6-DoF grasp detection is hard to embed in them, restricting the generated robotic grasp candidates to a very limited workspace.…”
Section: Related Work, Deep Visual (mentioning)
confidence: 99%
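For intuition, the sketch below shows one hypothetical way the three predicted quantities named above (position, direction, extent) could parameterize an oriented 2D grasp rectangle; the function name, the fixed jaw height, and the corner ordering are illustrative assumptions, not the module described in [9].

```python
# Minimal sketch (assumed parameterization): keypoint attributes -> oriented
# 2D grasp rectangle in image space.
import numpy as np

def grasp_rectangle(position, direction, extent, jaw_height=20.0):
    """Return the four corners of an oriented grasp rectangle.

    position  : (u, v) pixel location of the grasp-center keypoint
    direction : 2D vector along the grasp opening direction
    extent    : predicted opening width, in pixels
    jaw_height: assumed fixed gripper-jaw thickness, in pixels
    """
    u = np.asarray(direction, dtype=float)
    u /= np.linalg.norm(u)                  # unit grasp axis
    n = np.array([-u[1], u[0]])             # in-plane normal to the grasp axis
    c = np.asarray(position, dtype=float)
    hw, hh = extent / 2.0, jaw_height / 2.0
    return np.stack([c + hw * u + hh * n,
                     c + hw * u - hh * n,
                     c - hw * u - hh * n,
                     c - hw * u + hh * n])
```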
“…Rather than using different object parts as different affordance representations [9], [14], [16], we annotate all successful grasps of selected objects with different affordance labels. For example, the grasp affordance in the mug instance means all successful grasps around the mug handle, while the pour affordance means all successful grasps around the upper mug rim.…”
Section: A. Affordance Grasp Dataset Construction (mentioning)
confidence: 99%
“…Task-Relevant Grasping: Task-relevant grasping requires the grasps to be compatible with downstream manipulation tasks. Prior works have developed frameworks to predict affordance segmentation [7], [27]-[31] or keypoints [32] over the observed image or point cloud. This, however, often assumes manually annotated real-world data is available to perform supervised training [33], [34], which is costly and time-consuming to obtain.…”
Section: Novel Unseen Objects (mentioning)
confidence: 99%