2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)
DOI: 10.1109/ro-man50785.2021.9515479
GraspME - Grasp Manifold Estimator

Cited by 4 publications (2 citation statements) · References 15 publications
“…Such an encoder-decoder architecture is widely used to synthesize pixel-wise grasps [11,28,116,239,255]. Another similar formulation for pixel-wise grasp synthesis, called grasp manifolds, was proposed by [88]. Since the grasp map is more informative and can provide a global grasp affordance indicating the grasp quality of the current viewpoint, it enables selection of the best view [116,239], under the assumption that the camera is not fixed, which holds in most cases for robots.…”
Section: Pixel-level Grasp Map Synthesis
confidence: 99%
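The best-view selection described in the statement above can be sketched as follows: given one pixel-wise grasp quality map per camera viewpoint (here random stand-ins for the output of an encoder-decoder network), pick the view containing the highest-quality grasp and the pixel where it occurs. The array shapes and function name are illustrative assumptions, not from the cited papers.

```python
import numpy as np

# Stand-in for per-view grasp quality maps produced by an encoder-decoder
# network: 3 camera views, each a 64x64 pixel-wise quality map in [0, 1).
rng = np.random.default_rng(0)
grasp_maps = rng.random((3, 64, 64))

def best_view_and_pixel(maps):
    """Return (view index, (row, col), quality) of the highest-scoring grasp."""
    flat = maps.reshape(maps.shape[0], -1)
    per_view_best = flat.max(axis=1)       # best achievable quality in each view
    view = int(per_view_best.argmax())     # viewpoint with the overall best grasp
    pixel = np.unravel_index(int(flat[view].argmax()), maps.shape[1:])
    return view, pixel, float(per_view_best[view])

view, pixel, quality = best_view_and_pixel(grasp_maps)
```

Because the per-view maximum of the chosen view is also the global maximum, this selects the same grasp a single argmax over all views would, while exposing the per-view qualities needed to move the camera first.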
“…Contrary to methods classifying grasps, generative models can be trained to generate grasp poses from data [6] but might require additional sample refinement. While the generator in [40] considers possible collisions in the scene, [56] proposes to learn a grasp distribution over the object's manifold. [29] uses scene representation learning to learn grasp qualities and explicitly predict 3D rotations.…”
Section: Grasp and Motion Optimization on Real Robots
confidence: 99%
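The "additional sample refinement" mentioned for generative grasp models can be sketched as a simple filter: draw candidate grasp poses from a generator, score them, and keep only the top-scoring few. The pose parameterization and the scoring function below are stand-in assumptions for illustration, not the method of any cited paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_grasps(n):
    # Stand-in generator: candidate poses as (x, y, theta); a real generative
    # model would condition on the observed scene.
    return rng.uniform([-0.1, -0.1, -np.pi], [0.1, 0.1, np.pi], size=(n, 3))

def quality(poses):
    # Stand-in score: prefer grasps near the object centre with small rotation.
    return -np.linalg.norm(poses[:, :2], axis=1) - 0.1 * np.abs(poses[:, 2])

def refine(poses, keep=10):
    # Keep the `keep` highest-scoring candidates (descending by quality).
    idx = np.argsort(quality(poses))[::-1][:keep]
    return poses[idx]

candidates = sample_grasps(200)
refined = refine(candidates, keep=10)
```

In a full pipeline the kept candidates would then be locally optimized or checked for collisions before execution, which is the kind of post-hoc refinement the statement alludes to.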