2019
DOI: 10.1109/lra.2019.2894439
Learning Affordance Segmentation for Real-World Robotic Manipulation via Synthetic Images

Cited by 43 publications (14 citation statements); references 35 publications.
“…With recent advances in deep learning, many efforts resort to semantic part segmentation [6], [7] or keypoint detection [8] via supervised learning on manually labeled real-world data. Another line of research circumvents the complicated data-collection process by training in simulation [9], [10].…”
Section: Novel Unseen Objects (mentioning)
confidence: 99%
“…This, however, often assumes manually annotated real-world data is available to perform supervised training [33], [34], which is costly and time-consuming to obtain. While [6], [35] alleviate the problem via sim-to-real transfer, they still require manual specification of semantic parts on 3D models for generating synthetic affordance labels. Instead, another line of research [9], [10], [36] proposed to learn semantic tool manipulation via self-interaction in simulation.…”
Section: Novel Unseen Objects (mentioning)
confidence: 99%
“…Stoytchev [15] introduces an approach to ground tool affordances by dynamically applying different behaviors from a behavioral repertoire. [9], [10], [14], [16], [17] use convolutional neural networks (CNNs) to detect regions of affordance in an image. Ruiz and Mayol-Cuevas [11] predict affordance candidate locations in environments via the interaction tensor.…”
Section: Related Work (mentioning)
confidence: 99%
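Several of the works quoted above ([9], [10], [14], [16], [17]) frame affordance detection as pixel-wise segmentation with a CNN. Below is a minimal sketch of that formulation in PyTorch; the architecture and all names are illustrative assumptions, not the model of any cited paper.

# Minimal fully convolutional network for pixel-wise affordance
# segmentation, sketched in PyTorch. Sizes and layers are illustrative
# assumptions, not the model of any cited work.
import torch
import torch.nn as nn

class AffordanceSegNet(nn.Module):
    def __init__(self, num_affordances: int = 7):
        super().__init__()
        # Encoder: two strided convolutions downsample the image 4x.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: transposed convolutions restore full resolution and
        # emit one score map per affordance class plus background.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_affordances + 1,
                               kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# One training step with a per-pixel cross-entropy loss.
model = AffordanceSegNet()
images = torch.randn(2, 3, 128, 128)         # RGB batch
labels = torch.randint(0, 8, (2, 128, 128))  # per-pixel class ids
logits = model(images)                       # -> (2, 8, 128, 128)
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()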
“…Antanas et al. [13] encode suitable grasp regions based on probabilistic logic descriptions of tasks and segmented object parts. Works on object affordance detection [17], [18], [19], [20], [21] have leveraged the affordances of object parts to define the correspondences between affordances and grasp types (e.g., a rim grasp for parts with a contain or scoop affordance). Detry et al. [22] built on these works by training a separate detection model, using data generated by predefined heuristics, to detect suitable grasp regions for each task.…”
Section: A. Semantic Grasping (mentioning)
confidence: 99%
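The affordance-to-grasp-type correspondence mentioned in the quote above is, at its simplest, a lookup table. A minimal Python sketch follows; the specific pairings are example assumptions, not a table taken from the cited works.

# Illustrative affordance-to-grasp-type lookup of the kind the quote
# describes; pairings are assumptions, not data from any cited paper.
AFFORDANCE_TO_GRASP = {
    "contain": "rim grasp",
    "scoop": "rim grasp",
    "cut": "handle grasp",
    "pound": "handle grasp",
    "grasp": "power grasp",
}

def grasp_type_for(affordance: str) -> str:
    # Fall back to a generic pinch grasp for unmapped affordances.
    return AFFORDANCE_TO_GRASP.get(affordance, "pinch grasp")

print(grasp_type_for("contain"))  # rim grasp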
“…Sawatzky et al. [25] tried to reduce the cost of labeling with weakly supervised learning. Chu et al. [18] explored transferring from synthetic data to real environments with unsupervised domain adaptation. We are the first to leverage part affordances to learn semantic grasping from data.…”
Section: B. Affordance Detection (mentioning)
confidence: 99%
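The unsupervised domain adaptation in the last quote is named only at a high level; one common recipe for synthetic-to-real transfer is a domain-adversarial objective trained through a gradient-reversal layer (Ganin and Lempitsky). The sketch below assumes that formulation and uses small classification heads in place of segmentation heads for brevity; it is illustrative, not the method of the cited paper.

# Sketch of unsupervised domain adaptation with a gradient-reversal
# layer. The recipe is an assumption for illustration, not the exact
# method of Chu et al. [18].
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

# Shared feature extractor, a task head trained on synthetic labels,
# and a domain head that discriminates synthetic from real images.
feature_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
task_head = nn.Linear(16, 8)    # e.g. affordance class logits
domain_head = nn.Linear(16, 2)  # synthetic vs. real

syn = torch.randn(4, 3, 64, 64)    # labeled synthetic images
syn_y = torch.randint(0, 8, (4,))  # synthetic task labels
real = torch.randn(4, 3, 64, 64)   # unlabeled real images

f_syn, f_real = feature_net(syn), feature_net(real)
task_loss = nn.functional.cross_entropy(task_head(f_syn), syn_y)

# Reversed gradients push the feature extractor toward features the
# domain head cannot separate, i.e. domain-invariant features.
feats = torch.cat([f_syn, f_real])
dom_y = torch.cat([torch.zeros(4, dtype=torch.long),
                   torch.ones(4, dtype=torch.long)])
dom_loss = nn.functional.cross_entropy(
    domain_head(GradReverse.apply(feats, 1.0)), dom_y)
(task_loss + dom_loss).backward()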