Robotics: Science and Systems IX 2013
DOI: 10.15607/rss.2013.ix.039
Perceiving, Learning, and Exploiting Object Affordances for Autonomous Pile Manipulation

Abstract: Autonomous manipulation in unstructured environments presents roboticists with three fundamental challenges: object segmentation, action selection, and motion generation. These challenges become more pronounced when unknown man-made or natural objects are cluttered together in a pile. We present an end-to-end approach to the problem of manipulating unknown objects in a pile, with the objective of removing all objects from the pile and placing them into a bin. Our robot perceives the environment with an…

Cited by 35 publications (29 citation statements)
References 4 publications
“…21(c)). Extensive experiments demonstrated the effectiveness and safety of our compliant grasping primitives in cluttered environments (please see [18,19] for more results and details).…”
Section: Object Release Experiments
Confidence: 99%
“…Affordances have been widely used in robotics for obtaining a functional understanding of the scene as well as enabling robots to interact with and manipulate objects. These works range from predicting opportunities for interaction with an object by using only visual cues [40,9,2] to observing effects of exploratory behaviors [31,36,30,10,13]. For instance, Sun et al [40] proposed a probabilistic graphical model that leverages visual object categorization for learning affordances.…”
Section: Related Work
Confidence: 99%
“…For instance, Sun et al [40] proposed a probabilistic graphical model that leverages visual object categorization for learning affordances. Katz et al [13] propose a framework for learning to manipulate objects in clutter by choosing robot actions based on object affordances.…”
Section: Related Work
Confidence: 99%
“…Some other works consider human-robot collaboration without anticipating human activities [1], focus on high-level actions [31], or consider object affordances for manipulation [16]. These works are orthogonal to ours.…”
Section: Related Work
Confidence: 99%