2010 IEEE International Conference on Robotics and Automation
DOI: 10.1109/robot.2010.5509126

Refining grasp affordance models by experience

Abstract: We present a method for learning object grasp affordance models in 3D from experience, and demonstrate its applicability through extensive testing and evaluation on a realistic and largely autonomous platform. Grasp affordance refers here to relative object-gripper configurations that yield stable grasps. These affordances are represented probabilistically with grasp densities, which correspond to continuous density functions defined on the space of 6D gripper poses. A grasp density characterizes an o…
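As context for the abstract, the following is a minimal sketch of how a grasp density could be represented as a kernel density estimate over 6D gripper poses (3D position plus unit-quaternion orientation). The kernel form, the bandwidths sigma_pos and sigma_rot, and the function names are illustrative assumptions, not the paper's implementation.

import numpy as np

def quat_angle(q1, q2):
    # Angular distance between two unit quaternions, in radians.
    dot = np.abs(np.clip(np.dot(q1, q2), -1.0, 1.0))
    return 2.0 * np.arccos(dot)

def grasp_density(query_pos, query_quat, samples, sigma_pos=0.02, sigma_rot=0.2):
    # Evaluate an (unnormalised) grasp density at one gripper pose.
    # samples: list of (position, unit quaternion) pairs from successful grasps.
    value = 0.0
    for pos, quat in samples:
        d_pos = np.linalg.norm(query_pos - pos)
        d_rot = quat_angle(query_quat, quat)
        value += np.exp(-0.5 * (d_pos / sigma_pos) ** 2) * \
                 np.exp(-0.5 * (d_rot / sigma_rot) ** 2)
    return value / max(len(samples), 1)

# Example: two recorded successful grasps, queried near the first one.
samples = [
    (np.array([0.00, 0.00, 0.10]), np.array([1.0, 0.0, 0.0, 0.0])),
    (np.array([0.05, 0.00, 0.12]), np.array([0.0, 1.0, 0.0, 0.0])),
]
print(grasp_density(np.array([0.005, 0.0, 0.10]),
                    np.array([1.0, 0.0, 0.0, 0.0]), samples))

Refining such a model from experience would then amount to adding (or re-weighting) pose samples according to observed grasp outcomes; the update rule above is not specified by the abstract and is left open here.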

Cited by 26 publications (25 citation statements)
References 16 publications (37 reference statements)
“…This OAC can be used to model reactive or affordance-based behaviours (see [42,30]) as outlined in Section 7.2.2. At an intermediate level in another grasp-related OAC (described in Section 7.3), grasp densities are used to hypothesise possible grasps when the agent has some object knowledge [44]. Finally, at the highest level, plans effectively use grasps to manipulate objects on an abstracted symbolic scene representation (see Section 7.4).…”
Section: Modularity
confidence: 99%
“…9 An object model includes a learnt, structural object model that represents geometric relations between 3D visual patches (i.e., early cognitive vision (ECV) features [54]) as Markov networks [55]. In addition, it contains a continuous representation of object-relative gripper poses that lead to successful grasps by means of grasp densities [44]. Object detection, pose estimation, and the determination of useful gripper poses for grasping the object are all done simultaneously using probabilistic inference within the Markov network, given a scene reconstruction in terms of ECV features.…”
Section: Definition Of Objgrasp
confidence: 99%
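To illustrate the last step described in the statement above (obtaining useful gripper poses once an object pose estimate is available), here is a small sketch of mapping object-relative grasp poses into the scene frame. The rotation-matrix representation and function names are assumptions made for illustration; in the cited work this is done jointly with detection and pose estimation via probabilistic inference, which is not reproduced here.

import numpy as np

def transform_grasps(object_R, object_t, relative_grasps):
    # Map object-relative grasp poses (rotation, translation) into the scene
    # frame by composing them with the estimated object pose.
    scene_grasps = []
    for g_R, g_t in relative_grasps:
        scene_R = object_R @ g_R              # compose orientations
        scene_t = object_R @ g_t + object_t   # express grasp origin in the scene frame
        scene_grasps.append((scene_R, scene_t))
    return scene_grasps

# Example: a grasp 10 cm above the object origin, with the object rotated
# 90 degrees about z and placed at (0.5, 0.2, 0.0) in the scene.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
grasps = [(np.eye(3), np.array([0.0, 0.0, 0.10]))]
print(transform_grasps(Rz, np.array([0.5, 0.2, 0.0]), grasps))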
“…Note that an agent would require more of such relations on different objects and behaviours to learn more general affordance relations and to conceptualize over its sensorimotor experiences. During the last decade, similar formalizations of affordances proved to be very practical with successful applications to domains such as navigation [15], manipulation [16,17,18,19,20], conceptualization and language [5,4], planning [18], imitation and emulation [12,18,4], tool use [21,22,13] and vision [4]. A notable one with a notion of affordances similar to ours is presented by Montesano et al [23,24].…”
Section: Related Studies
confidence: 73%