2008 7th IEEE International Conference on Development and Learning
DOI: 10.1109/devlrn.2008.4640811
Detecting the functional similarities between tools using a hierarchical representation of outcomes

Abstract: The ability to reason about multiple tools and their functional similarities is a prerequisite for intelligent tool use. This paper presents a model that allows a robot to detect the similarity between tools based on the environmental outcomes observed with each tool. To do this, the robot incrementally learns an adaptive hierarchical representation (i.e., a taxonomy) for the types of environmental changes that it can induce and detect with each tool. Using the learned taxonomies, the robot can infer…
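The taxonomy-based similarity idea can be pictured with a toy sketch: cluster outcome feature vectors into a hierarchy, then compare two tools by their distributions over the resulting outcome types. This is a minimal illustration under assumed inputs (a fixed-length outcome vector per trial), not the paper's implementation; batch hierarchical clustering from SciPy stands in for the paper's incremental taxonomy learning, and all function names and parameters below are hypothetical.

```python
# Hypothetical sketch, not the paper's code: build an outcome taxonomy by
# hierarchical clustering and compare tools by their outcome-type histograms.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def outcome_taxonomy(outcomes, num_types=4):
    """Cluster outcome vectors into a hierarchy, then cut it into outcome types."""
    tree = linkage(outcomes, method="average")   # hierarchical taxonomy of outcomes
    return fcluster(tree, t=num_types, criterion="maxclust")  # labels 1..num_types

def tool_similarity(outcomes_a, outcomes_b, num_types=4):
    """Compare two tools by their distributions over shared outcome types."""
    labels = outcome_taxonomy(np.vstack([outcomes_a, outcomes_b]), num_types)
    la, lb = labels[: len(outcomes_a)], labels[len(outcomes_a):]
    hist = lambda l: np.bincount(l, minlength=num_types + 1)[1:] / len(l)
    ha, hb = hist(la), hist(lb)
    return 1.0 - 0.5 * np.abs(ha - hb).sum()     # 1.0 = identical outcome profiles

# Example: two tools whose pushes displace an object in 2-D (synthetic data)
rng = np.random.default_rng(0)
hook = rng.normal([1.0, 0.0], 0.1, size=(30, 2))  # drags the object mostly left
rake = rng.normal([0.9, 0.1], 0.1, size=(30, 2))  # produces similar displacements
print(tool_similarity(hook, rake))                # close to 1.0: similar function
```

Because the similarity is computed over outcome types rather than raw trajectories, two physically different tools that induce the same kinds of environmental change come out as functionally similar, which matches the abstract's framing.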

Cited by 59 publications (43 citation statements)
References 10 publications
Survey-table excerpt (no. | reference | task | perceptual features | codes):

“…) grasping | planar faces of object | M SELF S
9 | (Carvalho & Nolfi, 2016) | traversability | depth, haptic | M SELF S
10 | (Castellini et al, 2011) | grasping | SIFT BoW, contact joints | M S B
11 | (Çelikkanat et al, 2015) | pushing, grasping, throwing, shaking | depth, haptic, proprioceptive and audio | M SEMI RR
12 | (Chan et al, 2014) | grasping | pose, action-object relation | M U RR
13 | (Chang, 2015) | cutting, painting | edges, TSSC | N S RR
14 | (Chen et al, 2015) | traversability | RGB images, motor controls | M S S
15 | (Chu et al, 2016a) | …
… | (Sinapov & Stoytchev, 2007) | pulling, dragging | changes in raw pixels | M SELF S
113 | (Sinapov & Stoytchev, 2008) | pulling, dragging | raw pixels, trajectories | M SELF S
114 | (Song et al, 2016) | …”
mentioning
confidence: 99%
“…Note that an agent would require more of such relations on different objects and behaviours to learn more general affordance relations and to conceptualize over its sensorimotor experiences. During the last decade, similar formalizations of affordances proved to be very practical with successful applications to domains such as navigation [15], manipulation [16,17,18,19,20], conceptualization and language [5,4], planning [18], imitation and emulation [12,18,4], tool use [21,22,13] and vision [4]. A notable one with a notion of affordances similar to ours is presented by Montesano et al [23,24].…”
Section: Related Studies
mentioning
confidence: 72%
“…For example, to learn the affordances of a tool, the methods described in [21] and [22] assume that the robot's sensorimotor data is cleanly partitioned according to the identity of each tool. Similarly, when categorizing objects as either containers or non-containers, the robot in [23] started with the implicit assumption that it already knows the identities of all objects that it has to interact with.…”
Section: B. Robotics
mentioning
confidence: 99%