2011
DOI: 10.1177/0278364911410459

Learning the semantics of object–action relations by observation

Abstract: Recognizing manipulations performed by a human and the transfer and execution of this by a robot is a difficult problem. We address this in the current study by introducing a novel representation of the relations between objects at decisive time points during a manipulation. Thereby, we encode the essential changes in a visual scenery in a condensed way such that a robot can recognize and learn a manipulation without prior object knowledge. To achieve this we continuously track image segments in the video and …

Cited by 148 publications (134 citation statements)
References 43 publications (66 reference statements)
“…There is some recent work in interpreting human actions and interaction with objects [25,1,17] in the context of learning to perform actions from demonstrations. Lopes et al. [25] use context from objects, in terms of possible grasp affordances, to focus the attention of their recognition system.…”
Section: Related Work
confidence: 99%
“…Lopes et al. [25] use context from objects, in terms of possible grasp affordances, to focus the attention of their recognition system. Aksoy et al. [1] construct a dynamic graph sequence of tracked image segments from human demonstrations, and this representation is used by the robot for manipulating objects. Affordances have also been used in planning (e.g., [26,43]).…”
Section: Related Work
confidence: 99%
“…Aldoma et al. [2] proposed a method to find affordances that depends solely on the objects of interest and their position and orientation in the scene. There is some recent work in interpreting human actions and interaction with objects [26,1,20] in the context of learning to perform actions from demonstrations. Lopes et al. [26] use context from objects, in terms of possible grasp affordances, to focus the attention of their recognition system.…”
Section: Related Work
confidence: 99%
“…For evaluating the quality of the predicted temporal affordance, we compute the modified Hausdorff distance (MHD) as a physical measure of the distance between the predicted object motion trajectories and the true object trajectory from the test data. 1 Baseline Algorithms. We compare our method against the following baselines: 1) Chance: it selects a random training instance for the given human intention and uses its affordances as the predictions.…”
Section: Generating Physically-grounded Affordances
confidence: 99%
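The modified Hausdorff distance used in the citation statement above can be sketched as follows. This is a minimal NumPy implementation of the standard Dubuisson–Jain formulation: the directed distance from A to B is the mean, over points of A, of the distance to the nearest point of B, and the MHD is the larger of the two directed distances. The function name and the (n_points, dim) trajectory layout are illustrative assumptions, not details taken from the cited work:

```python
import numpy as np

def modified_hausdorff_distance(A, B):
    """Modified Hausdorff distance between two trajectories (point sets).

    A, B: arrays of shape (n_points, dim). Assumed layout for illustration;
    the cited paper does not specify its implementation.
    """
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    # Pairwise Euclidean distances: D[i, j] = ||A[i] - B[j]||
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    d_ab = D.min(axis=1).mean()  # mean over A of nearest-neighbour distance in B
    d_ba = D.min(axis=0).mean()  # mean over B of nearest-neighbour distance in A
    return max(d_ab, d_ba)
```

Unlike the classical Hausdorff distance (a max over points), the mean makes the measure less sensitive to a single outlying trajectory point, which is why it is often preferred for comparing motion trajectories.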