2005
DOI: 10.1007/11521082_13
Learning to Interpret Pointing Gestures: Experiments with Four-Legged Autonomous Robots

Abstract: This paper explores the hypothesis that pointing gesture recognition can be learned using a reward-based system. An experiment with two four-legged robots is presented. One of the robots takes the role of an adult and points to an object; the other robot, the learner, has to interpret the pointing gesture correctly in order to find the object. We discuss the results of this experiment in relation to possible developmental scenarios of how children learn to interpret pointing gestures.
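The reward-based learning scheme the abstract describes can be sketched as a simple reinforcement loop: the learner maps an observed pointing gesture to a candidate object location and strengthens mappings that lead to finding the object. The tabular scheme, discretization, and all names below are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Hypothetical sketch: a learner reinforces gesture -> location
# mappings from a scalar reward (1 if the object is found, else 0).
# The discretization into 4 gestures/locations is an assumption.

N_GESTURES = 4   # discretized pointing directions observed
N_LOCATIONS = 4  # candidate object locations

# preference table: value of searching each location given a gesture
values = [[0.0] * N_LOCATIONS for _ in range(N_GESTURES)]

def choose_location(gesture, epsilon=0.1):
    """Epsilon-greedy choice of where to search given a gesture."""
    if random.random() < epsilon:
        return random.randrange(N_LOCATIONS)
    row = values[gesture]
    return row.index(max(row))

def update(gesture, location, reward, lr=0.5):
    """Move the value of the tried mapping toward the reward received."""
    values[gesture][location] += lr * (reward - values[gesture][location])

random.seed(0)
for episode in range(2000):
    # the "adult" robot points at the object; for this demo the true
    # location simply equals the gesture index (an assumption)
    true_location = random.randrange(N_LOCATIONS)
    gesture = true_location
    guess = choose_location(gesture)
    update(gesture, guess, reward=1.0 if guess == true_location else 0.0)

# after training, greedy choices should follow the pointed-at location
correct = sum(choose_location(g, epsilon=0.0) == g for g in range(N_GESTURES))
print(correct)
```

Exploration is essential here: with a purely greedy policy the learner would never try, and hence never reinforce, locations other than its initial guess.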

Cited by 19 publications (11 citation statements)
References 15 publications
“…Joint attention with robots has been investigated, e.g., by Hafner and Kaplan [5]. They used simple edge-based features to recognize pointing gestures of Aibo dogs in a specific setting.…”
Section: Related Work
confidence: 99%
“…Several of these projects have involved machine learning. Examples include robotic clicker training [56], experiments on neural learning of pointing gestures between two robots [43], and studies in curiosity-driven developmental robotics [55], [86], [87]. Researchers at Sony implemented AIBO's behaviour control architecture using probabilistic state machines whose probabilities are modified using reinforcement learning through interaction with the user [34].…”
Section: Machine Learning on AIBOs in General
confidence: 99%
“…Marjanovic et al. [5] introduced a motor-vision mapping system that learns to perform pointing motions towards visual targets; since Marjanovic's system has pointing as an explicit goal, the behavior does not emerge as a side effect of other developmental processes occurring at the same time. Hafner and Kaplan [6] applied a multi-layer perceptron network to train a robot to interpret the pointing gestures of another robot; the robot's function is limited because their learning system uses a network with a fixed topological structure. Shademan et al. [7] applied a locally least-squares Jacobian estimation method to build a robotic visuo-motor learning system, which lacks a developmental perspective and amounts to a matrix calculation.…”
Section: Introduction
confidence: 99%
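The multi-layer perceptron approach the last citation attributes to Hafner and Kaplan can be illustrated with a minimal network trained by backpropagation to map simple visual features of a pointing gesture to the side the object lies on. The feature encoding (sine/cosine of the pointing angle), the left/right task, and all sizes below are assumptions for the sketch, not the paper's setup.

```python
import math
import random

# Illustrative MLP sketch (not the paper's network): 2 inputs -> 4
# hidden sigmoid units -> 1 sigmoid output, trained by backpropagation
# on squared error to decide whether a pointed-at object is on the left.

random.seed(1)
H = 4  # hidden units

w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j])
         for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def train_step(x, t, lr=0.5):
    global b2
    h, y = forward(x)
    dy = (y - t) * y * (1 - y)  # output delta for squared error
    for j in range(H):
        dh = dy * w2[j] * h[j] * (1 - h[j])  # uses w2[j] before update
        w2[j] -= lr * dy * h[j]
        for i in range(2):
            w1[j][i] -= lr * dh * x[i]
        b1[j] -= lr * dh
    b2 -= lr * dy

# synthetic gestures: the pointing arm's angle encodes the side
for _ in range(3000):
    angle = random.uniform(-math.pi, math.pi)
    x = [math.sin(angle), math.cos(angle)]
    t = 1.0 if math.sin(angle) > 0 else 0.0  # 1 = object on the left
    train_step(x, t)

# evaluate on fresh gestures
correct = 0
for _ in range(200):
    angle = random.uniform(-math.pi, math.pi)
    x = [math.sin(angle), math.cos(angle)]
    _, y = forward(x)
    if (y > 0.5) == (math.sin(angle) > 0):
        correct += 1
print(correct / 200)
```

The fixed topology the citing authors criticize is visible here: the network's structure (two inputs, four hidden units) is chosen by hand and never changes during learning.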