Humans maintain a body image of themselves, which plays a central role in controlling bodily movement, planning action, recognising and naming actions performed by others, and requesting or executing commands. This paper explores, through experiments with autonomous humanoid robots, how such a body image could form. Robots play a situated embodied language game called the Action Game, in which they ask each other to perform bodily actions. They start without any prior inventory of names, without categories for visually recognising the body movements of others, and without knowing the relation between visual images of motor behaviours carried out by others and their own motor behaviours. Through diagnostic and repair strategies carried out within the context of action games, they progressively self-organise an effective lexicon as well as bi-directional mappings between the visual and motor domains. The agents thus establish and continuously adapt networks linking perception, body representation, action and language.
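The abstract does not spell out the mechanics of such a game, but its lexicon dynamics can be illustrated. Below is a minimal Python sketch, assuming a fixed set of action labels, score-based lookup with lateral inhibition, and a simple adopt-on-failure repair; the names Agent and play_action_game are illustrative, not from the paper, and the sketch abstracts away the grounding of actions in visual and motor categories that the real experiments address.

```python
import random

ACTIONS = ["raise-arm", "wave", "crouch", "turn-head"]  # stand-ins for bodily actions


class Agent:
    def __init__(self):
        # lexicon: (word, action) pairs with an association score in [0, 1]
        self.lexicon = {}

    def name_for(self, action):
        """Speaker lookup: best-scoring word for an action; invent one if none exists."""
        candidates = [(w, s) for (w, a), s in self.lexicon.items() if a == action]
        if not candidates:
            word = "w%04d" % random.randrange(10000)  # invent a new word form
            self.lexicon[(word, action)] = 0.5
            return word
        return max(candidates, key=lambda c: c[1])[0]

    def action_for(self, word):
        """Hearer lookup: best-scoring action for a word, or None on failure."""
        candidates = [(a, s) for (w, a), s in self.lexicon.items() if w == word]
        return max(candidates, key=lambda c: c[1])[0] if candidates else None

    def update(self, word, action, success):
        """Reward the used pair on success and laterally inhibit its competitors;
        punish the used pair on failure."""
        key = (word, action)
        score = self.lexicon.get(key, 0.5)
        if success:
            self.lexicon[key] = min(1.0, score + 0.1)
            for (w, a), s in self.lexicon.items():
                if (w, a) != key and (w == word or a == action):
                    self.lexicon[(w, a)] = max(0.0, s - 0.1)
        else:
            self.lexicon[key] = max(0.0, score - 0.1)


def play_action_game(speaker, hearer):
    action = random.choice(ACTIONS)   # speaker picks an action to request
    word = speaker.name_for(action)   # and names it
    guess = hearer.action_for(word)   # hearer performs its interpretation
    success = guess == action         # speaker signals success or failure
    speaker.update(word, action, success)
    if success:
        hearer.update(word, action, True)
    else:
        if guess is not None:
            hearer.update(word, guess, False)
        # repair: speaker demonstrates the intended action, hearer adopts the pair
        hearer.lexicon.setdefault((word, action), 0.5)
    return success


agents = [Agent() for _ in range(10)]
successes = [play_action_game(*random.sample(agents, 2)) for _ in range(5000)]
print("success rate over last 500 games:", sum(successes[-500:]) / 500)
```

In the full experiment the shared action categories presupposed here must themselves be learned: the hearer has to relate its visual image of the speaker's movement to its own motor programs, which is precisely the body-image mapping the paper investigates.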
In this series:
1. Steels, Luc. The Talking Heads Experiment: Origins of words and meanings.
2. Vogt, Paul. How mobile robots can self-organize a vocabulary.
3. Bleys, Joris. Language strategies for the domain of colour.
4. van Trijp, Remi. The evolution of case grammar.
5. Spranger, Michael. The evolution of grounded spatial language.
This paper discusses grounded acquisition experiments of increasing complexity, in which humanoid robots acquire English spatial lexicons from robot tutors. We identify how various spatial language systems, such as projective, absolute and proximal systems, can be learned. The proposed learning mechanisms rely neither on direct meaning transfer nor on direct access to the world models of interlocutors. Finally, we show how multiple systems can be acquired at the same time.
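The abstract leaves the learning operators themselves unspecified; one standard way to acquire word meanings without direct meaning transfer is to accumulate cross-situational statistics over tutor utterances and perceived scenes. The sketch below shows that idea only; the CrossSituationalLearner class and the category names are illustrative assumptions, not the paper's actual mechanism.

```python
from collections import defaultdict


class CrossSituationalLearner:
    """Accumulates word-category co-occurrence counts across situated scenes.

    The learner never sees the tutor's intended meaning directly; it only
    observes the uttered word together with the set of spatial categories
    that hold in the current scene, and lets consistent co-occurrence win.
    """

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, word, categories_in_scene):
        for cat in categories_in_scene:
            self.counts[word][cat] += 1

    def meaning(self, word):
        cats = self.counts[word]
        return max(cats, key=cats.get) if cats else None


learner = CrossSituationalLearner()
# hypothetical interactions: a tutor word paired with categories true of the scene
scenes = [
    ("left", {"left-of", "near"}),
    ("left", {"left-of", "far"}),
    ("near", {"near", "right-of"}),
    ("left", {"left-of", "near"}),
    ("near", {"near", "left-of"}),
]
for word, cats in scenes:
    learner.observe(word, cats)

print(learner.meaning("left"))   # -> 'left-of'
print(learner.meaning("near"))   # -> 'near'
```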
How can we explain the enormous amount of creativity and flexibility in spatial language use? In this paper we detail computational experiments that try to capture the essence of this puzzle. We hypothesize that flexible semantics which allow agents to conceptualize reality in many different ways are key to this issue. We will introduce our particular semantic modeling approach as well as the coupling of conceptual structures to the language system. We will justify the approach and show how these systems play together in the evolution of spatial language using humanoid robots.
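To make the idea of flexible conceptualization concrete, the sketch below renders the same scene under two different spatial systems: a proximal categorization by distance to a landmark and a projective left/right categorization relative to the robot's viewpoint. The scene, thresholds and function names are hypothetical simplifications, not the paper's implementation; a speaker equipped with several such systems can choose whichever conceptualization best discriminates the intended object.

```python
import math

# a hypothetical scene: 2-D positions of a landmark, a target object and the robot
scene = {"landmark": (0.0, 0.0), "target": (-0.8, 0.3), "robot": (2.0, 0.0)}


def proximal(scene, threshold=1.0):
    """Proximal system: categorize the target by its distance to the landmark."""
    lx, ly = scene["landmark"]
    tx, ty = scene["target"]
    return "near" if math.hypot(tx - lx, ty - ly) < threshold else "far"


def projective(scene):
    """Projective system: left/right of the landmark as seen from the robot."""
    lx, ly = scene["landmark"]
    tx, ty = scene["target"]
    rx, ry = scene["robot"]
    gx, gy = lx - rx, ly - ry   # gaze direction: robot -> landmark
    vx, vy = tx - lx, ty - ly   # target relative to landmark
    # positive 2-D cross product means the target lies left of the gaze direction
    return "left" if gx * vy - gy * vx > 0 else "right"


# the same scene, conceptualized in two different ways
conceptualizations = {"proximal": proximal(scene), "projective": projective(scene)}
print(conceptualizations)  # {'proximal': 'near', 'projective': 'right'}
```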
This paper explores how the absence of an expected sensor reading can be used to improve Markov localization. Such negative information is usually not used in localization, both because it yields less information than positive information (i.e., sensing a landmark) and because a sensor often fails to detect a landmark even when it falls within its sensing range. We address these difficulties by carefully modeling the sensor to avoid false negatives. This can also be thought of as adding an additional sensor that detects the absence of an expected landmark. We show how such modeling is done and how it is integrated into Markov localization. In real-world experiments, we demonstrate that a robot is able to localize itself in positions where it otherwise could not, and we quantify our findings using the entropy of the particle distribution. Exploiting negative information leads to greatly improved localization performance and reactivity.
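As an illustration of the idea, the sketch below shows a particle-filter weight update that exploits a non-detection event, together with the entropy measure used to quantify the result. The map, sensing range and detection probability are made-up values, and the sketch deliberately omits the field-of-view and occlusion modeling the paper relies on to avoid penalizing particles for landmarks the sensor could not have seen anyway.

```python
import math
import random

LANDMARKS = [(1.0, 1.0), (4.0, 3.0)]  # known map landmarks (hypothetical)
SENSING_RANGE = 2.0
P_DETECT = 0.9  # probability of detecting a landmark inside the sensing range


def negative_update(particles, weights, detected):
    """Reweight particles using negative information.

    If no landmark was detected, particles that *expect* one within sensing
    range are penalized: each expected-but-unseen landmark contributes a
    non-detection likelihood of (1 - P_DETECT). Particles expecting nothing
    keep their weight.
    """
    if detected:
        return weights  # positive readings are handled by the usual sensor model
    new_weights = []
    for (px, py), w in zip(particles, weights):
        likelihood = 1.0
        for lx, ly in LANDMARKS:
            if math.hypot(lx - px, ly - py) <= SENSING_RANGE:
                likelihood *= 1.0 - P_DETECT  # expected but unseen
        new_weights.append(w * likelihood)
    total = sum(new_weights)
    return [w / total for w in new_weights]


def entropy(weights):
    """Entropy of the particle distribution, used to quantify localization."""
    return -sum(w * math.log(w) for w in weights if w > 0)


particles = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(500)]
weights = [1.0 / len(particles)] * len(particles)

before = entropy(weights)
weights = negative_update(particles, weights, detected=False)
print(f"entropy before: {before:.3f}, after: {entropy(weights):.3f}")
```

The entropy drops after the update because probability mass is shifted away from regions near landmarks, which is the effect the experiments quantify.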