Four experiments (E1–E2–E3–E4) investigated whether different acquisition modalities lead to the emergence of differences typically found between concrete and abstract words, as argued by the words as tools (WAT) proposal. To mimic the acquisition of concrete and abstract concepts, participants either manipulated novel objects or observed groups of objects interacting in novel ways (Training 1). In Test 1 participants decided whether two elements belonged to the same category. Later they read the category labels (Training 2); labels could be accompanied by an explanation of their meaning. Then participants observed previously seen exemplars and other elements, and were asked which of them could be named with a given label (Test 2). Across the experiments, it was more difficult to form abstract than concrete categories (Test 1); even when labels were added, abstract words remained more difficult than concrete words (Test 2). Test 3 differed across the experiments. In E1 participants performed a feature production task. Crucially, the associations produced with the novel words reflected the pattern evoked by existing concrete and abstract words: the former evoked more perceptual properties. In E2–E3–E4, Test 3 consisted of a color verification task with manual/verbal (keyboard/microphone) responses. Results showed an advantage of microphone over keyboard responses for abstract words, especially in the explanation condition. This supports WAT: due to their acquisition modality, concrete words evoke more manual information, whereas abstract words elicit more verbal information. This advantage was not present when linguistic information conflicted with perceptual information. Implications for theories and computational models of language grounding are discussed.
We show that complex visual tasks, such as position- and size-invariant shape recognition and navigation in the environment, can be tackled with simple architectures generated by a coevolutionary process of active vision and feature selection. Behavioral machines equipped with primitive vision systems and direct pathways between visual and motor neurons are evolved while they freely interact with their environments. We describe the application of this methodology to three sets of experiments, namely, shape discrimination, car driving, and robot navigation. We show that these systems develop sensitivity to a number of oriented, retinotopic visual features, such as edges, corners, and height, along with a behavioral repertoire to locate, bring, and keep these features in sensitive regions of the vision system, resembling strategies observed in simple insects.
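To make the architecture described above concrete, the sketch below shows one plausible reading of "direct pathways between visual and motor neurons" evolved by a generational algorithm: a single weight matrix maps a small retina straight to motor outputs, and a simple mutation-plus-selection loop searches the weights. The retina size, motor count, fitness function, and GA parameters are all illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch, assuming: a flat "retina" of photoreceptors wired
# directly to motor units (no hidden layer), and a basic generational
# genetic algorithm with truncation selection and Gaussian mutation.
import numpy as np

RETINA_SIZE = 25   # e.g. a 5x5 grid of photoreceptors (assumed)
N_MOTORS = 2       # e.g. pan/tilt of the vision system (assumed)

def act(weights, retina):
    """Direct visual-to-motor pathway: one weight matrix, nothing else."""
    w = weights.reshape(N_MOTORS, RETINA_SIZE)
    return np.tanh(w @ retina)  # motor commands in [-1, 1]

def fitness(weights, episodes=10):
    """Placeholder score; a real setup would run the agent in its environment."""
    rng = np.random.default_rng(0)
    score = 0.0
    for _ in range(episodes):
        retina = rng.random(RETINA_SIZE)      # stand-in visual input
        motors = act(weights, retina)
        score -= float(np.sum(motors ** 2))   # stand-in for a sensorimotor task score
    return score

def evolve(pop_size=50, generations=100, sigma=0.1):
    rng = np.random.default_rng(42)
    pop = rng.normal(0, 1, size=(pop_size, N_MOTORS * RETINA_SIZE))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[-pop_size // 5:]]          # keep the top 20%
        parents = elite[rng.integers(len(elite), size=pop_size)]  # resample parents
        pop = parents + rng.normal(0, sigma, size=parents.shape)  # mutate offspring
    return pop[np.argmax([fitness(ind) for ind in pop])]
```

The key design point the abstract emphasizes survives even in this toy form: with no hidden layer, any competence has to come from how behavior moves the retina over the scene, i.e., from active vision rather than from internal feature processing.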
Evolutionary robotics is a biologically inspired approach to robotics that is well suited to studying the evolution of communication. A new model for the emergence of communication is developed and tested through various simulation experiments. In the first simulation, the emergence of simple signalling behaviour is studied. This is used to investigate the inter-relationships between communication abilities, namely linguistic production and comprehension, and other behavioural skills. The model supports the hypothesis that the ability to form categories from direct interaction with an environment constitutes the grounds for the subsequent evolution of communication and language. In the second simulation, evolutionary robots are used to study the emergence of simple syntactic categories, e.g. action names (verbs). Comparisons between the two simulations indicate that the signalling lexicon that emerged in the first simulation follows the evolutionary pattern of nouns, as observed in related models on the evolution of syntactic categories. Results also support the language-origin hypothesis that nouns precede verbs in both phylogenesis and ontogenesis. Further extensions of this new evolutionary robotic model for testing hypotheses on language origins are also discussed.
This paper presents a cognitive robotics model for the study of the embodied representation of action words. We show how an iCub humanoid robot can learn the meaning of action words (i.e. words that represent dynamical events that happen in time) by physically interacting with the environment and linking the effects of its own actions with the behavior of the objects observed before and after the action. The control system of the robot is an artificial neural network trained to manipulate an object with the backpropagation-through-time algorithm. We show that in the presented model the grounding of action words depends directly on the way in which the agent interacts with and manipulates its environment.
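Since the abstract names backpropagation through time (BPTT) as the training algorithm, the sketch below shows the core of that technique on a toy recurrent controller: a forward pass that stores the hidden states, then an error signal unrolled backwards through the sequence. The task (matching a target motor trajectory from a stand-in sensory sequence), the network sizes, and the learning rate are illustrative assumptions, not the iCub setup itself.

```python
# A minimal sketch of BPTT for a vanilla recurrent network, under the
# assumptions stated above. Squared error between the network's motor
# outputs and a target trajectory is propagated back through time.
import numpy as np

rng = np.random.default_rng(0)
IN, HID, OUT, T = 4, 8, 2, 10          # input/hidden/output sizes; sequence length

Wx = rng.normal(0, 0.1, (HID, IN))     # input  -> hidden
Wh = rng.normal(0, 0.1, (HID, HID))    # hidden -> hidden (recurrent)
Wo = rng.normal(0, 0.1, (OUT, HID))    # hidden -> output (motor commands)

x = rng.random((T, IN))                # stand-in sensory sequence
target = rng.random((T, OUT))          # stand-in target motor trajectory

lr = 0.05
for epoch in range(200):
    # ---- forward pass, storing hidden states for the backward pass ----
    h = np.zeros((T + 1, HID))
    y = np.zeros((T, OUT))
    for t in range(T):
        h[t + 1] = np.tanh(Wx @ x[t] + Wh @ h[t])
        y[t] = Wo @ h[t + 1]
    # ---- backward pass: unroll the error through time ----
    dWx, dWh, dWo = np.zeros_like(Wx), np.zeros_like(Wh), np.zeros_like(Wo)
    dh_next = np.zeros(HID)
    for t in reversed(range(T)):
        dy = y[t] - target[t]              # dL/dy for squared error
        dWo += np.outer(dy, h[t + 1])
        dh = Wo.T @ dy + dh_next           # error from output and from future steps
        dz = dh * (1 - h[t + 1] ** 2)      # back through the tanh nonlinearity
        dWx += np.outer(dz, x[t])
        dWh += np.outer(dz, h[t])
        dh_next = Wh.T @ dz                # carry error to the previous time step
    for W, dW in ((Wx, dWx), (Wh, dWh), (Wo, dWo)):
        W -= lr * dW                       # gradient step (updates arrays in place)
```

The point of unrolling is visible in the `dh_next` term: the gradient at each step includes error arriving from all later steps, which is what lets a network of this kind credit early sensorimotor states for outcomes observed only after the action completes.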
In this paper we describe how a population of simulated robots, evolved for the ability to solve a collective navigation problem, develop individual and social/communication skills. In particular, we analyze the evolutionary origins of motor and signaling behaviors. The results indicate that the signals produced by the evolved robots, and their meanings, are grounded not only in the robots' sensory-motor systems but also in behavioral capabilities the robots acquired earlier. Moreover, the analysis of the co-evolution of the robots' individual and communicative abilities indicates how innovation in the former might create the adaptive basis for further innovations in the latter, and vice versa.