The work reported here experimentally investigates a striking generalization about vocabulary acquisition: Noun learning is superior to verb learning in the earliest moments of child language development. The dominant explanation of this phenomenon in the literature invokes differing conceptual requirements for items in these lexical categories: Verbs are cognitively more complex than nouns, and so their acquisition must await certain mental developments in the infant. In the present work, we investigate an alternative hypothesis; namely, that it is the information requirements of verb learning, not the conceptual requirements, that crucially determine the acquisition order. Efficient verb learning requires access to structural features of the exposure language and thus cannot take place until a scaffolding of noun knowledge enables the acquisition of clause-level syntax. More generally, we experimentally investigate the hypothesis that vocabulary acquisition takes place via an incremental constraint-satisfaction procedure that bootstraps itself into successively more sophisticated linguistic representations which, in turn, enable new kinds of vocabulary learning. If the experimental subjects were young children, it would be difficult to distinguish between this information-centered hypothesis and the conceptual change hypothesis. Therefore, the experimental "learners" are adults. The items to be "acquired" in the experiments were the 24 most frequent nouns and 24 most frequent verbs from a sample of maternal speech to 18-24-month-old infants. The various experiments ask about the kinds of information that will support identification of these words as they occur in mother-to-child discourse. Both the proportion correctly identified and the type of word that is identifiable change significantly as a function of information type.
We discuss these results as consistent with the incremental construction of a highly lexicalized grammar by cognitively and pragmatically sophisticated human infants, but inconsistent with a procedure in which lexical acquisition is independent of and antecedent to syntax acquisition.
Two experiments are reported which examine how manipulations of visual attention affect speakers' linguistic choices regarding word order, verb use and syntactic structure when describing simple pictured scenes. Experiment 1 presented participants with scenes designed to elicit the use of a perspective predicate (The man chases the dog/The dog flees from the man) or a conjoined noun phrase sentential Subject (A cat and a dog/A dog and a cat). Gaze was directed to a particular scene character by way of an attention-capture manipulation. Attention capture increased the likelihood that this character would be the sentential Subject and altered the choice of perspective verb or word order within conjoined NP Subjects accordingly. Experiment 2 extended these results to word order choice within Active versus Passive structures (The girl is kicking the boy/The boy is being kicked by the girl) and symmetrical predicates (The girl is meeting the boy/The boy is meeting the girl). Experiment 2 also found that early endogenous shifts in attention influence word order choices. These findings indicate a reliable relationship between initial looking patterns and speaking patterns, reflecting considerable parallelism between the on-line apprehension of events and the on-line construction of descriptive utterances.
We report three eyetracking experiments that examine the learning procedure used by adults as they pair novel words and visually presented referents over a sequence of referentially ambiguous trials. Successful learning under such conditions has been argued to be the product of a learning procedure in which participants provisionally pair each novel word with several possible referents and use a statistical-associative learning mechanism to gradually converge on a single mapping across learning instances. We argue here that successful learning in this setting is instead the product of a one-trial procedure in which a single hypothesized word-referent pairing is retained across learning instances, abandoned only if the subsequent instance fails to confirm the pairing – more a ‘fast mapping’ procedure than a gradual statistical one. We provide experimental evidence for this Propose-but-Verify learning procedure via three experiments in which adult participants attempted to learn the meanings of nonce words cross-situationally under varying degrees of referential uncertainty. The findings, using both explicit (referent selection) and implicit (eye movement) measures, show that even in these artificial learning contexts, which are far simpler than those encountered by a language learner in a natural environment, participants do not retain multiple meaning hypotheses across learning instances. As we discuss, these findings challenge ‘gradualist’ accounts of word learning and are consistent with the known rapid course of vocabulary learning in a first language.
Three experiments explored how words are learned from hearing them across contexts. Adults watched 40-s videotaped vignettes of parents uttering target words (in sentences) to their infants. Videos were muted except for a beep or nonsense word inserted where each "mystery word" was uttered. Participants were to identify the word. Exp. 1 demonstrated that most (90%) of these natural learning instances are quite uninformative, whereas a small minority (7%) are highly informative, as indexed by participants' identification accuracy. Preschoolers showed similar information sensitivity in a shorter experimental version. Two further experiments explored how cross-situational information helps, by manipulating the serial ordering of highly informative vignettes in five contexts. Response patterns revealed a learning procedure in which only a single meaning is hypothesized and retained across learning instances, unless disconfirmed. Neither alternative hypothesized meanings nor details of past learning situations were retained. These findings challenge current models of cross-situational learning which assert that multiple meaning hypotheses are stored and cross-tabulated via statistical procedures. Learners appear to use a one-trial "fast-mapping" procedure, even under conditions of referential uncertainty.
Keywords: acquisition | induction | language | vocabulary
Fundamental for each child entering the human community is the acquisition of word meanings: discovering which language sounds map onto which interpretations. Because these mappings are arbitrary and vary cross-linguistically, growing a vocabulary poses a classic learning problem for humans, both infant learners of a first language and second-language learners who must replace the original mappings with a new set. This experience-dependent learning problem for humans contrasts with animal communication systems in which the interpretations of species-specific barks, chirps, and growls are largely given for free by nature.
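The one-trial Propose-but-Verify procedure described in these abstracts can be illustrated with a short sketch. This is not the authors' experimental software; the function name, the trial format, and the random-guess policy are illustrative assumptions. The key property it captures is that the learner stores exactly one hypothesized referent per word, with no alternative hypotheses and no memory of past contexts:

```python
import random

def propose_but_verify(trials, seed=0):
    """Sketch of a one-trial (Propose-but-Verify) word learner.

    `trials` is a sequence of (word, referents_present) pairs, where
    referents_present is the set of candidate referents in view on that
    learning instance. For each word the learner keeps a single
    hypothesized referent: on each new instance the hypothesis is
    verified if its referent is present, and otherwise abandoned and
    replaced by a fresh random guess from the current context.
    """
    rng = random.Random(seed)
    hypothesis = {}  # word -> one hypothesized referent; no alternatives stored
    for word, referents in trials:
        guess = hypothesis.get(word)
        if guess is None or guess not in referents:
            # First exposure or disconfirmation: re-propose from the
            # current context only; past situations are not revisited.
            hypothesis[word] = rng.choice(sorted(referents))
        # Otherwise the single hypothesis is confirmed and retained.
    return hypothesis
```

A gradualist statistical learner would instead tally co-occurrence counts for every word-referent pair across all trials; the contrast in the abstracts above is precisely that participants' behavior matched the single-hypothesis sketch, not the cross-tabulating one.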
The present article provides experimental evidence concerning the primitive initial procedure by which humans acquire vocabulary items. A common assumption is that form-to-meaning mappings are discovered in a process mediated by observation of extralinguistic events: The learner matches recurrent speech events to recurrent aspects of the observed world. For example, when an English speaker says "dog" or a French speaker says "chien," there is likely to be a co-occurring dog sighting. Young children often acquire a word's meaning after a single such exposure to its use in context (1), particularly if there is strong pragmatic support (2) or a restrictive syntactic environment (3). The sheer size of the average vocabulary at age 6 y [estimated at 6,000-8,000 words (1)] suggests that this "fast mapping" of a sound segment onto its interpretation must happen very often, as is also attested in many laboratory studies (4, 5). Yet the world of words and their contexts is enormously complex. Few words are taught systematically, even in middle-class environments with...
This paper investigates possible influences of the lexical resources of individual languages on the spatial organization and reasoning styles of their users. That there are such powerful and pervasive influences of language on thought is the thesis of the Whorf-Sapir linguistic relativity hypothesis which, after a lengthy period in intellectual limbo, has recently returned to prominence in the anthropological, linguistic, and psycholinguistic literatures. Our point of departure is an influential group of cross-linguistic studies that appear to show that spatial reasoning is strongly affected by the spatial lexicon in everyday use in a community (e.g. Brown, P., & Levinson, S. C. (1993). Linguistic and nonlinguistic coding of spatial arrays: explorations in Mayan cognition (Working Paper No. 24). Nijmegen: Cognitive Anthropology Research Group, Max Planck Institute for Psycholinguistics; Cognitive Linguistics 6 (1995) 33). Specifically, certain groups customarily use an externally referenced spatial-coordinate system to refer to nearby directions and positions ("to the north") whereas English speakers usually employ a viewer-perspective system ("to the left"). Prior findings and interpretations have been to the effect that users of these two types of spatial system solve rotation problems in different ways, reasoning strategies imposed by habitual use of the language-particular lexicons themselves. The present studies reproduce these different problem-solving strategies in speakers of a single language (English) by manipulating landmark cues, suggesting that language itself may not be the key causal factor in choice of spatial perspective. Prior evidence on rotation problem solution from infants (e.g. Acredolo, L. P. (1979). Laboratory versus home: the effect of environment on the 9-month-old infant's choice of spatial reference system. Developmental Psychology, 15(6), 666-667) and from laboratory animals (e.g. Restle, F. (1957). Discrimination of cues in mazes: a resolution of the place-vs.-response question. Psychological Review, 64, 217-228) suggests a unified interpretation of the findings: creatures approach spatial problems differently depending on the availability and suitability of local landmark cues. The results are discussed in terms of the current debate on the relation of language to thought, with particular emphasis on the question of why different cultural communities favor different perspectives in talking about space.