Two experiments investigated the way acoustic markers of prominence influence the grouping of speech sequences by adults and 7-month-old infants. In the first experiment, adults were familiarized with and asked to memorize sequences of adjacent syllables that alternated in either pitch or duration. During the test phase, participants heard pairs of syllables with constant pitch and duration and were asked whether the syllables had appeared adjacently during familiarization. Adults were better at remembering pairs of syllables that during familiarization had short syllables preceding long syllables, or high-pitched syllables preceding low-pitched syllables. In the second experiment, infants were familiarized and tested with stimuli similar to those in the first experiment, and their preference for pairs of syllables was assessed using the head-turn preference paradigm. When familiarized with syllables alternating in pitch, infants showed a preference for listening to pairs of syllables that had high pitch in the first syllable. However, no preference was found when the familiarization stream alternated in duration. It is proposed that these perceptual biases help infants and adults find linguistic units in the continuous speech stream. While the bias for grouping based on pitch appears early in development, biases for durational grouping might rely on more extensive linguistic experience.
Language acquisition involves both acquiring a set of words (i.e., the lexicon) and learning the rules that combine them to form sentences (i.e., syntax). Here, we show that consonants are mainly involved in word processing, whereas vowels are favored for extracting and generalizing structural relations. We demonstrate that such a division of labor between consonants and vowels plays a role in language acquisition. In two very similar experimental paradigms, we show that 12-month-old infants rely more on the consonantal tier when identifying words (Experiment 1), but are better at extracting and generalizing repetition-based structures over the vocalic tier (Experiment 2). These results indicate that infants are able to exploit the functional differences between consonants and vowels at an age when they start acquiring the lexicon, and suggest that basic speech categories are assigned to different learning mechanisms that sustain early language acquisition.
Recent research has shown that specific areas of the human brain are activated by speech from the time of birth. However, it is currently unknown whether newborns' brains also encode and remember the sounds of words when processing speech. The present study investigates the type of information that newborns retain when they hear words and the brain structures that support word-sound recognition. Forty-four healthy newborns were tested with the functional near-infrared spectroscopy method to establish their ability to memorize the sound of a word and distinguish it from a phonetically similar one, 2 min after encoding. Right frontal regions, comparable to those activated in adults during retrieval of verbal material, showed a characteristic neural signature of recognition when newborns listened to a test word that had the same vowels as a previously heard word. In contrast, a characteristic novelty response was found when a test word had different vowels than the familiar word, despite having the same consonants. These results indicate that the information carried by vowels is better recognized by newborns than the information carried by consonants. Moreover, these data suggest that right frontal areas may support the recognition of speech sequences from the very first stages of language acquisition.

Keywords: neonate's memory | right frontal lobe | sound encoding | speech perception | oxyhemoglobin

Previous studies have shown that newborns and human fetuses are able to remember word sounds (1-3) as well as to extract prosodic properties of speech (4) or identity relations between syllables (5, 6). However, neither the specific elements newborns encode from speech, nor the brain structures that mediate speech recognition at birth, have been precisely characterized.
Building on a functional near-infrared spectroscopy (fNIRS) paradigm used to test memory in newborns (7), the present study asks whether the newborn can remember all of the sounds [consonants (C) and vowels (V)] that form a bisyllabic CVCV word, or whether some of these segments are better encoded than others. Judging by the number of studies reporting early abilities to discriminate fine phonetic contrasts (8), one might be inclined to ascribe to newborns a very detailed representation of the sound of words. In fact, newborns appear to discriminate all phonetic contrasts of the languages of the world, including those that their parents can no longer distinguish. Newborns distinguish consonants differing in one feature, for example place of articulation, voicing, manner of articulation (9-11), or duration (12), as well as vowel quality contrasts (13, 14). Do the representations newborns hold in memory contain the full range of segmental details suggested by these discrimination capacities? Different studies suggest that in adults (15-18), and in infants older than 12 mo (19-23), consonantal sequences are encoded more robustly than vocalic sequences for the representation of words. It is possible that a similar bias (namely, preference for consonantal information when encodi...
Background
The capacity to memorize speech sounds is crucial for language acquisition. Newborn human infants can discriminate phonetic contrasts and extract rhythm, prosodic information, and simple regularities from speech. Yet, there is scarce evidence that infants can recognize common words from the surrounding language before four months of age.

Methodology/Principal Findings
We studied one hundred and twelve 1- to 5-day-old infants, using functional near-infrared spectroscopy (fNIRS). We found that newborns tested with a novel bisyllabic word show greater hemodynamic brain response than newborns tested with a familiar bisyllabic word. We showed that newborns recognize the familiar word after two minutes of silence or after hearing music, but not after hearing a different word.

Conclusions/Significance
The data show that retroactive interference is an important cause of forgetting in the early stages of language acquisition. Moreover, because neonates forget words in the presence of some, but not all, sounds, the results indicate that the interference phenomenon that causes forgetting is selective.