Two experiments investigated the way acoustic markers of prominence influence the grouping of speech sequences by adults and 7-month-old infants. In the first experiment, adults were familiarized with and asked to memorize sequences of adjacent syllables that alternated in either pitch or duration. During the test phase, participants heard pairs of syllables with constant pitch and duration and were asked whether the syllables had appeared adjacently during familiarization. Adults were better at remembering pairs of syllables that, during familiarization, had short syllables preceding long syllables, or high-pitched syllables preceding low-pitched syllables. In the second experiment, infants were familiarized and tested with similar stimuli as in the first experiment, and their preference for pairs of syllables was assessed using the head-turn preference paradigm. When familiarized with syllables alternating in pitch, infants showed a preference for pairs of syllables that had high pitch on the first syllable. However, no preference was found when the familiarization stream alternated in duration. It is proposed that these perceptual biases help infants and adults find linguistic units in the continuous speech stream. While the bias for grouping based on pitch appears early in development, biases for durational grouping might rely on more extensive linguistic experience.
Language acquisition involves both acquiring a set of words (i.e. the lexicon) and learning the rules that combine them to form sentences (i.e. syntax). Here, we show that consonants are mainly involved in word processing, whereas vowels are favored for extracting and generalizing structural relations. We demonstrate that such a division of labor between consonants and vowels plays a role in language acquisition. In two very similar experimental paradigms, we show that 12-month-old infants rely more on the consonantal tier when identifying words (Experiment 1), but are better at extracting and generalizing repetition-based structures over the vocalic tier (Experiment 2). These results indicate that infants are able to exploit the functional differences between consonants and vowels at an age when they start acquiring the lexicon, and suggest that basic speech categories are assigned to different learning mechanisms that sustain early language acquisition.
Recent research has shown that specific areas of the human brain are activated by speech from the time of birth. However, it is currently unknown whether newborns' brains also encode and remember the sounds of words when processing speech. The present study investigates the type of information that newborns retain when they hear words and the brain structures that support word-sound recognition. Forty-four healthy newborns were tested with the functional near-infrared spectroscopy method to establish their ability to memorize the sound of a word and distinguish it from a phonetically similar one, 2 min after encoding. Right frontal regions (comparable to those activated in adults during retrieval of verbal material) showed a characteristic neural signature of recognition when newborns listened to a test word that had the same vowels as a previously heard word. In contrast, a characteristic novelty response was found when a test word had different vowels than the familiar word, despite having the same consonants. These results indicate that the information carried by vowels is better recognized by newborns than the information carried by consonants. Moreover, these data suggest that right frontal areas may support the recognition of speech sequences from the very first stages of language acquisition.

Keywords: neonate's memory | right frontal lobe | sound encoding | speech perception | oxyhemoglobin

Previous studies have shown that newborns and human fetuses are able to remember word sounds (1-3) as well as to extract prosodic properties of speech (4) or identity relations between syllables (5, 6). However, neither the specific elements newborns encode from speech, nor the brain structures that mediate speech recognition at birth, have been precisely characterized.
Building on a functional near-infrared spectroscopy (fNIRS) paradigm used to test memory in newborns (7), the present study asks whether the newborn can remember all of the sounds [consonants (C) and vowels (V)] that form a bisyllabic CVCV word, or whether some of these segments are better encoded than others. Judging by the number of studies reporting early abilities to discriminate fine phonetic contrasts (8), one might be inclined to ascribe to newborns a very detailed representation of the sound of words. In fact, newborns appear to discriminate all phonetic contrasts of the languages of the world, including those that their parents can no longer distinguish. Newborns distinguish consonants differing in one feature, for example, place of articulation, voicing, manner of articulation (9-11), or duration (12), as well as vowel quality contrasts (13, 14). Do the representations newborns hold in memory contain the full range of segmental details suggested by these discrimination capacities? Different studies suggest that in adults (15-18), and in infants older than 12 mo (19-23), consonantal sequences are encoded more robustly than vocalic sequences for the representation of words. It is possible that a similar bias (namely, a preference for consonantal information when encodi...
Background: The capacity to memorize speech sounds is crucial for language acquisition. Newborn human infants can discriminate phonetic contrasts and extract rhythm, prosodic information, and simple regularities from speech. Yet, there is scarce evidence that infants can recognize common words from the surrounding language before four months of age.
Methodology/Principal Findings: We studied one hundred and twelve 1- to 5-day-old infants, using functional near-infrared spectroscopy (fNIRS). We found that newborns tested with a novel bisyllabic word show a greater hemodynamic brain response than newborns tested with a familiar bisyllabic word. We showed that newborns recognize the familiar word after two minutes of silence or after hearing music, but not after hearing a different word.
Conclusions/Significance: The data show that retroactive interference is an important cause of forgetting in the early stages of language acquisition. Moreover, because neonates forget words in the presence of some, but not all, sounds, the results indicate that the interference phenomenon that causes forgetting is selective.
The evolution of human languages is driven both by primitive biases present in the human sensorimotor systems and by cultural transmission among speakers. However, whether the design of the language faculty is further shaped by linguistic biological biases remains controversial. To address this question, we used near-infrared spectroscopy to examine whether the brain activity of neonates is sensitive to a putatively universal phonological constraint. Across languages, syllables like blif are preferred to both lbif and bdif. Newborn infants (2-5 d old) listening to these three types of syllables displayed distinct hemodynamic responses in temporal-perisylvian areas of their left hemisphere. Moreover, the oxyhemoglobin concentration changes elicited by a syllable type mirrored both the degree of its preference across languages and behavioral linguistic preferences documented experimentally in adulthood. These findings suggest that humans possess early, experience-independent, linguistic biases concerning syllable structure that shape language perception and acquisition.

Keywords: human newborns | speech perception | NIRS | sonority | phonology
Anomia, a word-finding difficulty, is a frequent consequence of poststroke linguistic disturbance; it is associated with both fluent and nonfluent aphasia and requires long-term, specific, and intensive speech rehabilitation. The present study explored the feasibility of telerehabilitation as compared to conventional face-to-face treatment of naming in patients with poststroke anomia. Five chronic aphasic patients participated in this study, which was characterized by a strictly controlled crossover design; well-balanced lists of words in picture-naming tasks, where progressive phonological cues were provided; and the same treatment delivered in both modes of administration. ANOVA was used to compare naming accuracy across the two types of treatment at three time points: baseline, after treatment, and follow-up. The results revealed no main effect of treatment type (P = 0.844), indicating that face-to-face and teletreatment yielded comparable results. Moreover, there was a significant main effect of time (P = 0.0004), due to better performance immediately after treatment and at follow-up compared to baseline. These preliminary results show the feasibility of teletreatment applied to lexical deficits in chronic stroke patients, extending previous work on telerehabilitation and opening new vistas for future studies on teletreatment of language functions.
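As a rough illustration of the time-effect analysis described above, the following is a minimal sketch of a one-way repeated-measures ANOVA over three time points (baseline, post-treatment, follow-up) in plain Python. The accuracy scores for five patients are invented; the abstract reports only the P values, not the underlying data or the exact ANOVA design.

```python
# Sketch of a one-way repeated-measures ANOVA (time effect only),
# assuming one accuracy score per patient per time point.
# All data below are hypothetical, for illustration only.

def rm_anova(scores):
    """scores: dict condition -> list of per-subject values (same subject order)."""
    conds = list(scores)
    n_subj = len(scores[conds[0]])
    n_cond = len(conds)
    all_vals = [v for c in conds for v in scores[c]]
    grand = sum(all_vals) / len(all_vals)

    cond_means = {c: sum(scores[c]) / n_subj for c in conds}
    subj_means = [sum(scores[c][s] for c in conds) / n_cond
                  for s in range(n_subj)]

    # Partition the total sum of squares into condition, subject,
    # and residual (error) components.
    ss_total = sum((v - grand) ** 2 for v in all_vals)
    ss_cond = n_subj * sum((m - grand) ** 2 for m in cond_means.values())
    ss_subj = n_cond * sum((m - grand) ** 2 for m in subj_means)
    ss_err = ss_total - ss_cond - ss_subj

    df_cond = n_cond - 1
    df_err = (n_subj - 1) * (n_cond - 1)
    f = (ss_cond / df_cond) / (ss_err / df_err)
    return f, df_cond, df_err

# Hypothetical naming accuracy (% correct) for 5 patients.
accuracy = {
    "baseline":  [40, 35, 50, 45, 30],
    "post":      [62, 55, 68, 66, 49],
    "follow-up": [54, 52, 63, 61, 45],
}
f, df1, df2 = rm_anova(accuracy)
print(f"F({df1},{df2}) = {f:.2f}")
```

In practice such an analysis would also include the treatment-type factor and its interaction with time; the sketch keeps only the within-subject time factor to show where the significant main effect of time would come from.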
In language, the relative order of words in sentences carries important grammatical functions. However, the developmental origins and the neural correlates of the ability to track word order are to date poorly understood. The current study therefore investigates the origins of infants' ability to learn about the sequential order of words, using near-infrared spectroscopy (NIRS) with newborn infants. We conducted two experiments: one in which a word order change was implemented in 4-word sequences recorded with a list intonation (as if each word were a separate item in a list; list prosody condition, Experiment 1), and one in which the same 4-word sequences were recorded with a well-formed utterance-level prosodic contour (utterance prosody condition, Experiment 2). We found that newborns could detect the violation of word order in the list prosody condition, but not in the utterance prosody condition. These results suggest that while newborns are already sensitive to word order in linguistic sequences, prosody appears to be a stronger cue than word order for the identification of linguistic units at birth.
The aim of this study was to build an instrument, the numerical activities of daily living (NADL) battery, designed to identify the specific impairments in numerical functions that may cause problems in everyday life. These impairments go beyond what can be inferred from the available scales evaluating activities of daily living in general, and are not adequately captured by measures of the general deterioration of cognitive functions as assessed by standard clinical instruments such as the MMSE and MoCA. We assessed a control group (n = 148) and a patient group affected by a wide variety of neurological conditions (n = 175) with NADL, along with IADL, MMSE, and MoCA. The NADL battery was found to have satisfactory construct validity and reliability across a wide age range. This enabled us to calculate appropriate criteria for impairment that took age and education into account. When performance was assessed with objective tests of numerical abilities, neurological patients tended to overestimate their own abilities compared with the judgments made by their caregivers.
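The abstract above does not state which reliability statistic was computed for the NADL battery, but a standard internal-consistency measure for such instruments is Cronbach's alpha. The sketch below shows how it is calculated; the item scores for six hypothetical subjects on four hypothetical battery items are invented.

```python
# Sketch: Cronbach's alpha, a common internal-consistency (reliability)
# statistic for multi-item test batteries. The data are hypothetical;
# the abstract does not report which statistic was actually used.

def cronbach_alpha(items):
    """items: list of per-item score lists, all over the same subjects."""
    k = len(items)
    n_subj = len(items[0])

    def var(xs):  # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Per-subject total score across all items.
    totals = [sum(item[s] for item in items) for s in range(n_subj)]

    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Invented scores for 6 subjects on 4 battery items.
scores = [
    [1, 2, 3, 4, 5, 6],
    [2, 2, 3, 5, 5, 6],
    [1, 3, 3, 4, 6, 6],
    [2, 2, 4, 4, 5, 7],
]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Because the invented items track each other closely across subjects, the resulting alpha is high; values around 0.7 or above are conventionally read as satisfactory reliability.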