Do Ss compare multidigit numbers digit by digit (symbolic model), or do they compute the whole magnitude of the numbers before comparing them (holistic model)? In 4 experiments of timed 2-digit number comparisons with a fixed standard, the findings of Hinrichs, Yurko, and Hu (1981) were extended with French Ss. Reaction times (RTs) decreased with target-standard distance, with discontinuities at the boundaries of the standard's decade appearing only with standards 55 and 66 but not with 65. The data are compatible with the holistic model. A symbolic interference model that posits the simultaneous comparison of decades and units can also account for the results. To separate the 2 models, the decades and units digits of target numbers were presented asynchronously in Experiment 4. Contrary to the prediction of the interference model, presenting the units before the decades did not change the influence of units on RTs. Pros and cons of the holistic model are discussed.
Spoken languages have been classified by linguists according to their rhythmic properties, and psycholinguists have relied on this classification to account for infants' capacity to discriminate languages. Although researchers have measured many speech signal properties, they have failed to identify reliable acoustic characteristics for language classes. This paper presents instrumental measurements based on a consonant/vowel segmentation for eight languages. The measurements suggest that intuitive rhythm types reflect specific phonological properties, which in turn are signaled by the acoustic/phonetic properties of speech. The data support the notion of rhythm classes and also allow the simulation of infant language discrimination, consistent with the hypothesis that newborns rely on a coarse segmentation of speech. A hypothesis is proposed regarding the role of rhythm perception in language acquisition.
Three experiments investigated the ability of French newborns to discriminate between sets of sentences in different foreign languages. The sentences were low-pass filtered to reduce segmental information while sparing prosodic information. Infants discriminated between stress-timed English and mora-timed Japanese (Experiment 1) but failed to discriminate between stress-timed English and stress-timed Dutch (Experiment 2). In Experiment 3, infants heard different combinations of sentences from English, Dutch, Spanish, and Italian. Discrimination was observed only when English and Dutch sentences were contrasted with Spanish and Italian sentences. These results suggest that newborns use prosodic and, more specifically, rhythmic information to classify utterances into broad language classes defined according to global rhythmic properties. Implications of this for the acquisition of the rhythmic properties of the native language are discussed.
Learning a language requires both statistical computations to identify words in speech and algebraic-like computations to discover higher level (grammatical) structure. Here we show that these computations can be influenced by subtle cues in the speech signal. After a short familiarization to a continuous speech stream, adult listeners are able to segment it using powerful statistics, but they fail to extract the structural regularities included in the stream even when the familiarization is greatly extended. With the introduction of subliminal segmentation cues, however, these regularities can be rapidly captured.
Does the neonate's brain have left hemisphere (LH) dominance for speech? Twelve full-term neonates participated in an optical topography study designed to assess whether the neonate brain responds specifically to linguistic stimuli. Participants were tested with normal infant-directed speech, with the same utterances played in reverse, and without auditory stimulation. We used a 24-channel optical topography device to assess changes in the concentration of total hemoglobin in response to auditory stimulation in 12 areas of the right hemisphere and 12 areas of the LH. We found that LH temporal areas showed significantly more activation when infants were exposed to normal speech than to backward speech or silence. We conclude that neonates are born with an LH superiority to process specific properties of speech.

Two models attempt to account for the origin of the left-hemisphere (LH) dominance for speech. The first model assumes that, at birth, the LH displays superiority in processing all acoustic signals (1). The second postulates that neonates are endowed with specific structures to process speech signals in the LH (2). Both models assume an LH superiority at birth. However, only the second model postulates that the LH superiority is specific for speech and that it may be intrinsically related to the emergence of the language faculty. During development, the infant's brain grows and matures, and its functional organization changes, even if its gross anatomy displays striking similarities to that of the adult (3) from the start. The association of language with the LH may arise as a consequence of language acquisition, or, alternatively, this association may reflect an innate disposition of certain areas of the brain for language. Several behavioral studies have focused on this issue.
One study (4) measured foot-kicking responses and observed behaviors that suggested an LH superiority for speech stimuli as compared with other auditory stimuli only hours after birth. Another study reported a right ear advantage for speech stimuli in 3-month-old infants by using the orienting response (5). An additional study used the nonnutritive sucking response to test 2-week-old infants and found a right ear advantage for speech but not for other auditory stimuli (6). A recent study with older infants reports that, as soon as babbling sets in, the mouth tends to rise toward the right side of the face, suggesting an underlying LH superiority. This asymmetry is absent during nonlinguistic vocal gestures (7). These behavioral studies suggest that speech stimuli presented to prelinguistic infants result in greater LH involvement and that an LH superiority is apparent as soon as the first language-like productions begin. Nonetheless, behavioral methods have limitations: neonates often fail to complete the tests because of fussing or crying. The advent of brain-imaging techniques has made it possible to test young infants even when they fail to make overt responses. Furthermore, imaging methods link behavioral observations to their...
Children exposed to bilingual input typically learn 2 languages without obvious difficulties. However, it is unclear how preverbal infants cope with the inconsistent input and how bilingualism affects early development. In 3 eye-tracking studies we show that 7-month-old infants, raised with 2 languages from birth, display improved cognitive control abilities compared with matched monolinguals. Whereas both monolinguals and bilinguals learned to respond to a speech or visual cue to anticipate a reward on one side of a screen, only bilinguals succeeded in redirecting their anticipatory looks when the cue began signaling the reward on the opposite side. Bilingual infants rapidly suppressed their looks to the first location and learned the new response. These findings show that processing representations from 2 languages leads to a domain-general enhancement of the cognitive control system well before the onset of speech.

Keywords: cognitive development | early bilingualism | executive functions | eye tracking | infant cognition
What are the origins of the efficient language learning abilities that allow humans to acquire their mother tongue in just a few years very early in life? Although previous studies have identified different mechanisms underlying the acquisition of auditory and speech patterns in older infants and adults, the earliest sensitivities remain unexplored. To address this issue, we investigated the ability of newborns to learn simple repetition-based structures in two optical brain-imaging experiments. In the first experiment, 22 neonates listened to syllable sequences containing immediate repetitions (ABB; e.g., "mubaba," "penana"), intermixed with random control sequences (ABC; e.g., "mubage," "penaku"). We found increased responses to the repetition sequences in the temporal and left frontal areas, indicating that the newborn brain differentiated the two patterns. The repetition sequences evoked greater activation than the random sequences during the first few trials, suggesting the presence of an automatic perceptual mechanism to detect repetitions. In addition, over the subsequent trials, activation increased further in response to the repetition sequences but not in response to the random sequences, indicating that recognition of the ABB pattern was enhanced by repeated exposure. In the second experiment, in which nonadjacent repetitions (ABA; e.g., "bamuba," "napena") were contrasted with the same random controls, no discrimination was observed. These findings suggest that newborns are sensitive to certain input configurations in the auditory domain, a perceptual ability that might facilitate later language development.

Keywords: language acquisition | newborns | optical imaging | perceptual primitives | speech perception