Two studies examined relationships between infants' early speech processing performance and later language and cognitive outcomes. Study 1 found that performance on speech segmentation tasks before 12 months of age related to expressive vocabulary at 24 months. However, performance on other tasks was not related to 2-year vocabulary. Study 2 assessed linguistic and cognitive skills at 4-6 years of age for children who had participated in segmentation studies as infants. Children who had been able to segment words from fluent speech scored higher on language measures, but not general IQ, as preschoolers. Results suggest that speech segmentation ability is an important prerequisite for successful language development, and they offer potential for developing measures to detect language impairment at an earlier age.
The effect of talker and token variability on speech perception has engendered a great deal of research. However, most of this research has compared listener performance in multiple-talker (or variable) situations to performance in single-talker conditions. It remains unclear to what extent listeners are affected by the degree of variability within a talker, rather than simply the existence of variability (being in a multitalker environment). The present study has two goals: First, the degree of variability among speakers in their /s/ and /ʃ/ productions was measured. Even among a relatively small pool of talkers, there was a range of speech variability: some talkers had /s/ and /ʃ/ categories that were quite distinct from one another in terms of frication centroid and skewness, while other speakers had categories that actually overlapped one another. The second goal was to examine whether this degree of variability within a talker influenced perception. Listeners were presented with natural /s/ and /ʃ/ tokens for identification, under ideal listening conditions, and slower response times were found for speakers whose productions were more variable than for speakers with more internal consistency in their speech. This suggests that the degree of variability, not just the existence of it, may be the more critical factor in perception.
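The frication centroid and skewness mentioned above are standard spectral moments, computed by treating the magnitude spectrum of the frication noise as a distribution over frequency. A minimal sketch of such a computation (the function name and parameters are illustrative, not taken from the study):

```python
import numpy as np

def spectral_moments(segment, sample_rate):
    """Compute the spectral centroid (first moment) and skewness
    (normalized third moment) of a frication noise segment."""
    spectrum = np.abs(np.fft.rfft(segment))
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / sample_rate)
    p = spectrum / spectrum.sum()            # normalize to a distribution
    centroid = np.sum(freqs * p)             # mean frequency (Hz)
    variance = np.sum(((freqs - centroid) ** 2) * p)
    skewness = np.sum(((freqs - centroid) ** 3) * p) / variance ** 1.5
    return centroid, skewness

# White noise at 16 kHz has a roughly flat spectrum, so its centroid
# falls near sample_rate / 4 and its skewness near zero.
rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)
c, s = spectral_moments(noise, 16000)
```

On such measures, an /s/ (energy concentrated at higher frequencies) yields a higher centroid and more negative skewness than an /ʃ/, which is why overlap between the two talker-specific distributions is a natural index of within-talker variability.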
A series of studies was undertaken to examine how rate normalization in speech perception would be influenced by the similarity, duration, and phonotactics of phonemes that were adjacent to or distal from the initial, target phoneme. The duration of the adjacent (following) phoneme always had an effect on perception of the initial target. Neither phonotactics nor acoustic similarity seemed to have any influence on this rate normalization effect. However, effects of the duration of the nonadjacent (distal) phoneme were found only when that phoneme was temporally close to the target. These results suggest that there is a temporal window over which rate normalization occurs. In most cases, only the adjacent phoneme or adjacent two phonemes will fall within this window and thus influence perception of a phoneme distinction.

One of the fundamental issues in speech perception research involves the apparent lack of invariance between the acoustic signal and the listener's perception. Listeners somehow manage to perceive messages correctly, despite the variability in the acoustic signal caused by changes in speaking rate, talkers, and dialect. Researchers often have tried to examine each of these issues separately, in the hope that they would later be able to combine their findings into one theory.

One of the sources of variability in the acoustic signal is the rate at which a person speaks. People do not talk at a constant rate, and certain phonemes change substantially in duration as speaking rate changes (Crystal & House, 1982, 1990; Miller, Grosjean, & Lomanto, 1984; or see Miller, 1981, for a review of earlier work). In addition, talkers differ in their intrinsic rate of speech (see Crystal & House, 1988d), and some dialects either lengthen sounds or shorten them. The issue of rate change is especially important because some phonemic contrasts are cued, in whole or in part, by their duration.
For instance, the /b/-/w/ manner contrast can be cued by differences in duration alone, with shorter initial transitions being heard as more "b-like" and longer transitions as more "w-like" (Liberman, Delattre, Gerstman, & Cooper, 1956; Miller & Liberman, 1979). However, when we listen to someone who talks very quickly, we still hear /w/ phonemes: they do not all sound like stops. Conversely, when we listen to someone who speaks very slowly, intended /b/s do not all sound like /w/s. Miller and Baer (1983) analyzed the transition durations for /ba/ and /wa/ and found that for a given speaking rate, /w/ transition...

This research was supported by NIDCD Grant RO1-DC00219 to SUNY at Buffalo and by a National Science Foundation Graduate Fellowship to the first author. Some of these data were previously presented at the 123rd meeting of the Acoustical Society of America, May 1992, in Salt Lake City, and at the 124th meeting of the Acoustical Society of America, October 1992, in New Orleans. Comments may be sent to either author at the Department of Psychology, Park Hall, SUNY at Buffalo, Buffalo, NY 14260 (e-mail: rochelle@art.fss.buffalo.edu).
In 4 studies, 7.5‐month‐olds used synchronized visual–auditory correlations to separate a target speech stream when a distractor passage was presented at equal loudness. Infants succeeded in a segmentation task (using the head‐turn preference procedure with video familiarization) when a video of the talker's face was synchronized with the target passage (Experiment 1, N=30). Infants did not succeed in this task when an unsynchronized (Experiment 2, N=30) or static (Experiment 3, N=30) face was presented during familiarization. Infants also succeeded when viewing a synchronized oscilloscope pattern (Experiment 4, N=26), suggesting that their ability to use visual information is related to domain‐general sensitivities to any synchronized auditory–visual correspondence.
Both the input directed to the child, and the child's ability to process that input, are likely to impact the child's language acquisition. We explore how these factors inter-relate by tracking the relationships among: (a) lexical properties of maternal child-directed speech to prelinguistic (-month-old) infants (N = ); (b) these infants' abilities to segment lexical targets from conversational child-directed utterances in an experimental paradigm; and (c) the children's vocabulary outcomes at age ;. Both repetitiveness in maternal input and the child's speech segmentation skills at age ; predicted language outcomes at ;; moreover, while these factors were somewhat inter-related, they each had independent effects on toddler vocabulary skill, and there was no interaction between the two.

A great deal of research (summarized briefly below) has explored how the amount/nature of child-directed speech (CDS) might influence children's ...