Perceptual attunement to one's native language results in language-specific processing of speech sounds. This includes stress cues, instantiated by differences in intensity, pitch, and duration. The present study investigates the effects of linguistic experience on the perception of these cues by studying the Iambic-Trochaic Law (ITL), which states that listeners group sounds trochaically (strong-weak) if the sounds vary in loudness or pitch and iambically (weak-strong) if they vary in duration. Participants were native listeners of either French or German; this comparison was chosen because French adults have been shown to be less sensitive than speakers of German and other languages to word-level stress, which is communicated by variation in cues such as intensity, fundamental frequency (F0), or duration. In experiment 1, participants listened to sequences of co-articulated syllables varying in either intensity or duration. The German participants were more consistent in their grouping than the French participants for both cues. Experiment 2 was identical to experiment 1 except that intensity variation was replaced by pitch variation. German participants again showed more consistency for both cues, and French participants showed especially inconsistent grouping for the pitch-varied sequences. These experiments show that the perception of linguistic rhythm is strongly influenced by linguistic experience.
Language experience clearly affects the perception of speech, but little is known about whether these differences in perception extend to non-speech sounds. In this study, we investigated rhythmic perception of non-linguistic sounds in speakers of French and German using a grouping task, in which complexity (variability in sounds, presence of pauses) was manipulated. In this task, participants grouped sequences of auditory chimeras formed from musical instruments. These chimeras mimic the complexity of speech without being speech. We found that, while showing the same overall grouping preferences, the German speakers showed stronger biases than the French speakers in grouping complex sequences. Sound variability reduced all participants' biases, resulting in the French group showing no grouping preference for the most variable sequences, though this reduction was attenuated by musical experience. In sum, this study demonstrates that linguistic experience, musical experience, and complexity affect rhythmic grouping of non-linguistic sounds and suggests that experience with acoustic cues in a meaningful context (language or music) is necessary for developing a robust grouping preference that survives acoustic variability.
Rhythm perception is assumed to be guided by a domain-general auditory principle, the Iambic/Trochaic Law, stating that sounds varying in intensity are grouped as strong-weak, and sounds varying in duration are grouped as weak-strong. Recently, Bhatara et al. (2013) showed that rhythmic grouping is influenced by native language experience, French listeners having weaker grouping preferences than German listeners. This study explores whether L2 knowledge and musical experience also affect rhythmic grouping. In a grouping task, French late learners of German listened to sequences of coarticulated syllables varying in either intensity or duration. Data on their language and musical experience were obtained by a questionnaire. Mixed-effects model comparisons showed influences of musical experience as well as L2 input quality and quantity on grouping preferences. These results imply that adult French listeners' sensitivity to rhythm can be enhanced through L2 and musical experience.
More than 30 years have passed since Mehler and colleagues (1988) proposed that newborns can discriminate between languages that belong to different rhythm classes: stress-, syllable-, or mora-timed. They subsequently developed the hypothesis that infants are sensitive to differences in vowel and consonant interval durations as acoustic correlates of rhythm classes. It remains unknown exactly which durational computations infants use when perceiving speech for the purposes of distinguishing languages. Here, a meta-analysis of studies on infants' language discrimination skills over the first year of life was conducted, aiming to quantify how language discrimination skills change with age and are modulated by rhythm classes or durational metrics. A systematic literature search identified 42 studies that tested infants' (birth to 12 months) discrimination or preference between two language varieties by presenting infants with auditory or audio-visual continuous speech. Quantitative data synthesis was conducted using multivariate random-effects meta-analytic models with the factors rhythm class difference, age, stimulus manipulation, method, and metrics operationalising proportions of and variability in vowel and consonant interval durations, to explore which factors best account for language discrimination or preference. Results revealed that smaller differences in vowel interval variability (ΔV) and larger differences in successive consonantal interval variability (rPVI-C) were associated with more successful language discrimination, and better accounted for discrimination results than the factor rhythm class. There were no effects of age for discrimination, but results of preference studies were affected by age: the older infants get, the more they prefer non-native languages that are rhythmically similar to their native language, but not non-native languages that are rhythmically distinct.
These findings can inform theories on language discrimination that have previously focussed on rhythm class, by providing a novel way to
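The two durational metrics named above can be illustrated with a short sketch. Following the standard definitions in the speech-rhythm literature, ΔV is the standard deviation of vocalic interval durations, and rPVI-C (the raw Pairwise Variability Index for consonants) is the mean absolute difference between successive consonantal interval durations. The function names and the duration values below are invented for illustration; this is not the meta-analysis code.

```python
# Illustrative sketch of the durational metrics ΔV and rPVI-C.
# Interval durations are in milliseconds; the values are made up.

def delta_v(vowel_intervals):
    """ΔV: standard deviation of vocalic interval durations."""
    n = len(vowel_intervals)
    mean = sum(vowel_intervals) / n
    return (sum((d - mean) ** 2 for d in vowel_intervals) / n) ** 0.5

def rpvi_c(consonant_intervals):
    """rPVI-C: mean absolute difference between successive
    consonantal interval durations."""
    diffs = [abs(a - b)
             for a, b in zip(consonant_intervals, consonant_intervals[1:])]
    return sum(diffs) / len(diffs)

vowels = [80.0, 120.0, 95.0, 140.0]     # vocalic intervals (ms), invented
consonants = [60.0, 90.0, 70.0, 110.0]  # consonantal intervals (ms), invented

print(round(delta_v(vowels), 2))
print(round(rpvi_c(consonants), 2))   # → 30.0
```

In the meta-analysis, differences in such metrics between the two language varieties presented to an infant serve as predictors of discrimination success.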
Recent studies have suggested that musical rhythm perception ability can affect the phonological system. The most prevalent causal account for developmental dyslexia is the phonological deficit hypothesis. As rhythm is a subpart of phonology, we hypothesized that reading deficits in dyslexia are associated with rhythm processing in speech and in music. In a rhythmic grouping task, adults with diagnosed dyslexia and age-matched controls listened to speech streams with syllables alternating in intensity, duration, or neither, and indicated whether they perceived a strong-weak or weak-strong rhythm pattern. Additionally, their reading and musical rhythm abilities were measured. Results showed that adults with dyslexia had lower musical rhythm abilities than adults without dyslexia. Moreover, lower musical rhythm ability was associated with lower reading ability in dyslexia. However, speech grouping by adults with dyslexia was not impaired when musical rhythm perception ability was controlled: like adults without dyslexia, they showed consistent preferences. At the same time, rhythmic grouping was predicted by musical rhythm perception ability, irrespective of dyslexia. The results suggest associations among musical rhythm perception ability, speech rhythm perception, and reading ability. This highlights the importance of considering individual variability to better understand dyslexia and raises the possibility that musical rhythm perception ability is a key to phonological and reading acquisition.