There is growing evidence that motor and speech disorders co-occur during development. In the present study, we investigated whether stuttering, a developmental speech disorder, is associated with a predictive timing deficit in childhood and adolescence. By testing sensorimotor synchronization abilities, we aimed to assess whether predictive timing is dysfunctional in young participants who stutter (8–16 years). Twenty German children and adolescents who stutter and 43 non-stuttering participants matched for age and musical training were tested on their ability to synchronize their finger taps with periodic tone sequences and with a musical beat. Forty percent of children and 90% of adolescents who stutter displayed poor synchronization with both metronome and musical stimuli, falling below the 2.5th percentile of the population estimated from the performance of the non-stuttering group. Synchronization deficits were characterized by lower synchronization accuracy, lower consistency, or both. Lower accuracy manifested as over-anticipation of the pacing event in participants who stutter. Moreover, individual profiles revealed that lower consistency was typical of participants with severe stuttering. These findings support the idea that malfunctioning predictive timing during auditory–motor coupling plays a role in stuttering in children and adolescents.
Repetition can boost memory and perception. However, repeating the same stimulus several times in immediate succession also induces intriguing perceptual transformations and illusions. Here, we investigate the Speech to Song Transformation (S2ST), a massed repetition effect in the auditory modality that crosses the boundaries between language and music. In the S2ST, a phrase repeated several times shifts to being heard as sung. To better understand this unique cross-domain transformation, we examined the perceptual determinants of the S2ST, in particular the role of acoustics. In two experiments, we examined the effects of two pitch properties and three rhythmic properties on the probability and speed of occurrence of the transformation. Results showed that both pitch and rhythmic properties are key features fostering the transformation. However, some properties proved more conducive to the S2ST than others. Stable tonal targets that allowed for the perception of a musical melody led to the S2ST more often and more quickly than scalar intervals did. Recurring durational contrasts arising from segmental grouping, favoring a metrical interpretation of the stimulus, also facilitated the S2ST. This was not the case, however, for a regular beat structure within and across repetitions. In addition, individual perceptual abilities predicted the likelihood of the S2ST. Overall, the study demonstrated that repetition enables listeners to reinterpret specific prosodic features of spoken utterances in terms of musical structures. The findings underline a tight link between language and music, but they also reveal important differences in the communicative functions of prosodic structure in the two domains.
Musical rhythm positively impacts subsequent speech processing. However, the neural mechanisms underlying this phenomenon are so far unclear. We investigated whether carryover effects from a preceding musical cue to a speech stimulus result from a continuation of neural phase entrainment to periodicities that are present in both music and speech. Participants listened to and memorized French metrical sentences that contained (quasi-)periodic recurrences of accents and syllables. Speech stimuli were preceded by a rhythmically regular or irregular musical cue. Our results show that, compared with the irregular condition, the presence of a regular cue modulates the neural response during speech processing, as estimated by EEG power spectral density, intertrial coherence, and source analyses at critical frequencies. Importantly, intertrial coherences for regular cues were indicative of the participants' success in memorizing the subsequent speech stimuli. These findings underscore the highly adaptive nature of neural phase entrainment across fundamentally different auditory stimuli. They also support current models of neural phase entrainment as a tool of predictive timing and attentional selection across cognitive domains.
Why does human speech have rhythm? As we cannot travel back in time to witness how speech developed its rhythmic properties and why humans have the cognitive skills to process them, we rely on alternative methods to find out. One powerful tool is the comparative approach: studying the presence or absence of cognitive/behavioral traits in other species to determine which traits are shared between species and which are recent human innovations. Vocalizations of many species exhibit temporal structure, but little is known about how these rhythmic structures evolved, how they are perceived and produced, their biological and developmental bases, and their communicative functions. We review the literature on rhythm in speech and animal vocalizations as a first step toward understanding similarities and differences across species. We extend this review to quantitative techniques that are useful for computing rhythmic structure in acoustic sequences and hence facilitate cross-species research. We report links between vocal perception and motor coordination and the differentiation of rhythm based on hierarchical temporal structure. While still far from a complete cross-species perspective on speech rhythm, our review puts some pieces of the puzzle together.
In their everyday communication, parents not only speak but also sing with their infants. However, it remains unclear whether infants can discriminate speech from song or prefer one over the other. The present study examined the ability of 6- to 10-month-old infants (N = 66) from English-speaking households in London, Ontario, Canada to discriminate between auditory stimuli of native Russian-speaking and native English-speaking mothers speaking or singing to their infants. Infants listened significantly longer to the sung stimuli than to the spoken stimuli. This is the first study to demonstrate that, even in the absence of other multimodal cues, infant listeners are able to discriminate between sung and spoken stimuli and, furthermore, prefer to listen to sung stimuli over spoken stimuli.
Rhythmic properties of speech and language have been a matter of long-standing debate, with both traditional production and perception studies delivering controversial findings. The present study examines the possibility of investigating linguistic rhythm using movement-based paradigms. Informed by the theory and methods of sensorimotor synchronization, we developed two finger-tapping tasks (synchronization and reproduction) and tested them with English-speaking participants. The synchronization task required participants to tap along with the beat of a looped sentence, while the reproduction task asked them to tap out the perceived beat patterns after listening to a sentence loop. The results showed that both tasks engaged participants in period tracking of a beat-like structure in the linguistic stimuli, though synchronization did so to a greater extent. Patterns obtained in the reproduction task tended to converge toward participants' spontaneous tapping rates and showed a degree of regularization. Data collected in the synchronization task displayed a consistent anchoring of taps to vowel onsets. Overall, synchronization performance with language resembled many well-established findings on sensorimotor synchronization with metronomes and music. We conclude that our setting of the sensorimotor synchronization paradigm, finger tapping along with looped spoken phrases, is a valid experimental tool for studying rhythm perception in language.