We address how listeners perceive temporal regularity in music performances, which are rich in temporal irregularities. A computational model is described in which a small system of internal self‐sustained oscillations, operating at different periods with specific phase and period relations, entrains to the rhythms of music performances. Based on temporal expectancies embodied by the oscillations, the model predicts the categorization of temporally changing event intervals into discrete metrical categories, as well as the perceptual salience of deviations from these categories. The model's predictions are tested in two experiments using piano performances of the same music with different phrase structure interpretations (Experiment 1) or different melodic interpretations (Experiment 2). The model successfully tracked temporal regularity amidst the temporal fluctuations found in the performances. The model's sensitivity to performed deviations from its temporal expectations aligned well with the performers' structural (phrasal and melodic) intentions. Furthermore, the model tracked normal performances (with increased temporal variability) better than performances in which temporal fluctuations associated with individual voices were removed (with decreased variability). The small, systematic temporal irregularities characteristic of human performances (chord asynchronies) improved tracking, but randomly generated temporal irregularities did not. These findings suggest that perception of temporal regularity in complex musical sequences is based on temporal expectancies that adapt in response to temporally fluctuating input.
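As a concrete illustration of the entrainment mechanism this abstract describes, the sketch below shows a single adaptive oscillator tracking a beat in a temporally fluctuating onset sequence by correcting both its phase and its period. This is a minimal sketch under assumed parameter values, not the paper's model (which couples several oscillations at metrically related periods with specific phase relations); the function name `entrain` and the adaptation rates `eta_phase` and `eta_period` are hypothetical.

```python
import numpy as np

def entrain(onsets, period0=0.5, eta_phase=0.8, eta_period=0.3):
    """Track a beat in a fluctuating onset sequence with one adaptive
    oscillator that corrects both its phase and its period."""
    period = period0
    expected = onsets[0]              # first expected beat time (s)
    expectations = []
    for onset in onsets:
        # advance the expectancy to the cycle nearest the observed onset
        while expected + period / 2 < onset:
            expected += period
        expectations.append(expected)
        # relative phase of the onset within the current cycle
        phi = (onset - expected) / period
        # adapt toward the input: shift the phase, stretch/shrink the period
        expected += eta_phase * phi * period
        period *= 1 + eta_period * phi
    return np.array(expectations), period

# Example: beat onsets with expressive timing fluctuations (seconds)
onsets = np.cumsum([0.0, 0.50, 0.48, 0.55, 0.47, 0.52])
expected, _ = entrain(onsets)
deviations = onsets - expected  # larger values mark salient departures
```

Onsets that fall near an expected time produce small corrections, while larger deviations (such as phrase-final lengthening) register as salient departures from the model's expectancies.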
Important individual differences are observed in people’s abilities to synchronize their body movements with regular auditory rhythms. We investigate whether synchronizing with a regular auditory cue is affected by each person’s spontaneous production rate (SPR) and by hearing a partner’s synchronization in a social context. Musically trained and untrained participants synchronized their tapping with an auditory cue presented at different rates (their own SPR or their partner’s SPR) and in a Solo or Joint (turn-taking) condition. Linear and nonlinear oscillator models were fit to participants’ mean asynchronies (signed timing differences between the cued onsets and taps). In Joint turn-taking, participants synchronized more accurately when the auditory cue was presented at their own SPR than at their partner’s SPR; in contrast, synchronization did not differ across rates in the Solo condition. Asynchronies in the Joint task became larger as the difference between partners’ spontaneous rates increased; the increased asynchronies were driven by the faster partner, who did not slow down to match their slower partner’s rate. Nonlinear delay-coupled models (with time delay, coupling strength, and intrinsic frequency) outperformed linear models (intrinsic frequency only) in accounting for tappers’ synchronization adjustments. The nonlinear model’s estimated coupling strength was greater for musically trained participants than for untrained participants. Overall, these findings suggest that both intrinsic differences in partners’ spontaneous rates and the social turn-taking context contribute to the range of synchrony in the general population. Delay-coupled models are capable of capturing the wide range of individual differences in auditory-motor synchronization.
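The model class this abstract compares can be made concrete with a small simulation. Below is a hedged sketch of a Kuramoto-style delay-coupled phase oscillator, assuming Euler integration at a fixed step; the authors' exact parameterization may differ, and the names `delay_coupled_phase`, `omega`, `coupling`, and `delay_steps` are illustrative.

```python
import numpy as np

def delay_coupled_phase(stim_phase, omega, coupling, delay_steps, dt=0.01):
    """Simulate a tapper's phase as a delay-coupled phase oscillator
    driven by a metronome (cue) phase series."""
    theta = np.zeros(len(stim_phase))
    for t in range(1, len(stim_phase)):
        lag = max(t - 1 - delay_steps, 0)
        # phase velocity = intrinsic frequency + delayed coupling to the cue
        dtheta = omega + coupling * np.sin(stim_phase[lag] - theta[t - 1])
        theta[t] = theta[t - 1] + dtheta * dt
    return theta

# Example: a 2 Hz cue; tapper with a slightly faster intrinsic rate
t = np.arange(0, 10, 0.01)
cue = 2 * np.pi * 2.0 * t                        # cue phase (rad)
taps = delay_coupled_phase(cue, omega=2 * np.pi * 2.2,
                           coupling=5.0, delay_steps=10)
asynchrony = np.sin(taps - cue)                  # signed phase-error proxy
```

Setting `coupling` to zero reduces this to the linear case driven by intrinsic frequency alone; the abstract reports that adding the delayed coupling term accounts better for tappers' synchronization adjustments.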
Three transfer-of-learning experiments were conducted to investigate performers' ability to generalize knowledge of specific temporal structure and motor movements from one melody to another. Skilled pianists performed one melody during 10 training trials and another melody during 4 test trials, under speeded performance conditions. In Experiment 1, the meter and/or motor movements (hand and finger assignments) were altered from training to test melodies; in Experiment 2, the rhythm and/or motor movements were altered; in Experiment 3, the meter and/or rhythm were altered. Differences in total melody duration from training to test were smaller when meter, rhythm, or motor variables were retained across sequences. Furthermore, the same variables of meter, rhythm, and motor movements influenced the tempo of each performance. These findings support distinct temporal and motor representations underlying performance of simple melodies.