Evidence that audition dominates vision in temporal processing has come from perceptual judgment tasks. This study shows that this auditory dominance extends to the largely subconscious processes involved in sensorimotor coordination. Participants tapped their finger in synchrony with auditory and visual sequences containing an event onset shift (EOS), expected to elicit an involuntary phase correction response (PCR), and also tried to detect the EOS. Sequences were presented in unimodal and bimodal conditions, including one in which auditory and visual EOSs of opposite sign coincided. Unimodal results showed greater variability of taps, smaller PCRs, and poorer EOS detection in vision than in audition. In bimodal conditions, variability of taps was similar to that for unimodal auditory sequences, and PCRs depended more on auditory than on visual information, even though attention was always focused on the visual sequences.
People often move in synchrony with auditory rhythms (e.g., music), whereas synchronization of movement with purely visual rhythms is rare. In two experiments, this apparent attraction of movement to auditory rhythms was investigated by requiring participants to tap their index finger in synchrony with an isochronous auditory (tone) or visual (flashing light) target sequence while a distractor sequence was presented in the other modality at one of various phase relationships. The obtained asynchronies and their variability showed that auditory distractors strongly attracted participants' taps, whereas visual distractors had much weaker effects, if any. This asymmetry held regardless of the spatial congruence or relative salience of the stimuli in the two modalities. When different irregular timing patterns were imposed on target and distractor sequences, participants' taps tended to track the timing pattern of auditory distractor sequences when they were approximately in phase with visual target sequences, but not the reverse. These results confirm that rhythmic movement is more strongly attracted to auditory than to visual rhythms. To the extent that this is an innate proclivity, it may have been an important factor in the evolution of music.
We investigate how the presence of performance microstructure (small variations in timing, intensity, and articulation) influences listeners' perception of musical excerpts, by measuring the way in which listeners synchronize with the excerpts. Musicians and nonmusicians tapped on a drum in synchrony with six musical excerpts, each presented in three versions: mechanical (synthesized from the score, without microstructure), accented (mechanical, with intensity accents), and expressive (performed by a concert pianist, with all types of microstructure). Participants' synchronizations with these excerpts were characterized in terms of three processes described in Mari Riess Jones's Dynamic Attending Theory: attunement (ease of synchronization), use of a referent level (spontaneous synchronization rate), and focal attending (range of synchronization levels). As predicted by beat induction models, synchronization was better with the temporally regular mechanical and accented versions than with the expressive versions. However, synchronization with expressive versions occurred at higher (slower) levels, within a narrower range of synchronization levels, and corresponded more frequently to the theoretically correct metrical hierarchy. We conclude that performance microstructure transmits a particular metrical interpretation to the listener and enables the perceptual organization of events over longer time spans. Compared with nonmusicians, musicians synchronized more accurately (heightened attunement), tapped more slowly (slower referent level), and used a wider range of hierarchical levels when instructed (enhanced focal attending), more often corresponding to the theoretically correct metrical hierarchy. We conclude that musicians perceptually organize events over longer time spans and have a more complete hierarchical representation of the music than do nonmusicians.
Systematic timing variations observed during music performance have usually been attributed to a musical expression hypothesis, related to relatively high-level processes, by which musicians emphasize certain events in order to transmit a particular musical interpretation to the listener. We propose, in addition, a perceptual hypothesis, related to lower-level processes, in which some observed variations would be related to functional constraints of the auditory system. (Some intervals would be heard as shorter and thus played longer, a phenomenon of perceptual compensation.) We present a psychological model of temporal organization proposing two types of process (regularity extraction and segmentation into groups), operating in parallel, that allow listeners to parse complex auditory sequences such as music. Each type of process operates at both a low processing level (beat extraction and segmentation into basic groups) and a higher processing level (hierarchical metric organization and hierarchical segmentation organization). The analysis of musical and mechanical performances of Schumann's Träumerei demonstrated performance variations in relation to both hierarchical segmentation and hierarchical metric organizations, and to rhythmic groups. Variations were not systematically observed in relation to melodic groups. Regression analyses quantified these effects and demonstrated that hierarchical segmentation and rhythmic groups accounted for approximately 60% of the variance for musical and mechanical performances, leaving room for the description of other, as yet unidentified, processes.
The percentage of variance explained by high-level processes (hierarchical segmentation) decreased from musical to mechanical performances, whereas the percentage of variance explained by lower-level processes (rhythmic groups) increased. We conclude that it is important to go beyond the traditional approach of describing performance variations in relation to musical structure and to adopt the approach of studying performance variations in relation to the psychological processes that allow the musician to perceive the musical structure. Finally, we adapt the psychological model of temporal organization to expressive timing: similar psychological processes operate at multiple hierarchical levels, namely those of segmentation and grouping, and these similar processes result in the same pattern of performance variations (an accelerando/ritardando profile).
One reason why music features temporal regularities is that they elicit expectancies about when an event will occur, focusing a listener's attention around certain points in time. Evidence comes from phoneme monitoring tasks (using reaction times; J. G. Martin, 1979) and pitch and time judgment tasks (using accuracy measures; M. R. Jones, H. Moynihan, N. MacKenzie, & J. Puente, 2002; E. W. Large & M. R. Jones, 1999). Reaction times were faster and accuracy was higher for rhythmically expected elements than for unexpected elements. By contrast, A. Penel and M. R. Jones (2004) recently reported an inversely related finding: faster reaction times for rhythmically unexpected tones, which they labeled a temporal capture effect. The present research examines expectancy versus capture phenomena by using a speeded detection task in which listeners must respond to a lower pitched target located within monotone and isochronous sequences. One interonset interval was shortened or lengthened independently of the target's position. Temporal irregularities tended to trigger false alarms, suggesting capture effects. Patterns of reaction times showed expectancy effects when the temporally perturbed event preceded the target, but these effects seemed to decrease with time in the sequence. When the target itself was temporally perturbed, some capture was observed, but only when the target came early in the sequence. We conclude that Martin's (1979) expectancy effects in phoneme monitoring were coarticulatory rather than rhythmical.