Investigations of the psychological representation of musical meter provided evidence for an internalized hierarchy from 3 sources: frequency distributions in musical compositions, goodness-of-fit judgments of temporal patterns in metrical contexts, and memory confusions in discrimination judgments. The frequency with which musical events occurred in different temporal locations differentiated one meter from another and coincided with music-theoretic predictions of accent placement. Goodness-of-fit judgments for events presented in metrical contexts indicated a multileveled hierarchy of relative accent strength, with finer differentiation among hierarchical levels by musically experienced than by inexperienced listeners. Memory confusions of temporal patterns in a discrimination task were characterized by the same hierarchy of inferred accent strength. These findings suggest mental representations for structural regularities underlying musical meter that influence perceiving, remembering, and composing music.

Perception of music, speech, and other complex human behaviors requires the processing of structured information over time. Psychological theories of serially ordered behaviors often reveal hierarchical principles of mental processing and organization that express relations among nonadjacent as well as adjacent events. Mental representations for these behaviors suggest that the complex information is recoded or organized in a form more efficient for abstract operations. A primary assumption is that the observed behavior involves complex mental processes that transform early sensory information, compare it to detailed memories, and apply decision rules to the transformed internal codes.

This theoretical framework suggests that music perception involves the recoding and organizing of musical material through reference to a more abstract system of knowledge about musical structure.
This abstract knowledge often represents the underlying regularities found in one's own musical culture, such as a particular tonal system or common metrical properties. These mental structures may facilitate comprehension of global aspects of musical structure and lead to expectations about future events. Thus, tonality can provide a (pitch-based) framework for melodic expectations, and meter may provide a (time-based) framework from which temporal expectations are formed. The research described later focuses on the nature of mental representation of one important aspect of musical structure: meter. We present evidence indicating that abstract knowledge of meter affects comprehension, memory, and composition of Western tonal music.
Music performance provides a rich domain for study of both cognitive and motor skills. Empirical research in music performance is summarized, with particular emphasis on factors that contribute to the formation of conceptual interpretations, retrieval from memory of musical structures, and transformation into appropriate motor actions. For example, structural and emotional factors that contribute to performers' conceptual interpretations are considered. Research on the planning of musical sequences for production is reviewed, including hierarchical and associative retrieval influences, style-specific syntactic influences, and constraints on the range of planning. The fine motor control evidenced in music performance is discussed in terms of internal timekeeper models, motor programs, and kinematic models. The perceptual consequences of music performance are highlighted, including the successful communication of interpretations, resolution of structural ambiguities, and concordance with listeners' expectations. Parallels with other domains support the conclusion that music performance is not unique in its underlying cognitive mechanisms.
WE INVESTIGATED INFLUENCES OF AUDITORY FEEDBACK, musical role, and note ratio on synchronization in ensemble performance. Pianists performed duets on a piano keyboard; the pianist playing the upper part was designated the leader and the other pianist was the follower. They received full auditory feedback, one-way feedback (leaders heard themselves while followers heard both parts), or self-feedback only. The upper part contained more, fewer, or equal numbers of notes relative to the lower part. Temporal asynchronies increased as auditory feedback decreased: The pianist playing more notes preceded the other pianist, and this tendency increased with reduced feedback. Interonset timing suggested bidirectional adjustments during full feedback despite the leader/follower instruction, and unidirectional adjustment only during reduced feedback. Motion analyses indicated that leaders raised fingers higher and pianists' head movements became more synchronized as auditory feedback was reduced. These findings suggest that visual cues became more important when auditory information was absent.
Expressive timing methods are described that map pianists' musical thoughts to sounded performance. In Experiment 1, 6 pianists performed the same musical excerpt on a computer-monitored keyboard. Each performance contained 3 expressive timing patterns: chord asynchronies, rubato patterns, and overlaps (staccato and legato). Each pattern was strongest in experienced pianists' performances and decreased when pianists attempted to play unmusically. In Experiment 2, pianists performed another musical excerpt and notated their musical intentions on an unedited score. The notated interpretations correlated with the presence of the 3 methods: The notated melody preceded other events in chords (chord asynchrony); events notated as phrase boundaries showed the greatest tempo changes (rubato); and the notated melody showed the most consistent amount of overlap between adjacent events (staccato and legato). These results suggest that the mapping of musical thought to musical action is rule-governed, and that the same rules produce different interpretations.
People produce long sequences such as speech and music with incremental planning: mental preparation of a subset of sequence events. The authors model in music performance the sequence events that can be retrieved and prepared during production. Events are encoded in terms of their serial order and timing relative to other events in a planning increment, a contextually determined distribution of event activations. Planning is facilitated by events' metrical similarity and serial/temporal proximity and by developmental changes in short-term memory. The model's predictions of larger planning increments as production rate decreases and as producers' age-experience increases are confirmed in serial-ordering errors produced by adults and children. Incremental planning is considered as a general retrieval constraint in serially ordered behaviors.
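The activation gradient described above can be illustrated with a toy sketch (this is not the authors' implementation; the exponential decay, the metrical-similarity boost, and all parameter values are illustrative assumptions):

```python
import math

def planning_activations(events, current, decay=0.5, sim_boost=1.5):
    """Toy activation gradient over upcoming events in a planning increment.

    events: list of (label, metrical_strength) tuples in serial order.
    current: index of the event now being produced.
    Activation falls off with serial distance from the current event
    (serial/temporal proximity) and is boosted for events that share the
    current event's metrical strength (metrical similarity).
    Parameter values are assumed, not taken from the model.
    """
    _, cur_strength = events[current]
    acts = {}
    for i in range(current + 1, len(events)):
        label, strength = events[i]
        a = math.exp(-decay * (i - current))  # proximity: nearer = more active
        if strength == cur_strength:
            a *= sim_boost                    # metrical similarity boost
        acts[label] = a
    return acts
```

Under this sketch, a slower production rate or a larger short-term memory span would correspond to a smaller decay, spreading activation over more upcoming events, i.e., a larger planning increment.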
The classical, disembodied approach to music cognition conceptualizes action and perception as separate, peripheral processes. In contrast, embodied accounts of music cognition emphasize the central role of the close coupling of action and perception. It is well established that perception spurs action tendencies. We present a theoretical framework that captures the ways in which the human motor system and its actions can reciprocally influence the perception of music. The cornerstone of this framework is the common coding theory, postulating a representational overlap in the brain between the planning, the execution, and the perception of movement. The integration of action and perception in so-called internal models is explained as a result of associative learning processes. Characteristic of internal models is that they allow intended or perceived sensory states to be translated into corresponding motor commands (inverse modeling) and, vice versa, allow the sensory outcomes of planned actions to be predicted (forward modeling). Embodied accounts typically refer to inverse modeling to explain action effects on music perception (Leman, 2007). We extend this account by pinpointing forward modeling as an alternative mechanism by which action can modulate perception. We provide an extensive overview of recent empirical evidence in support of this idea. Additionally, we demonstrate that motor dysfunctions can cause perceptual disabilities, supporting the main idea of the paper that the human motor system plays a functional role in auditory perception. The finding that music perception is shaped by the human motor system and its actions suggests that the musical mind is highly embodied. However, we advocate for a more radical approach to embodied (music) cognition in the sense that it needs to be considered as a dynamical process, in which aspects of action, perception, introspection, and social interaction are of crucial importance.
We address how listeners perceive temporal regularity in music performances, which are rich in temporal irregularities. A computational model is described in which a small system of internal self-sustained oscillations, operating at different periods with specific phase and period relations, entrains to the rhythms of music performances. Based on temporal expectancies embodied by the oscillations, the model predicts the categorization of temporally changing event intervals into discrete metrical categories, as well as the perceptual salience of deviations from these categories. The model's predictions are tested in two experiments using piano performances of the same music with different phrase structure interpretations (Experiment 1) or different melodic interpretations (Experiment 2). The model successfully tracked temporal regularity amidst the temporal fluctuations found in the performances. The model's sensitivity to performed deviations from its temporal expectations compared favorably with the performers' structural (phrasal and melodic) intentions. Furthermore, the model tracked normal performances (with increased temporal variability) better than performances in which temporal fluctuations associated with individual voices were removed (with decreased variability). The small, systematic temporal irregularities characteristic of human performances (chord asynchronies) improved tracking, but randomly generated temporal irregularities did not. These findings suggest that perception of temporal regularity in complex musical sequences is based on temporal expectancies that adapt in response to temporally fluctuating input.
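The adaptive temporal expectancies described above can be sketched as a minimal beat tracker (a drastic simplification of the oscillator model: a single period, linear error correction, and the coupling constants `alpha` and `beta` are all assumptions for illustration only):

```python
def track_beats(onsets, period, alpha=0.8, beta=0.2):
    """Predict each next onset time from a self-adapting temporal expectancy.

    onsets: event onset times (e.g., in ms) from a performance.
    period: initial expected inter-onset interval.
    alpha: phase-coupling strength; beta: period-adaptation strength
    (both values are illustrative assumptions, not the model's parameters).
    Returns the model's predicted time for each onset after the first.
    """
    predictions = []
    expected = onsets[0] + period
    for onset in onsets[1:]:
        predictions.append(expected)
        error = onset - expected                        # deviation from expectancy
        period += beta * error                          # adapt period to tempo drift
        expected = (expected + alpha * error) + period  # correct phase, project next beat
    return predictions
```

On a perfectly regular sequence the predictions match the onsets exactly; on an expressively timed performance, the gap between prediction and onset at each event corresponds to the perceptual salience of that deviation from the metrical category.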