Cross-modal mappings of auditory stimuli reveal valuable insights into how humans make sense of sound and music. Whereas researchers have investigated cross-modal mappings of sound features varied in isolation within paradigms such as speeded classification and forced-choice matching tasks, investigations of representations of concurrently varied sound features (e.g., pitch, loudness and tempo) with overt gestures—accounting for the intrinsic link between movement and sound—are scant. To explore the role of bodily gestures in cross-modal mappings of auditory stimuli we asked 64 musically trained and untrained participants to represent pure tones—continually sounding and concurrently varied in pitch, loudness and tempo—with gestures while the sound stimuli were played. We hypothesized musical training to lead to more consistent mappings between pitch and height, loudness and distance/height, and tempo and speed of hand movement and muscular energy. Our results corroborate previously reported pitch vs. height (higher pitch leading to higher elevation in space) and tempo vs. speed (increasing tempo leading to increasing speed of hand movement) associations, but also reveal novel findings pertaining to musical training which influenced consistency of pitch mappings, annulling a commonly observed bias for convex (i.e., rising–falling) pitch contours. Moreover, we reveal effects of interactions between musical parameters on cross-modal mappings (e.g., pitch and loudness on speed of hand movement), highlighting the importance of studying auditory stimuli concurrently varied in different musical parameters. Results are discussed in light of cross-modal cognition, with particular emphasis on studies within (embodied) music cognition. Implications for theoretical refinements and potential clinical applications are provided.
Previous research comparing musically trained and untrained individuals has yielded valuable insights into music cognition and behaviour. Here, we explore two aspects of musical engagement previously studied separately, auditory-visual correspondences and sensorimotor skills, in a novel real-time drawing paradigm. To that end, musically trained and untrained participants were presented with 18 short sequences of pure tones varying in pitch, loudness and tempo, as well as two short musical excerpts. Using an electronic graphics tablet, participants were asked to represent the sound stimuli visually by drawing along with them while they were played. Results revealed that the majority of participants represented pitch with height (higher on the tablet referring to higher pitches), and loudness with the thickness of the line (thicker line for louder sounds). However, musically untrained participants showed a greater diversity of representation strategies and tended to neglect pitch information if unchanged over time. Investigating the performance accuracy in a subgroup of participants revealed that, while pitch-height correspondences were generally represented more accurately than loudness-thickness correspondences, musically trained participants' representations of pitch and loudness were more accurate. Results are discussed in terms of cross-modal correspondences, the perception of time, and sensorimotor skills.
The standard model of musical transmission, in which composers embody their intentions in works which they encode in scores which performers (and scholars in their imagination) decode as accurately as possible for audiences, is unpicked in the light of the evidence of recorded performance. It is replaced with a model that recognizes the large extent to which performances trigger the generation of musical meaning in the minds of listeners, and the extent to which changes in performance style cause those meanings to change. Some implications for thought about music are considered.