The social cognitive basis of music processing has long been noted, and recent research has shown that trait empathy is linked to musical preferences and listening style. Does empathy modulate neural responses to musical sounds? We designed two functional magnetic resonance imaging (fMRI) experiments to address this question. In Experiment 1, subjects listened to brief isolated musical timbres while being scanned. In Experiment 2, subjects listened to excerpts of music in four conditions crossing familiarity and preference (familiar liked, familiar disliked, unfamiliar liked, unfamiliar disliked). For both types of musical stimuli, emotional and cognitive forms of trait empathy modulated activity in sensorimotor and cognitive areas: in Experiment 1, empathy was primarily correlated with activity in the supplementary motor area (SMA), inferior frontal gyrus (IFG), and insula; in Experiment 2, empathy was mainly correlated with activity in prefrontal, temporo-parietal, and reward areas. Taken together, these findings reveal interactions between bottom-up and top-down mechanisms of empathy in response to musical sounds, in line with recent findings from other cognitive domains.
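The core analysis described above is a correlation between per-subject trait empathy scores and regional brain activity. As a toy illustration only, here is a minimal Pearson-correlation sketch with invented data (the empathy scores and SMA beta values are hypothetical, and real fMRI analyses use voxelwise general linear models with multiple-comparison correction, not a single ROI correlation):

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation coefficient between two equal-length samples
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: per-subject trait empathy scores and mean beta
# values extracted from a supplementary motor area (SMA) region of
# interest. All numbers are invented for illustration.
empathy = [42, 55, 61, 48, 70, 66, 52, 58]
sma_beta = [0.21, 0.35, 0.41, 0.28, 0.52, 0.44, 0.30, 0.37]

r = pearson_r(empathy, sma_beta)
print(round(r, 3))
```

A positive r in such a sketch would correspond to the pattern reported in the abstract: higher trait empathy associated with stronger activity in the region.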
What does the common descriptive lexicon for instrumental sound tell us about how we conceptualize musical timbre? Perceptual studies have revealed a number of verbal attributes that reliably map onto timbral qualities, but the conventions of timbre description in spoken and written discourse remain poorly understood. Books on orchestration provide a valuable source of natural language about instrumental timbre. This article uses methods from corpus linguistics to explore the semantic features of timbre through a quantitative analysis of 11 orchestration treatises and manuals. Findings reveal a relatively constrained vocabulary for timbre: about 50 adjectives account for half of all descriptions in the corpus. The timbre lexicon can be categorized according to affect, matter, cross-modal correspondence, mimesis, action, acoustics, and onomatopoeia, and further reduced to three latent conceptual dimensions, which are labeled and discussed. Descriptive patterns vary systematically by instrument and instrument family, suggesting certain regularities and consistencies in timbre description in the orchestral tradition. This study helps test the long-held assumption that conventions of timbre description are vague and unsystematic, and offers a cognitive linguistic account of the timbre-language connection.
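The headline finding above is a frequency-coverage statistic: a small set of adjectives accounts for a large share of all descriptions. A minimal sketch of that kind of corpus tally, using an invented toy list of descriptors (the real study analyzed 11 orchestration treatises; these words and counts are illustrative only):

```python
from collections import Counter

# Toy corpus of timbre descriptors (invented for illustration).
descriptions = (
    ["bright"] * 9 + ["dark"] * 7 + ["warm"] * 6 + ["mellow"] * 4 +
    ["shrill"] * 3 + ["nasal"] * 2 + ["hollow", "piercing", "velvety"]
)

counts = Counter(descriptions)

def top_k_coverage(counts, k):
    """Fraction of all description tokens covered by the k most
    frequent adjective types."""
    total = sum(counts.values())
    top = sum(c for _, c in counts.most_common(k))
    return top / total

# e.g., the share of the toy corpus its 3 most common adjectives cover
print(round(top_k_coverage(counts, 3), 2))
```

The study's "about 50 adjectives account for half of all descriptions" corresponds to `top_k_coverage(counts, 50)` reaching roughly 0.5 on the full treatise corpus.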
Many adjectives for musical timbre reflect cross-modal correspondence, particularly with vision and touch (e.g., "dark-bright," "smooth-rough"). Although multisensory integration between visual/tactile processing and hearing has been demonstrated for pitch and loudness, timbre is not well understood as a locus of cross-modal mappings. Are people consistent in these semantic associations? Do cross-modal terms reflect dimensional interactions in timbre processing? Here I designed two experiments to investigate crosstalk between timbre semantics and perception through the use of Stroop-type speeded classification. Experiment 1 found that incongruent pairings of instrument timbres and written names caused significant Stroop-type interference relative to congruent pairs, indicating bidirectional crosstalk between semantic and auditory modalities. A pretest for Experiment 2 asked participants to rate natural and synthesized timbres on semantic differential scales capturing luminance (brightness) and texture (roughness) associations, finding substantial consistency for a number of timbres. Acoustic correlates of these associations were also assessed, indicating an important role for high-frequency energy in the intensity of cross-modal ratings. Experiment 2 used timbre adjectives and sound stimuli validated in the pretest in two variants of a semantic-auditory Stroop-type task. Results of linear mixed-effects modeling of reaction time and accuracy showed slight interference in semantic processing when adjectives were paired with cross-modally incongruent instrument timbres (e.g., the word "smooth" with a "rough" timbre). Taken together, these results suggest that semantic crosstalk in timbre processing may be partially automatic and could reflect weak synesthetic congruency between interconnected sensory domains.
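The interference effect described above is measured as slower responses on incongruent trials than on congruent ones. A minimal sketch of that congruency contrast on synthetic reaction-time data (all trial values are invented; the study itself fit linear mixed-effects models with random effects for subjects and items, which this simple mean comparison only stands in for):

```python
# Toy Stroop-type congruency analysis with synthetic reaction times.
trials = [
    # (condition, reaction time in ms)
    ("congruent", 512), ("congruent", 498), ("congruent", 530),
    ("congruent", 505),
    ("incongruent", 541), ("incongruent", 560), ("incongruent", 538),
    ("incongruent", 555),
]

def mean_rt(trials, condition):
    # Mean reaction time over all trials in the given condition
    rts = [rt for cond, rt in trials if cond == condition]
    return sum(rts) / len(rts)

# A positive difference indicates Stroop-type interference:
# incongruent pairings slow responses relative to congruent ones.
interference = mean_rt(trials, "incongruent") - mean_rt(trials, "congruent")
print(round(interference, 1))
```

In the toy data, incongruent trials are slower on average, mirroring the direction of the effect reported in the abstract.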