Human emotion and its electrophysiological correlates are still poorly understood. The present study examined whether the valence of perceived emotions would differentially influence EEG power spectra and heart rate (HR). Pleasant and unpleasant emotions were induced by consonant and dissonant music. Unpleasant (compared to pleasant) music evoked a significant decrease of HR, replicating the pattern of HR responses previously described for the processing of emotional pictures, sounds, and films. In the EEG, pleasant (contrasted to unpleasant) music was associated with an increase of frontal midline (Fm) theta power. This effect is taken to reflect emotional processing in close interaction with attentional functions. These findings show that Fm theta is modulated by emotion more strongly than previously believed.
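The frontal midline (Fm) theta effect described above is conventionally quantified as spectral power in the theta band (roughly 4–8 Hz) at a fronto-central electrode. As a purely illustrative sketch, not a reconstruction of the study's analysis pipeline, the band power of a single-channel recording can be estimated from a Welch power spectral density; all signal parameters below (sampling rate, durations, amplitudes) are made-up values for the example:

```python
import numpy as np
from scipy.signal import welch

def theta_band_power(eeg, fs, band=(4.0, 8.0)):
    """Estimate power in the theta band by integrating a Welch PSD."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    # Integrate the PSD over the band (trapezoidal rule).
    return np.trapz(psd[mask], freqs[mask])

# Simulated 10-s single-channel trace: a 6 Hz theta oscillation
# embedded in broadband noise.
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = 5.0 * np.sin(2 * np.pi * 6 * t) + rng.normal(0, 1.0, t.size)
power = theta_band_power(eeg, fs)
```

A condition contrast like the one in the study would then compare such band-power estimates between pleasant and unpleasant music trials.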
Semantics is a key feature of language, but whether or not music can activate brain mechanisms related to the processing of semantic meaning is not known. We compared processing of semantic meaning in language and music, investigating the semantic priming effect as indexed by behavioral measures and by the N400 component of the event-related brain potential (ERP) measured by electroencephalography (EEG). Human subjects were presented visually with target words after hearing either a spoken sentence or a musical excerpt. Target words that were semantically unrelated to prime sentences elicited a larger N400 than did target words that were preceded by semantically related sentences. In addition, target words that were preceded by semantically unrelated musical primes showed a similar N400 effect, as compared to target words preceded by related musical primes. The N400 priming effect did not differ between language and music with respect to time course, strength, or neural generators. Our results indicate that both music and language can prime the meaning of a word, and that music can, like language, determine physiological indices of semantic processing.
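The N400 priming effect described above is, in practice, a difference in trial-averaged ERP amplitude between conditions, typically measured in a latency window around 300–500 ms after word onset. The following is a minimal toy sketch of that logic, assuming simulated single-trial data in which the "unrelated" condition carries a larger negative-going deflection near 400 ms; the trial counts, noise level, and Gaussian-shaped component are illustrative assumptions, not the study's data:

```python
import numpy as np

def mean_amplitude(erp, times, window=(0.3, 0.5)):
    """Mean ERP amplitude within a latency window (seconds)."""
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()

fs = 200
times = np.arange(-0.1, 0.8, 1 / fs)
rng = np.random.default_rng(1)

# Toy N400: a negative Gaussian deflection peaking near 400 ms,
# larger in the "unrelated" than in the "related" condition.
n400 = -4.0 * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
related = np.stack([0.5 * n400 + rng.normal(0, 2, times.size) for _ in range(40)])
unrelated = np.stack([1.5 * n400 + rng.normal(0, 2, times.size) for _ in range(40)])

# Average across trials, then measure the difference wave
# (unrelated minus related) in the N400 window.
erp_rel = related.mean(axis=0)
erp_unrel = unrelated.mean(axis=0)
n400_effect = mean_amplitude(erp_unrel - erp_rel, times)
```

A more negative `n400_effect` corresponds to a larger N400 priming effect, which is the quantity the study compared across linguistic and musical primes.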
It has long been debated which aspects of music perception are universal and which are developed only after exposure to a specific musical culture. Here, we report a cross-cultural study with participants from a native African population (Mafa) and Western participants, with both groups being naive to the music of the other respective culture. Experiment 1 investigated the ability to recognize three basic emotions (happy, sad, scared/fearful) expressed in Western music. Results show that the Mafas recognized happy, sad, and scared/fearful Western music excerpts above chance, indicating that the expression of these basic emotions in Western music can be recognized universally. Experiment 2 examined how a spectral manipulation of original, naturalistic music affects the perceived pleasantness of music in Western as well as in Mafa listeners. The spectral manipulation modified, among other factors, the sensory dissonance of the music. The data show that both groups preferred original Western music and also original Mafa music over their spectrally manipulated versions. It is likely that the sensory dissonance produced by the spectral manipulation was at least partly responsible for this effect, suggesting that consonance and permanent sensory dissonance universally influence the perceived pleasantness of music.
The present study investigated simultaneous processing of language and music using visually presented sentences and auditorily presented chord sequences. Music-syntactically regular and irregular chord functions were presented synchronously with syntactically correct or incorrect words, or with words that had either a high or a low semantic cloze probability. Music-syntactically irregular chords elicited an early right anterior negativity (ERAN). Syntactically incorrect words elicited a left anterior negativity (LAN). The LAN was clearly reduced when words were presented simultaneously with music-syntactically irregular chord functions. Processing of high and low cloze-probability words as indexed by the N400 was not affected by the presentation of irregular chord functions. In a control experiment, the LAN was not affected by physically deviant tones that elicited a mismatch negativity (MMN). Results demonstrate that processing of musical syntax (as reflected in the ERAN) interacts with the processing of linguistic syntax (as reflected in the LAN), and that this interaction is not due to a general effect of deviance-related negativities that precede an LAN. Findings thus indicate a strong overlap of neural resources involved in the processing of syntax in language and music.
This study investigates the functional architecture of working memory (WM) for verbal and tonal information during rehearsal and articulatory suppression. Participants were presented with strings of four sung syllables with the task to remember either the pitches (tonal information) or the syllables (verbal information). Rehearsal of verbal as well as of tonal information activated a network comprising ventrolateral premotor cortex (encroaching on Broca's area), dorsal premotor cortex, the planum temporale, inferior parietal lobe, the anterior insula, subcortical structures (basal ganglia and thalamus), as well as the cerebellum. The topography of activations was virtually identical for the rehearsal of syllables and pitches, showing a remarkable overlap of the WM components for the rehearsal of verbal and tonal information. When the WM task was performed under articulatory suppression, activations in those areas decreased, while additional activations arose in anterior prefrontal areas. These prefrontal areas might contain additional storage components of verbal and tonal WM that are activated when auditory information cannot be rehearsed. As in the rehearsal conditions, the topography of activations under articulatory suppression was nearly identical for the verbal and the tonal task. Results indicate that both the rehearsal and the storage of verbal and tonal information rely on strongly overlapping neuronal networks. These networks appear to partly consist of sensorimotor-related circuits which provide resources for the representation and maintenance of information, and which are remarkably similar for the production of speech and song.
The present study investigated music-syntactic processing with chord sequences that ended on either regular or irregular chord functions. Sequences were composed such that perceived differences in the cognitive processing between syntactically regular and irregular chords could not be due to the sensory processing of acoustic factors like pitch repetition, pitch commonality (the major component of "sensory dissonance"), or roughness. Three experiments with independent groups of subjects were conducted: a behavioral experiment and two experiments using electroencephalography. Irregular chords elicited an early right anterior negativity (ERAN) in the event-related brain potentials (ERPs) under both task-relevant and task-irrelevant conditions. Behaviorally, participants detected around 75% of the irregular chords, indicating that these chords were only moderately salient. Nevertheless, the irregular chords reliably elicited clear ERP effects. Amateur musicians were slightly more sensitive to musical irregularities than nonmusicians, supporting previous studies demonstrating effects of musical training on music-syntactic processing. The findings indicate that the ERAN is an index of music-syntactic processing and that the ERAN can be elicited even when irregular chords are not detectable based on acoustical factors such as pitch repetition, sensory dissonance, or roughness.
Our vocal tone, the prosody, contributes a lot to the meaning of speech beyond the actual words. Indeed, the hesitant tone of a "yes" may be more telling than its affirmative lexical meaning. The human brain contains dorsal and ventral processing streams in the left hemisphere that underlie core linguistic abilities such as phonology, syntax, and semantics. Whether or not prosody, a reportedly right-hemispheric faculty, involves analogous processing streams is a matter of debate. Functional connectivity studies on prosody leave no doubt about the existence of such streams, but opinions diverge on whether information travels along dorsal or ventral pathways. Here we show, with a novel paradigm using audio morphing combined with multimodal neuroimaging and brain stimulation, that prosody perception takes dual routes along dorsal and ventral pathways in the right hemisphere. In experiment 1, categorization of speech stimuli that gradually varied in their prosodic pitch contour (between statement and question) involved (1) an auditory ventral pathway along the superior temporal lobe and (2) auditory-motor dorsal pathways connecting posterior temporal and inferior frontal/premotor areas. In experiment 2, inhibitory stimulation of right premotor cortex as a key node of the dorsal stream decreased participants' performance in prosody categorization, arguing for a motor involvement in prosody perception. These data draw a dual-stream picture of prosodic processing that parallels the established left-hemispheric multi-stream architecture of language, but with relative rightward asymmetry.