2015
DOI: 10.1093/scan/nsv060

Temporal dynamics of musical emotions examined through intersubject synchrony of brain activity

Abstract: To study emotional reactions to music, it is important to consider the temporal dynamics of both affective responses and underlying brain activity. Here, we investigated emotions induced by music using functional magnetic resonance imaging (fMRI) with a data-driven approach based on intersubject correlations (ISC). This method allowed us to identify moments in the music that produced similar brain activity (i.e. synchrony) among listeners under relatively natural listening conditions. Continuous ratings of sub…
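As a rough illustration of the ISC logic described in the abstract (not the authors' actual analysis pipeline), the sketch below computes leave-one-out intersubject correlation for one voxel's or region's time courses across subjects; the function name, array shapes, and toy data are assumptions for illustration only.

```python
import numpy as np

def leave_one_out_isc(data):
    """Leave-one-out intersubject correlation (ISC).

    data : array of shape (n_subjects, n_timepoints)
        One voxel's (or region's) BOLD time course per subject.
    Returns one ISC value per subject: the Pearson correlation between
    that subject's time course and the average of all other subjects.
    """
    n_subjects = data.shape[0]
    isc = np.empty(n_subjects)
    for s in range(n_subjects):
        others = np.delete(data, s, axis=0).mean(axis=0)
        isc[s] = np.corrcoef(data[s], others)[0, 1]
    return isc

# Toy example: 10 subjects, 200 TRs, with a shared stimulus-driven component.
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)
data = 0.5 * shared + rng.standard_normal((10, 200))
print(leave_one_out_isc(data).mean())  # clearly above 0 for synchronized data
```

Averaging the remaining subjects boosts the shared, stimulus-driven signal relative to each subject's idiosyncratic noise, which is why moments of high ISC can be read as moments where the music drove listeners' brains in a similar way.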

Cited by 70 publications (72 citation statements)
References 76 publications
“…Whispered vocalizations were temporally more regular and had only marginal spectral information. Thus, the functional connectivity with brain regions decoding this information seems to be of high relevance (Trost et al., 2015; Trost et al., 2014). These neural network data together suggest that a large-scale multidirectional bottom-up and top-down brain network compensates for the impoverished sound quality of whispered voices to support their accurate recognition and value attribution.…”
Section: Discussion (mentioning)
confidence: 89%
“…The differences in synchronization can be further explored across groups or conditions. Such a methodology with natural stimulus presentations has flourished and been applied to a wide variety of fMRI experiments, such as visuoauditory movie stimuli (Hasson et al., 2004, 2008; Golland et al., 2007; Jääskeläinen et al., 2008; Kauppi et al., 2010; Nummenmaa et al., 2012), the synchronization of emotion (Nummenmaa et al., 2012), the impact of mass media coverage on various perceptions of the H1N1 pandemic (Schmälzle et al., 2013), real-world thought processing (e.g., educational television viewing of Sesame Street) between children and adults (Cantlon and Li, 2013), videos of dance performance (Herbec et al., 2015), narratives (Wilson et al., 2008), music (Abrams et al., 2013; Alluri et al., 2013; Thiede, 2014; Trost et al., 2015; Lillywhite et al., 2015), aesthetic performance (Jola et al., 2013), neural responses shared across languages (Honey et al., 2012), and political speeches (Schmälzle et al., 2015). In addition, applications have been seen in other neuroimaging modalities, such as MEG (Thiede, 2014), EEG (Bridwell et al., 2015), and ECoG (Potes et al., 2014).…”
Section: Introduction (mentioning)
confidence: 99%
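The group/condition comparison mentioned in the excerpt above can be made concrete with a permutation test on mean pairwise ISC. This is a generic sketch under assumed data shapes, not the procedure of any specific cited study; all names here are hypothetical.

```python
import numpy as np

def mean_pairwise_isc(data):
    """Mean pairwise Pearson correlation across subjects (n_subjects, n_timepoints)."""
    r = np.corrcoef(data)               # subject-by-subject correlation matrix
    iu = np.triu_indices_from(r, k=1)   # upper triangle, excluding the diagonal
    return r[iu].mean()

def isc_group_difference(group_a, group_b, n_perm=1000, seed=0):
    """Permutation test on the ISC difference between two listener groups."""
    rng = np.random.default_rng(seed)
    observed = mean_pairwise_isc(group_a) - mean_pairwise_isc(group_b)
    pooled = np.vstack([group_a, group_b])
    n_a = group_a.shape[0]
    null = np.empty(n_perm)
    for i in range(n_perm):
        # Shuffle group labels and recompute the ISC difference under the null.
        idx = rng.permutation(pooled.shape[0])
        null[i] = mean_pairwise_isc(pooled[idx[:n_a]]) - mean_pairwise_isc(pooled[idx[n_a:]])
    p_value = (np.abs(null) >= np.abs(observed)).mean()
    return observed, p_value
```

Shuffling group labels preserves each subject's time course while breaking the group structure, so the null distribution reflects ISC differences expected by chance alone.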
“…Using this technique, we identified several networks, particularly in the bilateral perisylvian areas, which are specifically activated when listening to a musical piece and scales. Thus, the identified networks comprise brain areas that are known to be strongly involved in complex auditory processing and music listening in particular [1,35].…”
Section: Independent Functional Network and Music Listening (mentioning)
confidence: 99%
“…In this context, many published studies have used functional MRI (fMRI) to delineate the neural underpinnings of music perception [1–10]. In general, these studies have shown that the limbic system as well as cortical areas outside the auditory areas (e.g.…”
Section: Introduction (mentioning)
confidence: 99%