2021
DOI: 10.1101/2021.01.26.428291
Preprint
High-order areas and auditory cortex both represent the high-level event structure of music

Abstract: Recent fMRI studies of event segmentation have found that default mode regions represent high-level event structure during movie watching. In these regions, neural patterns are relatively stable during events and shift at event boundaries. Music, like narratives, contains hierarchical event structure (e.g., sections are composed of phrases). Here, we tested the hypothesis that brain activity patterns in default mode regions reflect the high-level event structure of music. We used fMRI to record brain activity …


Cited by 3 publications (5 citation statements)
References 79 publications (101 reference statements)
“…Our results are also consistent with reports on the spatiotemporal dynamics of brain responses to naturalistic stimuli. A hierarchically nested spatial activation pattern has been revealed using movie, spoken story, and music stimuli (22, 23, 47). Chien and colleagues (7) reported a gradual alignment of context-specific spatial activation patterns, which was rapidly flushed at event boundaries, similar to the temporal integration function we adopted here.…”
Section: Discussion
confidence: 99%
“…This study relied on eight openly available spoken story datasets. Seven datasets were used from the "Narratives" collection (OpenNeuro: https://openneuro.org/datasets/ds002245) (Nastase et al., 2021), including "Sherlock" and "Merlin" (18 participants, 11 females) (Zadbood et al., 2017), "The 21st Year" (25 participants, 14 females) (Chang et al., 2021), "Pie Man (PNI)", "I Knew You Were Black", "The Man Who Forgot Ray Bradbury", and "Running from the Bronx (PNI)" (48 participants, 34 females). One dataset was used from Princeton Dataspace: "Pie Man" (36 participants, 25 females) (https://dataspace.princeton.edu/jspui/handle/88435/dsp015d86p269k) (Simony et al., 2016).…”
Section: fMRI Datasets
confidence: 99%
“…In line with these findings, some prior work has found that unfamiliar music elicits fewer autobiographical memories compared to environmental sounds or word cues, suggesting that unfamiliar music may not be a strong retrieval cue for many memories (Jakubowski & Eerola, 2021). Unfamiliar clips may shift focus away from retrieval and towards an "encoding mode" in which participants attend to sonic features, lyrics, or musical event structures (Janata, 2005; Janata et al., 2002; Williams et al., 2022). Further, participants in the current study may have focused their attention on trying to identify the unfamiliar music clips; this could have suppressed memory retrieval.…”
Section: Non-music Clips Evoked Spontaneous Memories More Often Than ...
confidence: 56%
“…Music-selective neural populations in auditory cortex might thus be responsible for extracting temporally local features that are assembled elsewhere into more abstract representations of music, including key, meter, groove, event structure, etc. (Janata et al., 2002; Brett and Grahn, 2007; Lee et al., 2011; Fedorenko et al., 2012; Matthews et al., 2020; Williams et al., 2021). It is also plausible that responses might be further modulated by top-down inputs from brain regions like frontal cortex, perhaps reflecting the important role of expectation when it comes to music perception (Koelsch et al., 2018).…”
Section: What Is Music Selectivity Then?
confidence: 99%