2010
DOI: 10.1093/cercor/bhq198

Decoding Temporal Structure in Music and Speech Relies on Shared Brain Resources but Elicits Different Fine-Scale Spatial Patterns

Abstract: Music and speech are complex sound streams with hierarchical rules of temporal organization that become elaborated over time. Here, we use functional magnetic resonance imaging to measure brain activity patterns in 20 right-handed nonmusicians as they listened to natural and temporally reordered musical and speech stimuli matched for familiarity, emotion, and valence. Heart rate variability and mean respiration rates were simultaneously measured and were found not to differ between musical and speech stimuli. …

Cited by 140 publications (143 citation statements)
References 88 publications
“…One might argue that any sequence of musical rhythms displays 1/f structure and that our experiment lacks a control condition. To address this argument, we conducted additional analyses in which each stimulus served as its own control, disrupting the temporal structure of the rhythm at all frequencies (Materials and Methods) by shuffling the note onsets globally, across the piece, but keeping note durations intact (37,38). The spectrum of the resulting "shuffled" piece was flat and resembled white noise (β = 0.02) (Fig.…”
Section: Results (mentioning, confidence: 99%)
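
The control analysis quoted above lends itself to a compact sketch. The following is a minimal illustration, assuming note onsets are given in seconds and interpreting "shuffling the note onsets globally" as a whole-piece permutation of inter-onset intervals (durations, stored separately, are untouched); the impulse-train spectral estimate and all names here are illustrative, not the cited study's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def shuffle_onsets(onsets):
    """Globally permute inter-onset intervals, destroying temporal
    structure at all timescales; note durations (kept elsewhere) stay intact."""
    iois = np.diff(onsets)
    rng.shuffle(iois)
    return np.concatenate(([onsets[0]], onsets[0] + np.cumsum(iois)))

def spectral_exponent(onsets, fs=100.0, nfft=2**14):
    """Fit beta in P(f) ~ 1/f**beta from an onset impulse train."""
    train = np.zeros(int(onsets.max() * fs) + 1)
    train[(onsets * fs).astype(int)] = 1.0         # impulse at each note onset
    psd = np.abs(np.fft.rfft(train, nfft)) ** 2
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    keep = freqs > 0
    slope, _ = np.polyfit(np.log(freqs[keep]), np.log(psd[keep] + 1e-12), 1)
    return -slope                                   # beta near 0 = flat, white-noise-like

# Toy check: a globally shuffled piece should come out with beta close to zero,
# consistent with the flat spectrum (beta = 0.02) reported in the quote above.
onsets = np.cumsum(np.abs(rng.normal(0.25, 0.1, size=300)))
print(f"beta (shuffled) = {spectral_exponent(shuffle_onsets(onsets)):.2f}")
```
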
“…These findings highlight the existence of overlapping but distinct networks for music and speech within the same cortical areas. Similarly, Abrams et al. [36] used multivariate pattern analysis for natural and scrambled music and speech excerpts and also found distinct brain patterns of responses to the two categories of sounds in several regions within the temporal lobe and the inferior frontal cortex. Therefore, the pattern of neural activation was distinct between music and speech, although there was overlap in the areas activated by the two domains.…”
Section: (A) Multi-voxel pattern analysis (mentioning, confidence: 97%)
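
For readers unfamiliar with the technique, the decoding logic behind this statement reduces to a cross-validated linear classifier on trial-by-voxel patterns, where above-chance accuracy indicates that the fine-scale spatial pattern separates the two categories. The data and pipeline below are synthetic placeholders, not Abrams et al.'s analysis.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 500
X = rng.standard_normal((n_trials, n_voxels))  # trial-by-voxel activity patterns
y = np.repeat([0, 1], n_trials // 2)           # 0 = music trials, 1 = speech trials

# Cross-validated linear decoding: accuracy above chance (0.50) would mean the
# multivoxel pattern carries category information even where a region's mean
# activation does not differ between categories.
clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```
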
“…It is important to point out that, even if the stimuli are matched for emotional content, attention, memory, subjective interest, arousal and familiarity [36], any observed category differences in activation strengths and/or patterns could be owing to acoustical differences. In order to avoid this confound, at least to some extent, one can use sung melodies and spoken lyrics from songs.…”
Section: (A) Multi-voxel pattern analysis (mentioning, confidence: 99%)
“…Fedorenko & Kanwisher, 2009). In fact, most of the few recent studies that have included within-subjects comparisons of linguistic and musical manipulations have not found substantial overlap between neural regions implicated in the processing of language and music (but see Abrams et al., 2011). For example, Fedorenko and colleagues (Fedorenko, Behr, & Kanwisher, 2011; Fedorenko, McDermott, Norman-Haignere, & Kanwisher, 2012) used a contrast between intact sentences and lists of unconnected words (visually presented word-by-word) to define a series of language-sensitive brain regions of interest (ROIs) for each participant, and then investigated whether a musical manipulation significantly engaged those same regions.…”
Section: Music/language Interactions and the Shared Syntactic Integration Resource Hypothesis (mentioning, confidence: 99%)
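
The functional-localizer logic described in this statement can also be sketched in a few lines: define language-sensitive voxels per participant from an intact-sentences > word-lists contrast, then ask whether an independent musical manipulation modulates those voxels. All data, thresholds, and variable names below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 1000

# Localizer run: intact sentences evoke stronger responses than word lists in
# a subset of voxels (simulated here with a small additive offset).
sentences = rng.standard_normal((n_trials, n_voxels)) + 0.3
word_lists = rng.standard_normal((n_trials, n_voxels))
t_loc, p_loc = stats.ttest_ind(sentences, word_lists, axis=0)
roi = (t_loc > 0) & (p_loc < 0.01)             # language-sensitive ROI mask

# Independent run: does a musical manipulation engage the same ROI?
# (Pure noise here, so the test should come out null in this toy example.)
music_intact = rng.standard_normal((n_trials, n_voxels))
music_scrambled = rng.standard_normal((n_trials, n_voxels))
t_mus, p_mus = stats.ttest_ind(music_intact[:, roi].mean(axis=1),
                               music_scrambled[:, roi].mean(axis=1))
print(f"music effect in language ROI: t = {t_mus:.2f}, p = {p_mus:.3f}")
```
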
“…In addition, manipulations of harmonic structure in fMRI paradigms show effects in brain areas typically associated with linguistic syntax including (most relevant to the following discussion) left inferior frontal regions, i.e., Broca's area (Janata, Tillmann, & Bharucha, 2002; Koelsch et al., 2002; Koelsch, Fritz, Schulze, Alsop, & Schlaug, 2005a; Minati et al., 2008; Oechslin, Van De Ville, Lazeyras, Hauert, & James, 2013; Tillmann, Janata, & Bharucha, 2003; Tillmann et al., 2006; Seger et al., 2013). These inferior frontal regions have also been implicated in the processing of rhythmic structure (Vuust, Roepstorff, Wallentin, Mouridsen, & Østergaard, 2006; Vuust, Wallentin, Mouridsen, Østergaard, & Roepstorff, 2011), and both frontal and temporal regions show equal sensitivity to temporal structure in music and speech (Abrams et al., 2011). Finally, there is a growing body of behavioral evidence linking the processing of musical and linguistic structure (e.g., Hoch, Poulin-Charronnat, & Tillmann, 2011; Fedorenko, Patel, Casasanto, Winawer, & Gibson, 2009; Slevc, Rosenberg, & Patel, 2009), as discussed below.…”
(mentioning, confidence: 99%)