2019 · Preprint
DOI: 10.1101/755553

EEG-based classification of natural sounds reveals specialized responses to speech and music

Abstract: Humans can easily distinguish many sounds in the environment, but speech and music are uniquely important. Previous studies, mostly using fMRI, have identified separate regions of the brain that respond selectively to speech and music. Yet there is little evidence that brain responses are larger and more temporally precise for human-specific sounds like speech and music, as has been found for responses to species-specific sounds in other animals. We recorded EEG as healthy, adult subjects listened to various …
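The paradigm the abstract describes, decoding which sound a listener heard from EEG, can be illustrated with a minimal cross-validated classifier. The sketch below is schematic: the trial counts, channel and time dimensions, category labels, and the logistic-regression pipeline are illustrative assumptions, not the authors' actual analysis.

```python
# Hypothetical sketch of an EEG sound-classification analysis.
# All shapes, labels, and parameters are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_trials, n_channels, n_times = 200, 32, 64       # assumed dimensions
X = rng.standard_normal((n_trials, n_channels, n_times))  # stand-in EEG epochs
y = rng.integers(0, 4, n_trials)   # e.g. speech / music / env. sound / noise

# Flatten channels x time into one feature vector per trial.
X_flat = X.reshape(n_trials, -1)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X_flat, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f} (chance = 0.25)")
```

On real recordings, above-chance accuracy for a sound category would indicate that the EEG carries category-specific information, which is the logic behind classification-based analyses of this kind.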

Cited by 5 publications (7 citation statements, all classified as "mentioning") · References 56 publications · Citing publication years: 2020, 2020, 2022, 2022
“…A deeper understanding of these individual differences would help researchers predict whether a given listener will experience a particular spoken phrase as sung when repeated. This in turn would provide a powerful tool for exploring the cognitive and brain mechanisms underlying selective neural responses to speech and music (Ogg, Moraczewski, Kuchinsky, & Slevc, 2019; Zuk, Teoh, & Lalor, 2020; Boebinger, Norman-Haignere, McDermott, & Kanwisher, 2021), by using the same physical stimuli to elicit categorically different perceptual experiences.…”
Section: Discussion (mentioning)
confidence: 99%
“…Music often has acoustic patterns that distinguish it from speech or other sounds (Ding et al., 2017), and it also elicits distinct neural responses compared to other sounds (Norman-Haignere et al., 2015; Zuk et al., 2020). Nevertheless, studies over the past decade have demonstrated that listeners sometimes report perceiving non-musical sounds (i.e., sounds not originally intended to be heard as music) as sounding like music.…”
Section: Sound-to-Music Illusions (mentioning)
confidence: 99%
“…Speech and music are known to differ in their temporal modulation spectra, peaking at 5 Hz and 2 Hz, respectively (Ding et al., 2017). However, standard auditory models based on spectrotemporal modulation do not capture perception of speech and music (McDermott and Simoncelli, 2011) or neural responses selective for speech and music (Overath et al., 2015; Kell et al., 2018; Norman-Haignere and McDermott, 2018; Zuk et al., 2020). In particular, the music-selective component responds substantially less to sounds that have been synthesized to have the same spectrotemporal modulation statistics as natural music, suggesting that the music component does not simply represent the audio or modulation frequencies that are prevalent in music (Norman-Haignere and McDermott, 2018).…”
Section: What Does Cortical Music Selectivity Represent? (mentioning)
confidence: 99%
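The temporal modulation spectrum this statement refers to can be approximated by taking the Fourier transform of a sound's amplitude envelope. The sketch below uses a broadband Hilbert envelope, a common simplification rather than the exact method of Ding et al. (2017); the toy amplitude-modulated tone and its 4 Hz modulation rate are assumptions for demonstration.

```python
# Minimal sketch of a temporal modulation spectrum.
# Broadband-envelope approach is a simplification, not Ding et al.'s exact method.
import numpy as np
from scipy.signal import hilbert

def modulation_spectrum(audio, fs):
    """Return (modulation frequencies, power) of the sound's envelope."""
    envelope = np.abs(hilbert(audio))   # broadband amplitude envelope
    envelope -= envelope.mean()         # remove DC before the FFT
    power = np.abs(np.fft.rfft(envelope)) ** 2
    freqs = np.fft.rfftfreq(len(envelope), d=1 / fs)
    return freqs, power

# Toy example: a 4 Hz amplitude-modulated tone should peak near 4 Hz.
fs = 16000
t = np.arange(0, 2.0, 1 / fs)
audio = (1 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 440 * t)
freqs, power = modulation_spectrum(audio, fs)
print(f"Peak modulation frequency: {freqs[np.argmax(power[1:]) + 1]:.1f} Hz")
```

Applied to corpora of speech and music, a measure like this would show the roughly 5 Hz and 2 Hz peaks the quoted passage describes.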
“…By relating naturally occurring sounds in everyday life to ongoing EEG activity, we can investigate questions about auditory perception and attention in real-world scenarios. There is an increasing number of studies using naturalistic stimuli to study auditory perception (e.g., De Lucia, Tzovara, Bernasconi, Spierer, & Murray, 2012; Perrin et al., 2005; Roye, Jacobsen, & Schröger, 2013; Scheer, Bülthoff, & Chuang, 2018; Zuk, Teoh, & Lalor, 2020). However, to gain experimental control in this lab-based research, everyday-life sounds and contexts are approximated using artificial stimuli and conditions.…”
Section: Introduction (mentioning)
confidence: 99%
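One common way to relate a continuous natural sound to ongoing EEG activity, as this statement describes, is to estimate a temporal response function (TRF) by ridge-regressing the EEG onto time-lagged copies of the stimulus envelope. The sketch below does this on synthetic data; the sampling rate, lag window, regularization value, and simulated signals are all illustrative assumptions, not any cited paper's exact pipeline.

```python
# Hedged sketch of TRF estimation: ridge regression of EEG onto
# time-lagged copies of a stimulus envelope. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(1)
fs = 64                              # assumed EEG sampling rate (Hz)
n = fs * 60                          # one minute of data
stimulus = rng.standard_normal(n)    # stand-in for a sound envelope

# Simulate EEG as a lagged filtering of the stimulus plus noise.
true_trf = np.exp(-np.arange(16) / 4.0) * np.sin(np.arange(16) / 2.0)
eeg = np.convolve(stimulus, true_trf)[:n] + rng.standard_normal(n)

# Build the lagged design matrix (lags 0..15 samples, i.e. 0-250 ms at 64 Hz).
n_lags = 16
X = np.zeros((n, n_lags))
for lag in range(n_lags):
    X[lag:, lag] = stimulus[: n - lag]

# Ridge regression: w = (X'X + lambda*I)^-1 X'y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)
print("Correlation with true TRF:", np.corrcoef(w, true_trf)[0, 1].round(2))
```

With real recordings, the recovered weights trace how the brain's response unfolds over time after each moment of the stimulus, which is what makes this approach attractive for the naturalistic listening scenarios the quoted passage discusses.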