2009
DOI: 10.1016/j.neuroimage.2009.05.041
Different categories of living and non-living sound-sources activate distinct cortical networks

Abstract: With regard to hearing perception, it remains unclear whether, or to what extent, different conceptual categories of real-world sounds and related categorical knowledge are differentially represented in the brain. Semantic knowledge representations are reported to include the major divisions of living versus non-living things, plus more specific categories including animals, tools, biological motion, faces, and places: categories typically defined by their characteristic visual features. Here, we use…

Cited by 89 publications (109 citation statements)
References 109 publications (150 reference statements)
“…2, yellow blobs and Table III), who performed the task recruiting only the left inferior and superior frontal gyri and the bilateral cerebellum. Notably, they recruited none of the brain regions usually involved in environmental sound acoustic analysis and representation (i.e., the left parietal lobule and the right posterior temporal cortex) [Engel et al, 2009;Kraut et al, 2006;Lewis et al, 2004]. No activation during sound relative to color imagery in the CI candidate group was observed.…”
Section: Sound Imagery in Normal Hearing and Postlingually Deaf Subjects
Confidence: 88%
“…We offer the hypothesis that this change occurred early in the time course of auditory deprivation for the right posterior temporal cortex, because its activation correlated with the duration of hearing loss rather than with the duration of deafness. Because the right posterior temporal cortex is specialized in multimodal integration of NSS and music (Beauchamp et al, 2004;Doehrmann & Naumer, 2008;Engel et al, 2009;Lewis et al, 2004;Zatorre & Halpern, 1993), when auditory inputs weaken and oral communication becomes cognitively more demanding, the involvement of the right posterior temporal cortex in NSS processing may decrease. This loss of function could potentially make available cognitive resources for phonological processing, as suggested by the observation that this region is abnormally recruited in post-lingual deaf subjects during phonological tasks (Lazard et al, 2010) or during lip reading (Lee, Truy, et al, 2007).…”
Section: Discussion
Confidence: 99%
“…Sound imagery in normal-hearing subjects activated multimodal cognitive areas, the bilateral frontal and left parietotemporal areas (Kraut et al, 2006;Leff et al, 2008;Scott, Blank, Rosen, & Wise, 2000;Shannon & Buckner, 2004), and areas dedicated to NSS processing such as the right posterior temporal cortex and the left insula (Beauchamp, Lee, Argall, & Martin, 2004; Doehrmann & Naumer, 2008;Engel, Frum, Puce, Walker, & Lewis, 2009;Halpern & Zatorre, 1999;Lewis et al, 2004;Thierry et al, 2003;Zatorre & Halpern, 1993) (Fig. 2, grey blobs and Table 2).…”
Section: The Non-speech Sound Imagery Network in Normal-hearing and D…
Confidence: 99%
“…The selectivity of the ventral stream for the meaning of sounds has been confirmed in a series of studies using different categories of environmental sounds as stimuli (e.g., tools, animals, man-made objects, living and non-living sound sources, actions, and musical instruments; Lewis et al 2005;Murray et al 2006;Altmann et al 2007;Engel et al 2009;Leaver and Rauschecker 2010;Lewis et al 2011). The processing which leads eventually to sound recognition involves two sequential steps, which are hierarchically organized.…”
Section: The Ventral and Dorsal Auditory Streams
Confidence: 82%