2017
DOI: 10.3389/fnins.2016.00579

Divergent Human Cortical Regions for Processing Distinct Acoustic-Semantic Categories of Natural Sounds: Animal Action Sounds vs. Vocalizations

Abstract: A major gap in our understanding of natural sound processing is knowledge of where or how in a cortical hierarchy differential processing leads to categorical perception at a semantic level. Here, using functional magnetic resonance imaging (fMRI), we sought to determine if and where cortical pathways in humans might diverge for processing action sounds vs. vocalizations as distinct acoustic-semantic categories of real-world sound when matched for duration and intensity. This was tested by using relatively less…

Cited by 7 publications (12 citation statements)
References 148 publications (200 reference statements)
“…3, labeled yellow region) shows interaction or integration effects when corresponding sounds are also present (Calvert et al., 2000; Beauchamp et al., 2004a, 2004b; Taylor et al., 2006, 2009; Campanella and Belin, 2007; Campbell, 2008), and is generally activated by human action sounds in the absence of visual input (Lewis et al., 2004, 2006; Bidet-Caulet et al., 2005; Gazzola et al., 2006; Galati et al., 2008; Engel et al., 2009). These regions were further shown to be more strongly activated by human action sounds relative to non-human animal action sounds, and lesser still by non-living action sounds (Engel et al., 2009) or vocalizations (Webster et al., 2017). Thus, from a bottom-up signal-processing perspective, these complexes appear to play a prominent perceptual role in transforming the spatially and temporally dynamic features of natural auditory (and visual) action information into a common neural code, conveying symbolic associations of physically matched audio-visual features.…”
Section: Bottom-up Perspectives of Vision and Hearing Models
confidence: 95%
“…A recent fMRI study by our group directly tested where auditory pathways for processing action sounds by living things versus vocalizations might diverge (Webster et al., 2017). Non-human animal vocalizations and non-human action sounds were used to minimize confounds associated with potentially greater semantic processing of conspecific sounds.…”
Section: Bottom-up Perspectives of Vision and Hearing Models
confidence: 99%
“…This included three basic categories of sound source: (1) action sounds (non-vocalizations) produced by ‘living things’, with human (conspecific) and non-human animal sources representing two subcategories; (2) action sounds produced by ‘non-living things’, including environmental sounds and human-made machinery; and (3) vocalizations (‘living things’), with human versus non-human animals as two subcategories therein. This model was supported by a study that used non-human animal action sounds and vocalizations (also used in the present study), which minimized potential confounds related to the deeper semantic encoding of meaning conveyed by commonly experienced (“over-learned”) human conspecific sounds (Webster et al., 2017). The goal of the present study was to determine whether this same basic organizational principle, namely processing along separable cortical pathways, might also be respected in some of the cortical regions involved in planning and orchestrating oral mimicry of these same sounds at a categorical level.…”
Section: Introduction
confidence: 65%