2022
DOI: 10.1523/jneurosci.1243-21.2022

A Distributed Network for Multimodal Experiential Representation of Concepts

Abstract: The architecture of the cortical system underlying concept representation is a topic of intense debate. Much evidence supports the claim that concept retrieval selectively engages sensory, motor, and other neural systems involved in the acquisition of the retrieved concept, yet there is also strong evidence for involvement of high-level, supramodal cortical regions. A fundamental question about the organization of this system is whether modality-specific information originating from sensory and motor areas is …



Cited by 24 publications (18 citation statements)
References 74 publications
“…MVPA, including decoding and representational similarity analysis (RSA), tests for information represented in fine-grained, multi-voxel activity patterns (Haxby et al., 2014; Norman et al., 2006). RSA has recently been used to relate computational models of semantics to the brain, revealing that a grounded perceptual-motor model explains brain representations (including in multimodal regions) better than taxonomic categories or distributional information (Fernandino et al., 2022; Tong et al., 2022). These findings clearly corroborate our results and our model.…”
Section: Evidence for Hybrid Theories of Conceptual Processing (supporting)
confidence: 87%
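The RSA procedure described in the statement above can be sketched in a few lines: build a representational dissimilarity matrix (RDM) from multi-voxel activation patterns, build another from a model's feature vectors, and rank-correlate the two. This is a minimal illustration on simulated data; the variable names `experiential` and `distributional` are hypothetical stand-ins for the models compared in the cited work, not the authors' actual pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_model_fit(patterns, features):
    """Spearman correlation between the brain RDM (from multi-voxel
    activation patterns) and a model RDM (from model feature vectors).
    Both RDMs use 1 - Pearson correlation as the distance measure."""
    brain_rdm = pdist(patterns, metric="correlation")   # vector of pairwise distances
    model_rdm = pdist(features, metric="correlation")
    rho, _ = spearmanr(brain_rdm, model_rdm)
    return rho

# Simulated data: 20 concepts x 50 voxels; an "experiential" model that
# shares structure with the patterns, and an unrelated "distributional" model.
rng = np.random.default_rng(0)
patterns = rng.standard_normal((20, 50))
experiential = patterns[:, :40] + 0.3 * rng.standard_normal((20, 40))
distributional = rng.standard_normal((20, 300))

print(rsa_model_fit(patterns, experiential))    # high
print(rsa_model_fit(patterns, distributional))  # near zero
```

The better-fitting model is simply the one whose RDM correlates more strongly with the brain RDM; model comparison in the cited studies follows the same logic, with appropriate statistics across participants and regions.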
“…While such "cross-modality spreading" cannot be completely excluded, it is unlikely to explain all multimodal activations, especially in the trimodal IPL and pMTG. Individual studies found multimodal effects in left IPL and pMTG even when the individual modalities were controlled for (Kuhnke et al., 2020b; Tong et al., 2022). Many experiments included in this meta-analysis similarly isolated modality-specific activity while controlling for other modalities (e.g.…”
Section: Multimodal Convergence Zones for Conceptual Processing (mentioning)
confidence: 99%
“…We had previously shown that individual words could be decoded from fMRI activation patterns in these areas using a multimodal sensory-motor model, but not with a model based on ortho-phonological features of the corresponding word forms (Fernandino, 2016b). Furthermore, the similarity structure of fMRI activation patterns in these regions predicts the semantic similarity structure of both object and event nouns, and it does so significantly more accurately when semantic similarity is estimated from experiential features than when it is based on taxonomic or distributional information (Fernandino et al., 2022; Tong et al., 2022).…”
Section: Discussion (mentioning)
confidence: 94%
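The word-decoding result quoted above can be illustrated with a generic MVPA sketch: train a linear classifier to predict word identity from multi-voxel patterns and score it with cross-validation. The data are simulated and scikit-learn is an assumed dependency; this shows the general decoding technique, not the cited authors' actual model-based method.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import cross_val_score

# Simulated experiment: 8 words x 20 repetitions, 100 voxels. Each word has
# a stable multi-voxel signature; each trial adds independent noise.
rng = np.random.default_rng(1)
n_words, n_reps, n_vox = 8, 20, 100
signatures = rng.standard_normal((n_words, n_vox))
X = np.repeat(signatures, n_reps, axis=0) + rng.standard_normal((n_words * n_reps, n_vox))
y = np.repeat(np.arange(n_words), n_reps)

# 5-fold cross-validated decoding accuracy; chance level is 1/8 = 0.125.
acc = cross_val_score(RidgeClassifier(), X, y, cv=5).mean()
print(round(acc, 3))
```

Above-chance cross-validated accuracy is the standard evidence that the patterns carry information about word identity; the cited work additionally asks which feature model supports that decoding.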
“…For instance, action features are represented in somatomotor regions (Hauk et al., 2004; Tettamanti et al., 2005; Vukovic et al., 2017), while sound features are represented in auditory areas (Bonner and Grossman, 2012; Kiefer et al., 2008; Trumpp et al., 2013). Cross-modal convergence zones integrate modality-specific features into more abstract, cross-modal representations (Binder, 2016; Fernandino et al., 2016a; Kuhnke et al., 2023, 2020b; Tong et al., 2022). We previously proposed a distinction among cross-modal convergence zones between "multimodal" regions, which retain modality-specific information, and "amodal" regions, which completely abstract away from modality-specific input (Kuhnke et al., 2023, 2022, 2020b).…”
Section: Introduction (mentioning)
confidence: 99%