2020
DOI: 10.1007/s00426-020-01429-7
Images of the unseen: extrapolating visual representations for abstract and concrete words in a data-driven computational model

Abstract: Theories of grounded cognition assume that conceptual representations are grounded in sensorimotor experience. However, abstract concepts such as jealousy or childhood have no directly associated referents with which such sensorimotor experience can be made; therefore, the grounding of abstract concepts has long been a topic of debate. Here, we propose (a) that systematic relations exist between semantic representations learned from language on the one hand and perceptual experience on the other hand, (b) that…

Cited by 21 publications (38 citation statements)
References 100 publications
“…used vision-based representations to demonstrate perception-based conceptual combination effects during the processing of compound words such as swordfish. In another study, Günther, Petilli, Vergallito, and Marelli (2020) found that vision-based representations can, with some success, be predicted from text-based distributional vectors, offering a mechanism via which non-experienced objects can be grounded in visual experience. Similarly, Lazaridou et al. (2017) demonstrated that visual intuitions for novel words learned from text alone can be predicted from their model combining textual with visual representations.…”
Section: In Cognitive Science, Vision-Based Representations Obtained From Computer Vision
confidence: 99%
“…In the present article, we present the ViSpa (Vision Spaces) system, an adaptation of the VGG-F model that, in addition to representations for individual images, includes prototypical vision-based representations for concepts. Precursors of ViSpa were already employed in some of the studies outlined in the previous section (Günther, Petilli, Vergallito, & Marelli, 2020; Petilli et al., 2021).…”
Section: ViSpa: Vision Spaces
confidence: 99%
“…Theoretical proposals like the language and situated simulation theory (Barsalou, Santos, Simmons, & Wilson, 2008) or the symbol interdependency hypothesis (Louwerse, 2011, 2018) have been developed to explain how these amodal representations can generate emotional and other sensorimotor word representations. Additionally, some mapping mechanisms have also been proposed in vector space models to act as a link between amodal and modal representations (Günther, Petilli, Vergallito, & Marelli, 2020; Hollis, Westbury, & Lefsrud, 2017; Martínez-Huertas, Jorge-Botana, Luzón, & Olmos, 2021). In some contexts, this perspective has been termed the specific dimensionality hypothesis, as only some parts of the amodal representation seem to map between the two formats of representation.…”
Section: Introduction
confidence: 99%