2021
DOI: 10.1111/cogs.12922
Visual and Affective Multimodal Models of Word Meaning in Language and Mind

Abstract: One of the main limitations of natural language‐based approaches to meaning is that they do not incorporate multimodal representations the way humans do. In this study, we evaluate how well different kinds of models account for people's representations of both concrete and abstract concepts. The models we compare include unimodal distributional linguistic models as well as multimodal models which combine linguistic with perceptual or affective information. There are two types of linguistic models: those based …

Cited by 34 publications (42 citation statements): 5 supporting, 35 mentioning, 0 contrasting. References 75 publications (153 reference statements).
“…Notably, this superiority held even when comparing SWOW-PPMI-SVD to our word2vec representations, whose SGNS learning algorithm is equivalent to factorization (e.g., SVD) of a PMI matrix (Levy & Goldberg, 2014). As we noted in the first Interim discussion, this result is consistent with other work comparing text-based vectors and vectors from association norms (De Deyne et al., 2015; De Deyne et al., 2021; Vankrunkelsven et al., 2018), but we have extended these comparisons by considering more text-based representations and similarity functions. Like De Deyne et al. (2015) and Vankrunkelsven et al. (2018), we suggest that free association data contain different information from text corpora, information that is apparently more relevant to predicting similarity (and other semantic phenomena).…”
Section: Representations (supporting)
confidence: 89%
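The Levy and Goldberg (2014) equivalence this statement leans on is concrete enough to sketch: SGNS implicitly factorizes a (shifted) PMI matrix, and constructions like SWOW-PPMI-SVD make the factorization explicit by applying truncated SVD to a positive-PMI matrix built from counts. Below is a minimal Python sketch assuming a raw cue-response (or word-context) count matrix; the function name, unsmoothed PPMI, and dimensionality default are illustrative assumptions, not the cited papers' exact pipeline.

```python
import numpy as np

def ppmi_svd_vectors(counts, dim=300):
    """PPMI weighting followed by truncated SVD (illustrative sketch).

    counts: (n_cues, n_responses) array of raw association or
    co-occurrence counts; rows/columns with zero totals are assumed
    to have been filtered out beforehand.
    """
    total = counts.sum()
    row = counts.sum(axis=1, keepdims=True)   # marginal counts per cue
    col = counts.sum(axis=0, keepdims=True)   # marginal counts per response
    expected = row * col / total              # counts expected under independence
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(counts / expected)
    # Positive PMI: clip negative values and the -inf from zero counts to 0.
    ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)
    # Truncated SVD: keep the top `dim` latent dimensions as embeddings.
    u, s, _ = np.linalg.svd(ppmi, full_matrices=False)
    return u[:, :dim] * s[:dim]
```

Variants of this construction differ in whether the singular values are kept, square-rooted, or dropped when forming the embeddings; the comparison in the quote does not hinge on that choice.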
“…There have been some attempts at comparing different similarity models, both in cognitive science (Bullinaria & Levy, 2007; Burgess & Lund, 2000; Landauer & Dumais, 1997; Rohde, Gonnerman, & Plaut, 2006; Mandera et al., 2017; Pereira, Gershman, Ritter, & Botvinick, 2016) and computational linguistics (e.g., Gerz, Vulić, Hill, Reichart, & Korhonen, 2016; Hill, Cho, Jean, Devin, & Bengio, 2014; Ponti, Vulić, Glavaš, Mrkšić, & Korhonen, 2020; Wieting, Bansal, Gimpel, & Livescu, 2015). However, most of this work has used benchmark datasets (e.g., the TOEFL dataset and SimLex-999; Hill, Reichart, & Korhonen, 2015; Landauer & Dumais, 1997) that sample pairs of words from the entire lexicon and include a large number of rather unrelated pairs of words (for a similar point, see De Deyne, Navarro, Collell, & Perfors, 2021). For example, in SimLex-999, pairs include "wife-straw" and "ankle-window."…”
Section: Introduction (mentioning)
confidence: 99%
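The benchmark protocol this statement criticizes is easy to make concrete: score every word pair with the model (typically cosine similarity) and report the Spearman correlation with the human ratings. A small sketch, assuming a dict of word vectors and rating triples as hypothetical inputs (SimLex-999 itself ships as a tab-separated file of pairs and ratings):

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def evaluate_pairs(vectors, pairs):
    """vectors: {word: np.ndarray}; pairs: iterable of (w1, w2, human_rating).

    Returns the Spearman correlation between model cosines and human
    ratings, skipping out-of-vocabulary pairs (the usual benchmark
    convention). Input names are illustrative assumptions.
    """
    model_scores, human_scores = [], []
    for w1, w2, rating in pairs:
        if w1 in vectors and w2 in vectors:
            model_scores.append(cosine(vectors[w1], vectors[w2]))
            human_scores.append(rating)
    return spearmanr(model_scores, human_scores).correlation
```

The quote's point concerns what goes into `pairs`: when a benchmark samples from the whole lexicon, many pairs are as unrelated as "wife-straw", so a model can score well simply by separating related from unrelated words rather than by capturing fine-grained similarity.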
“…This aligns with recent work that indicates that semantic models based on distributional language statistics and semantic models based on word association data capture distinct and complementary information. For instance, semantic models based on word association data have been found to capture relatedness information (De Deyne, Perfors, et al., 2016) and visual and affective features of concepts (De Deyne et al., 2021; Vankrunkelsven et al., 2018). This is notable because recent research has indicated that statistical regularities in the visual domain and other visual features influence children's early lexical development (e.g., Clerkin et al., 2017; Colunga & Sims, 2017; McDonough et al., 2011).…”
Section: Child-oriented Word Associations vs. Child-directed Speech (mentioning)
confidence: 99%
“…Furthermore, even when DSMs and associative models are both supplemented with additional visual and/or affective information as in De Deyne et al. (2021), associative models still continue to better capture behavioral performance across relatedness/similarity judgments and tasks that rely on such information. Therefore, the data underlying network models (free association) and distributional models (natural language corpora) appear to be critical when considering their relative predictive power, and the shared variance between free association and other tasks may be confounding some of these observed patterns.…”
Section: Semantic Network: Looking Under the Hood (mentioning)
confidence: 99%
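One common way to "supplement" a linguistic or associative model with visual or affective information, as described in this statement, is weighted concatenation of per-modality vectors. The sketch below shows that generic fusion scheme; it is an illustrative assumption, not necessarily De Deyne et al.'s (2021) exact method, and the mixing weight default is arbitrary.

```python
import numpy as np

def fuse_modalities(linguistic, perceptual, alpha=0.5):
    """Concatenate L2-normalized modality vectors with a mixing weight.

    `alpha` balances the linguistic and perceptual/affective modalities;
    0.5 is an arbitrary illustrative default, typically tuned on held-out
    similarity judgments.
    """
    lin = linguistic / np.linalg.norm(linguistic)
    per = perceptual / np.linalg.norm(perceptual)
    return np.concatenate([alpha * lin, (1.0 - alpha) * per])
```

Cosine similarity over the fused vectors then reflects both modalities at once, with `alpha` controlling their relative influence, which is the sense in which both DSMs and associative models can be "supplemented" and still compared on the same behavioral data.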