2018
DOI: 10.31234/osf.io/q97f8
Preprint

Visual and Affective Grounding in Language and Mind

Abstract: One of the main limitations of natural language-based approaches to meaning is that they are not grounded. In this study, we evaluate how well different kinds of models account for people’s representations of both concrete and abstract concepts. The models are both unimodal (language-based only) and multimodal distributional semantic models (which additionally incorporate perceptual and/or affective information). The language-based models include both external (based on text corpora) and internal (derive…
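As context for the abstract, the fusion step in such multimodal models is often implemented as a simple concatenation of per-modality vectors. Below is a minimal sketch under that assumption; the vectors, dimensionalities, and the valence/arousal/dominance reading of the affective features are illustrative toy choices, not the paper's actual data or method.

```python
# Minimal sketch of a multimodal distributional model of the kind the
# abstract describes: a concept's linguistic vector is concatenated with
# visual and affective feature vectors, and relatedness between concepts
# is scored by cosine similarity. All vectors below are hypothetical toy
# values, not the paper's actual embeddings.
import numpy as np

def l2_normalize(v):
    """Scale a vector to unit length so each modality contributes comparably."""
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def multimodal_vector(linguistic, visual, affective):
    """Fuse the three modalities by concatenating their normalized vectors."""
    return np.concatenate([l2_normalize(np.asarray(m, dtype=float))
                           for m in (linguistic, visual, affective)])

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy concepts: 4-d linguistic, 3-d visual, and 3-d affective features
# (the affective triple could stand for valence, arousal, and dominance).
dog = multimodal_vector([0.2, 0.7, 0.1, 0.4], [0.9, 0.3, 0.5], [0.8, 0.6, 0.5])
cat = multimodal_vector([0.3, 0.6, 0.2, 0.5], [0.8, 0.4, 0.6], [0.7, 0.5, 0.6])
print(f"dog-cat similarity: {cosine(dog, cat):.3f}")
```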

Cited by 9 publications (19 citation statements)
References 56 publications

Citation statements:

“…We expect that the multimodal model integrating linguistic, visual, and emotional information will outperform a purely linguistic model, as well as models that combine linguistic–visual and linguistic–emotional information. In addition, we expect that adding visual representations will especially be beneficial for more concrete concepts, whereas emotional information will especially be beneficial for more abstract concepts, in line with the empirical evidence reviewed above (and with initial findings from De Deyne et al, 2018). As in previous models, our work uses visual and emotional data that can only be considered as providing a static window into the embodied sensory–motor and affective states of the agent, rather than truly embodied information.…”
Section: Introduction (supporting)
confidence: 73%
“…As mentioned in the introduction, a previous study (De Deyne et al, 2018) also examined the change in performance for distributional models of semantics, when adding experiential (i.e., visual and emotional) information. They found that including experiential information led to little or no improvement for internal language models, but had a moderate positive effect for external language models.…”
Section: Discussion (mentioning)
confidence: 99%
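The performance comparison described in this excerpt is typically operationalized by correlating model-derived similarities with human relatedness judgments, separately for a language-only model and for one with experiential features added. The sketch below assumes that setup; the embeddings, ratings, and the appended experiential feature are hypothetical toy values, not the study's materials.

```python
# Sketch of the kind of performance change the excerpt refers to:
# Spearman-correlate model similarities with human relatedness ratings,
# once for a language-only model and once after appending an experiential
# feature. All numbers are hypothetical toy values.
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical human relatedness ratings for three word pairs.
human_ratings = [0.9, 0.4, 0.1]

# Toy language-only embeddings for the same three pairs.
pairs_language = [([0.2, 0.7], [0.3, 0.6]),
                  ([0.2, 0.7], [0.9, 0.1]),
                  ([0.5, 0.5], [0.1, 0.9])]
# "Multimodal" variant: append one experiential feature per word.
pairs_multimodal = [(u + [0.8], v + [0.7]) for u, v in pairs_language]

for name, pairs in [("language-only", pairs_language),
                    ("multimodal", pairs_multimodal)]:
    sims = [cosine(np.array(u), np.array(v)) for u, v in pairs]
    rho, _ = spearmanr(sims, human_ratings)
    print(f"{name}: Spearman rho = {rho:.2f}")
```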