Proceedings of the Eighth Workshop on Cognitive Aspects of Computational Language Learning and Processing 2018
DOI: 10.18653/v1/w18-2806
Abstract: We present a novel methodology involving mappings between different modes of semantic representations. We propose distributional semantic models as a mechanism for representing the kind of world knowledge inherent in the system of abstract symbols characteristic of a sophisticated community of language users. Then, motivated by insight from ecological psychology, we describe a model approximating affordances, by which we mean a language learner's direct perception of opportunities for action in an environment.…
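The abstract only gestures at how affordance-like structure can be read off a distributional space. The toy sketch below is a minimal illustration of that general idea, not the paper's actual model: it ranks candidate actions for an object by cosine similarity between verb and noun vectors in a shared space. All vectors, words, and function names here are invented for illustration.

```python
import numpy as np

# Toy distributional vectors (illustrative only; a real system would use
# vectors learned from a large corpus, e.g. word2vec or a count-based model).
vectors = {
    "cup":   np.array([0.9, 0.1, 0.3]),
    "door":  np.array([0.2, 0.8, 0.1]),
    "drink": np.array([0.8, 0.2, 0.4]),
    "open":  np.array([0.3, 0.9, 0.2]),
    "throw": np.array([0.5, 0.4, 0.6]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_affordances(noun, verbs):
    """Rank candidate actions for an object by verb-noun similarity
    in the shared distributional space."""
    scores = {v: cosine(vectors[noun], vectors[v]) for v in verbs}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_affordances("cup", ["drink", "open", "throw"]))   # "drink" ranks first
print(rank_affordances("door", ["drink", "open", "throw"]))  # "open" ranks first
```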

Cited by 3 publications (3 citation statements)
References 15 publications
“…Sometimes, affordance datasets leverage multimodal settings such as images (Myers et al, 2015), or 3D models and environments (Suglia et al, 2021; Mandikal and Grauman, 2021; Nagarajan and Grauman, 2020), but require annotations for every object. In contrast, our model learns affordances in an unsupervised manner, and unlike Fulda et al (2017), Loureiro and Jorge (2018), McGregor and Lim (2018), and Persiani and Hellström (2019) which extract affordance structure automatically from word embeddings alone, our model learns from interacting with objects in a 3D space, grounding its representations to cause-and-effect pairs of physical forces and object motion.…”
Section: Affordances In Language Technology
confidence: 99%
“…In what follows we present an overview of the phenomenon followed by a preliminary proposal for a context-sensitive framework for interpreting predicate-object coercions. Our methodology is inspired by theoretical insight into environmental affordances, and in this regard is in line with technical applications described in the area of image labelling by McGregor and Lim (2018). Motivated by an analysis of some of the shortcomings of a more general probabilistic approach, and also by a number of previous approaches to interpreting semantic coercion, we outline a model grounded in the distributional semantic modelling paradigm (Clark, 2015).…”
Section: Introduction
confidence: 99%
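The citing work above sketches a distributional treatment of predicate-object coercion (recovering a covert event in phrases like "enjoy the book"). The snippet below illustrates one common way such a choice can be scored distributionally, by selecting the candidate event whose vector best matches the object; it is a hedged sketch, not the framework proposed in that paper, and the vectors and names are invented for illustration.

```python
import numpy as np

# Illustrative vectors only; a real model would use corpus-derived embeddings.
vectors = {
    "book":     np.array([0.8, 0.2, 0.4]),
    "coffee":   np.array([0.1, 0.9, 0.3]),
    "reading":  np.array([0.9, 0.1, 0.3]),
    "drinking": np.array([0.2, 0.8, 0.2]),
    "kicking":  np.array([0.3, 0.3, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def interpret_coercion(obj, candidate_events):
    """Pick the covert event that best fits the object,
    e.g. 'enjoy the book' -> 'enjoy reading the book'."""
    return max(candidate_events, key=lambda e: cosine(vectors[obj], vectors[e]))

print(interpret_coercion("book", ["reading", "drinking", "kicking"]))    # reading
print(interpret_coercion("coffee", ["reading", "drinking", "kicking"]))  # drinking
```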