Proceedings of the 13th International Conference on Computational Semantics - Short Papers 2019
DOI: 10.18653/v1/w19-0503
Distributional Semantics in the Real World: Building Word Vector Representations from a Truth-Theoretic Model

Abstract: Distributional semantics models (DSMs) are known to produce excellent representations of word meaning, which correlate with a range of behavioural data. As lexical representations, they have been said to be fundamentally different from truth-theoretic models of semantics, where meaning is defined as a correspondence relation to the world. There are two main aspects to this difference: a) DSMs are built over corpus data which may or may not reflect 'what is in the world'; b) they are built from word co-occurren…

Cited by 2 publications (3 citation statements)
References 23 publications
“…A single grid search is performed over the hyperparameter space, using 200 iterations of Bayesian optimisation with early stopping. For EVA, I follow results by Kuzmenko and Herbelot (2019) showing that linguistic phenomena are not all modelled by the same feature types in the VG. The validation data is used to select the best combination of feature types for a task (attributes, relations, situational co-occurrences), running the hyperparameter optimisation over all possible combinations.…”
Section: FastText Trained on VG (FTVG) (supporting)
confidence: 57%
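The selection procedure described in the quote above, i.e. scoring every possible combination of Visual Genome feature types on validation data and keeping the best one, can be sketched as follows. This is a minimal illustration only: the feature-type names come from the quote, but the scoring function is a toy stand-in for the real hyperparameter-optimisation-plus-evaluation pipeline.

```python
from itertools import combinations

# Feature types mentioned in the citation statement.
FEATURE_TYPES = ["attributes", "relations", "situational"]

def validation_score(features):
    # Hypothetical stand-in for the real pipeline (Bayesian
    # hyperparameter search + task evaluation on validation data).
    # A toy scoring rule, for demonstration only.
    return len(features) + (0.5 if "attributes" in features else 0.0)

def best_feature_combination(feature_types):
    best, best_score = None, float("-inf")
    # Enumerate every non-empty subset of feature types and keep
    # the subset with the highest validation score.
    for r in range(1, len(feature_types) + 1):
        for combo in combinations(feature_types, r):
            score = validation_score(combo)
            if score > best_score:
                best, best_score = combo, score
    return best

print(best_feature_combination(FEATURE_TYPES))
```

In a real setting, `validation_score` would internally run the full hyperparameter search for each subset, which is why the quote describes the optimisation as running "over all possible combinations".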
See 1 more Smart Citation
“…A single grid search is performed over the hyperparameter space, using 200 iterations of Bayesian optimisation 5 with early stopping. For EVA, I follow results by Kuzmenko and Herbelot (2019) showing that linguistic phenomena are not all modelled by the same feature types in the VG. The validation data is used to select the best combination of feature types for a task (attributes, relations, situational co-occurrences), running the hyperparameter optimisation over all possible combinations.…”
Section: Fasttext Trained On Vg (Ftvg)supporting
confidence: 57%
“…I follow the methodology introduced by Kuzmenko and Herbelot (2019), who extract information about VG instances and use it to create a 'set-theoretic' vector space. The example below shows a subset of the annotation for image ID 1, after some initial pre-processing of the data.…”
Section: (Small) Data (mentioning)
confidence: 99%
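The 'set-theoretic' space mentioned above can be sketched roughly as follows: each dimension of the space is a predicate observed in the annotations (an attribute or relation), and each entity instance receives a binary vector marking which predicates hold of it. The data and predicate names below are invented for illustration; they are not taken from Visual Genome itself.

```python
# Toy instance-to-predicate annotations (hypothetical, VG-style).
instances = {
    "cat_1": {"black", "furry", "sit(cat_1, mat_1)"},
    "dog_1": {"furry", "run(dog_1)"},
}

def build_space(instances):
    # The dimension inventory is the union of all observed predicates,
    # sorted for a stable dimension order.
    dims = sorted(set().union(*instances.values()))
    # Each entity gets a binary vector: 1 if the predicate holds of it.
    vectors = {
        ent: [1 if p in preds else 0 for p in dims]
        for ent, preds in instances.items()
    }
    return dims, vectors

dims, vectors = build_space(instances)
print(dims)
print(vectors["cat_1"])
```

Binary, predicate-indexed dimensions are what makes such a space logically interpretable: each coordinate corresponds to a truth value for a specific predicate, rather than to an opaque latent feature.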
“…They do not propose a concrete algorithm, but they discuss several challenges, and suggest that grounded data might be necessary. In this vein, Kuzmenko and Herbelot (2019) use the Visual Genome dataset (Krishna et al., 2017) to learn vector representations with logically interpretable dimensions, although these vectors are not as expressive as Copestake and Herbelot's ideal distributions.…”
Section: Logic (mentioning)
confidence: 99%