2015
DOI: 10.1162/tacl_a_00145
Deriving Boolean structures from distributional vectors

Abstract: Corpus-based distributional semantic models capture degrees of semantic relatedness among the words of very large vocabularies, but have problems with logical phenomena such as entailment, which are instead elegantly handled by model-theoretic approaches that, in turn, do not scale up. We combine the advantages of the two views by inducing a mapping from distributional vectors of words (or sentences) into a Boolean structure of the kind in which natural language terms are assumed to denote. We evaluate this B…
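The mapping the abstract describes can be pictured with a minimal sketch: a learned linear-plus-sigmoid projection turns a dense distributional vector into a binary feature vector, and entailment is then read off as feature inclusion. Everything below (dimensions, threshold, loss shape, and function names) is an illustrative assumption, not the paper's actual architecture or objective.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM_IN, DIM_BOOL = 300, 100          # assumed sizes of the dense and Boolean spaces
W = rng.normal(scale=0.01, size=(DIM_BOOL, DIM_IN))  # trainable projection

def to_boolean(v, threshold=0.5):
    """Map a dense vector to Boolean features via a sigmoid and a threshold."""
    probs = 1.0 / (1.0 + np.exp(-W @ v))
    return probs > threshold

def entails(u, v):
    """Entailment as feature inclusion: every feature active for u
    must also be active for v (e.g., dog -> animal)."""
    bu, bv = to_boolean(u), to_boolean(v)
    return bool(np.all(bv[bu]))

def inclusion_loss(u, v, positive, margin=1.0):
    """Hinge-style training signal (an assumed stand-in for the paper's
    objective): positive pairs are penalized for features of u not
    covered by v; negative pairs are pushed to violate inclusion."""
    pu = 1.0 / (1.0 + np.exp(-W @ u))
    pv = 1.0 / (1.0 + np.exp(-W @ v))
    violation = np.maximum(pu - pv, 0.0).sum()
    return violation if positive else max(margin - violation, 0.0)
```

Training would adjust W so that inclusion_loss stays low over a labeled set of entailing and non-entailing word pairs, after which entails can be applied to unseen pairs.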

Cited by 38 publications (38 citation statements) · References 25 publications
“…The semi-supervised model of Kruszewski et al. (2015) also models entailment in a vector space, but they use a discrete vector space. They train a mapping from distributional semantic vectors to Boolean vectors such that feature inclusion respects a training set of entailment relations.…”
Section: Related Work
confidence: 99%
“…Indeed, it has been shown that two existing lexical entailment models fail to account for similarity between the antecedent and consequent, leading to the conclusion that such models only learn to predict prototypicality: that is, they predict that cat entails animal because animal is usually entailed, and therefore will also predict that sofa entails animal. Yet it remains unclear why such models make for such strong baselines (Weeds et al., 2014; Kruszewski et al., 2015).…”
Section: Introduction
confidence: 99%
“…Research on lexical entailment using distributional semantics has now spanned more than a decade, and has been approached using both unsupervised (Weeds et al., 2004; Kotlerman et al., 2010; Lenci and Benotto, 2012; Santus, 2013) and supervised techniques (Baroni et al., 2012; Fu et al., 2014; Roller et al., 2014; Weeds et al., 2014; Kruszewski et al., 2015; Turney and Mohammad, 2015; Santus et al., 2016). Most of the work on unsupervised methods is based on the Distributional Inclusion Hypothesis (Weeds et al., 2004; Zhitomirsky-Geffet and Dagan, 2005), which states that the contexts in which a hypernym appears should be a superset of its hyponyms' contexts.…”
Section: Introduction
confidence: 99%
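The Distributional Inclusion Hypothesis this statement cites can be made concrete with a toy score in the spirit of Weeds precision: measure how much of the hyponym's context weight is covered by the hypernym's contexts. The corpus handling, window size, and raw-count weighting below are simplifying assumptions; real systems typically use large corpora and PPMI-weighted context vectors.

```python
from collections import Counter

def context_counts(target, corpus, window=2):
    """Count context words within a fixed window around each occurrence
    of `target`; `corpus` is a list of tokenized sentences."""
    counts = Counter()
    for sent in corpus:
        for i, w in enumerate(sent):
            if w == target:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(c for j, c in enumerate(sent[lo:hi], lo) if j != i)
    return counts

def inclusion_score(narrow, broad, corpus):
    """Fraction of the narrow term's context mass that also occurs with
    the broad term (1.0 = the broad term's contexts fully include it)."""
    cn, cb = context_counts(narrow, corpus), context_counts(broad, corpus)
    total = sum(cn.values())
    return sum(v for w, v in cn.items() if w in cb) / total if total else 0.0
```

Because the score is asymmetric, inclusion_score("cat", "animal", corpus) should come out higher than inclusion_score("animal", "cat", corpus) on a corpus where animal occurs in a strict superset of cat's contexts, which is exactly the directionality the hypothesis predicts.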