2016
DOI: 10.48550/arxiv.1601.04908
Preprint

Graded Entailment for Compositional Distributional Semantics

Cited by 16 publications (26 citation statements)
References 0 publications
“…The data sets used in research on this topic tend to be either fully formal, focusing on logic instead of natural language (Allamanis et al., 2016), or fully natural, as is the case for manually annotated data sets of English sentence pairs such as SICK (Marelli et al., 2014) or SNLI (Bowman et al., 2015a). Moreover, entailment recognition models are often endowed with functionality reflecting pre-established linguistic or semantic regularities of the data (Bankova et al., 2016; Serafini and Garcez, 2016; Sadrzadeh et al., 2018). Recently, Shen et al. (2018) showed that recurrent networks can learn to recognize logical inference relations if they are extended with a bias towards modelling hierarchical structures.…”
Section: Introduction and Related Work
confidence: 99%
“…In addition, our DisCoCat-based QNLP framework is naturally generalisable to accommodate mapping sentences to quantum circuits involving mixed states and quantum channels. This is useful as mixed states allow for modelling lexical entailment and ambiguity [55,56].…”
Section: To Clearly Show the Decrease In Training And Test Errors As ...
confidence: 99%
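The graded entailment between density matrices mentioned in the statement above can be illustrated numerically. The sketch below is not taken from the cited paper; it assumes the graded (k-)hyponymy reading in which a word represented by the density matrix rho_A entails rho_B to degree k whenever rho_B - k*rho_A remains positive semidefinite, and the function name and toy matrices are invented for the example.

```python
import numpy as np

def max_hyponymy_grade(rho_a: np.ndarray, rho_b: np.ndarray) -> float:
    """Largest k in [0, 1] such that rho_b - k * rho_a is positive semidefinite.

    Under the assumed k-hyponymy reading, a grade near 1 means the word
    represented by rho_a is strongly entailed by (a hyponym of) rho_b.
    """
    lo, hi = 0.0, 1.0
    for _ in range(50):  # bisection on the grade k
        mid = (lo + hi) / 2
        if np.linalg.eigvalsh(rho_b - mid * rho_a).min() >= -1e-12:
            lo = mid  # still positive semidefinite: grade can be raised
        else:
            hi = mid
    return lo

# Toy density matrices: "dog" is a pure state supported inside the mixed "animal"
dog = np.diag([1.0, 0.0, 0.0])
animal = np.diag([0.5, 0.3, 0.2])
print(max_hyponymy_grade(dog, animal))   # ~0.5: graded entailment dog -> animal
print(max_hyponymy_grade(animal, dog))   # ~0.0: no entailment in the other direction
```

Mixing several pure word states into one density matrix is also how ambiguity is represented in this setting, which is why the quoted passage treats entailment and ambiguity together.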
“…It has been extensively studied, both axiomatically [6,7,9,14] and concretely [10,15,16,17]. Recently, applications of the CPM construction in the context of compositional distributional models of meaning [4,5,15,17] have prompted renewed interest in iterated CPM constructions [3], with the discovery of new features due to their additional degrees of freedom [19].…”
Section: Introduction
confidence: 99%