2015
DOI: 10.1017/s1351324915000108
Textual entailment graphs

Abstract: In this work, we present a novel type of graph for natural language processing (NLP), namely textual entailment graphs (TEGs). We describe the complete methodology we developed for the construction of such graphs and provide some baselines for this task by evaluating relevant state-of-the-art technology. We situate our research in the context of text exploration, since it was motivated by joint work with industrial partners in the text analytics area. Accordingly, we present our motivating scenario and the fi…
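To make the central data structure concrete, the sketch below represents a tiny textual entailment graph as a directed graph whose nodes are text fragments and whose edges mean "source entails target". The example statements (with a customer-feedback flavor) are invented for illustration and are not taken from the paper; the paper's contribution is the construction methodology, not this toy representation.

```python
import networkx as nx

# A textual entailment graph (TEG): nodes are text fragments, and a directed
# edge u -> v means "u entails v".  The statements below are invented for
# illustration; they are not examples from the paper.
teg = nx.DiGraph()
teg.add_edge("the coffee machine on the train was broken",
             "there was a problem with the on-board service")
teg.add_edge("there was a problem with the on-board service",
             "the customer was dissatisfied")

# Entailment is transitive, so the transitive closure makes the implied
# edge (broken coffee machine -> dissatisfied customer) explicit.
closure = nx.transitive_closure(teg)
for source, target in closure.edges:
    print(f"{source!r}  entails  {target!r}")
```

Enforcing this kind of transitive consistency is the structural constraint that global entailment-graph learning methods such as Berant et al.'s (cited further down) exploit.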

Cited by 19 publications (12 citation statements)
References 21 publications
“…In order to verify the applicability of our method to a data set other than SICK, and to compare our results with several other existing systems, we used the EXCITEMENT English development data set (Kotlerman et al. 2015), which is freely available from the EOP web site. EXCITEMENT contains pieces of email feedback sent by the customers of a railway company where they state reasons for satisfaction or dissatisfaction with the company.…”
Section: Data Sets
confidence: 99%
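Since the cited data set is distributed as text/hypothesis pairs via the EOP web site, here is a small loading sketch. It assumes an RTE-style XML layout (<pair> elements carrying an entailment attribute with <t> and <h> children); the element names, attribute names, and the file name in the usage comment are assumptions, not details taken from this page.

```python
import xml.etree.ElementTree as ET

def load_th_pairs(path):
    """Read text/hypothesis pairs from an RTE-style XML file.

    Assumes <pair id=".." entailment=".."> elements with <t> and <h>
    children; the actual EXCITEMENT file layout may differ.
    """
    pairs = []
    for pair in ET.parse(path).getroot().iter("pair"):
        pairs.append({
            "id": pair.get("id"),
            "text": (pair.findtext("t") or "").strip(),
            "hypothesis": (pair.findtext("h") or "").strip(),
            "label": pair.get("entailment"),
        })
    return pairs

# Usage (hypothetical file name):
# pairs = load_th_pairs("excitement_dev_en.xml")
# print(len(pairs), pairs[0]["label"])
```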
“…This task corresponds to RTE-3, and the main difference from Evaluation 1 is that these pairs come from real-world interactions and were produced by native speakers. All T-H pairs are sampled from application gold data which were manually constructed on the basis of anonymized customer interactions (… for German; Kotlerman et al. (2015) for English and Italian). The sets are fairly large (5300 pairs for English, 1700 for Italian, 1274 for German), and were sampled to be balanced.…”
Section: Evaluation 2: T-H Pairs From Application Data
confidence: 99%
“…Our architecture is an encoder-decoder model where our generalized graph attention model and ConvKB (Nguyen et al., 2018) play the roles of an encoder and decoder, respectively. Moreover, this method can be extended for learning effective embeddings for textual entailment graphs (Kotlerman et al., 2015), where global learning has proven effective in the past, as shown by Berant et al. (2015) and Berant et al. (2010).…”
Section: Introduction
confidence: 99%
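For readers unfamiliar with the decoder named in that statement, the sketch below illustrates a ConvKB-style scoring function: the head, relation, and tail embeddings are stacked into a three-column matrix, 1x3 convolution filters slide over its rows, and the concatenated feature maps are reduced to a single plausibility score. This is a minimal PyTorch illustration under assumed dimensions and filter counts, not the cited system's implementation.

```python
import torch
import torch.nn as nn

class ConvKBScorer(nn.Module):
    """ConvKB-style triple scorer (illustrative sketch, assumed dimensions).

    The head, relation and tail embeddings form an (embed_dim x 3) matrix;
    1x3 convolution filters slide over its rows, and the concatenated
    feature maps are mapped to a single plausibility score.
    """

    def __init__(self, embed_dim, num_filters=64):
        super().__init__()
        self.conv = nn.Conv2d(1, num_filters, kernel_size=(1, 3))
        self.fc = nn.Linear(embed_dim * num_filters, 1, bias=False)

    def forward(self, h, r, t):
        # h, r, t: (batch, embed_dim) -> (batch, 1, embed_dim, 3)
        x = torch.stack([h, r, t], dim=-1).unsqueeze(1)
        x = torch.relu(self.conv(x))        # (batch, num_filters, embed_dim, 1)
        x = x.flatten(start_dim=1)          # concatenate all feature maps
        return self.fc(x).squeeze(-1)       # one score per triple


scorer = ConvKBScorer(embed_dim=100)
h, r, t = (torch.randn(4, 100) for _ in range(3))
print(scorer(h, r, t).shape)  # torch.Size([4])
```

In the entailment-graph extension the citing authors suggest, the head and tail slots would presumably hold node (statement) embeddings produced by the graph-attention encoder, with the relation slot encoding entailment; that mapping is a guess, not something this page specifies.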