Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) 2016
DOI: 10.18653/v1/p16-2041

Annotating Relation Inference in Context via Question Answering

Abstract: We present a new annotation method for collecting data on relation inference in context. We convert the inference task to one of simple factoid question answering, allowing us to easily scale up to 16,000 high-quality examples. Our method corrects a major bias in previous evaluations, making our dataset much more realistic.
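As a rough illustration of the QA-style annotation described in the abstract, the sketch below shows how a single annotation instance might be represented. The class, field names, and the example judgment are hypothetical illustrations, not the dataset's released schema.

```python
from dataclasses import dataclass

# Hypothetical representation of one QA-style annotation instance: annotators
# judge whether a candidate sentence (generated from a *different* relation)
# answers a simple factoid question built from the target relation. A positive
# judgment implies that the answer's relation entails the question's relation.
# Field names and the example are illustrative only.

@dataclass
class QAInferenceExample:
    question: str          # factoid question built from the hypothesis relation
    candidate_answer: str  # candidate sentence built from the premise relation
    is_valid_answer: bool  # crowd judgment: does the sentence answer the question?

example = QAInferenceExample(
    question="What relieves headaches?",
    candidate_answer="Aspirin reduces headaches.",
    is_valid_answer=True,  # "reduce" judged to entail "relieve" in this context
)

# A positive judgment yields the directional inference rule:
#   (X, reduce, Y)  =>  (X, relieve, Y)
print(example)
```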

Cited by 30 publications (58 citation statements)
References 18 publications (20 reference statements)
“…Chen et al (2016) revealed problems with the CNN/DailyMail dataset (Hermann et al, 2015) which resulted from applying automatic tools for annotation. Levy and Dagan (2016) showed that a relation inference benchmark (Zeichner et al, 2012) is severely biased towards distributional methods, since it was created using DIRT (Lin and Pantel, 2001). Schwartz et al (2017) and Cai et al (2017) showed that certain biases are prevalent in the ROC stories cloze task (Mostafazadeh et al, 2016), which allow models trained on the endings alone, and not the story prefix, to yield state-of-the-art results.…”
Section: Discussion
confidence: 99%
“…Our work is more closely related to the dataset by Levy and Dagan (2016), who frame relation entailment as the task of judging the appropriateness of candidate answers. Their hypothesis is that an answer is only appropriate if it entails the predicate of the question.…”
Section: Meta Rules and Implicative Verbs
confidence: 99%
“…Lexical ontologies, such as WordNet (as used by Levy and Dagan, 2016) likewise lack this connection between relations and types. Moreover, relations between real-world entities are more often events than relations between common nouns.…”
Section: Meta Rules and Implicative Verbs
confidence: 99%
“…Entailment Detection Evaluation. For the entailment detection task, we evaluate on Levy/Holt's dataset (Levy and Dagan, 2016). Each example in the dataset contains a pair of triples where the entities are the same (possibly in the reverse order), but the relations are different.…”
Section: Evaluation Datasets
confidence: 99%
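To make that data format concrete, here is a minimal sketch of how one such triple pair could be represented; the class and field names are assumptions for illustration, not the cited evaluation's actual code.

```python
from dataclasses import dataclass

# Sketch of one entailment-detection example: two triples over the same entity
# pair (possibly reversed) but with different relations; the task is to decide
# whether the premise relation entails the hypothesis relation.
# Class and field names are assumptions for illustration.

@dataclass(frozen=True)
class Triple:
    subj: str
    rel: str
    obj: str

@dataclass(frozen=True)
class EntailmentExample:
    premise: Triple
    hypothesis: Triple
    label: bool  # True if the premise is judged to entail the hypothesis

ex = EntailmentExample(
    premise=Triple("aspirin", "reduce", "headaches"),
    hypothesis=Triple("aspirin", "relieve", "headaches"),
    label=True,
)

# The entity pair is shared (order may differ across examples), so a system
# only has to score the relation-level inference rel(premise) => rel(hypothesis).
assert {ex.premise.subj, ex.premise.obj} == {ex.hypothesis.subj, ex.hypothesis.obj}
```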
“…In order to have a fair comparison with Berant's ILP method, we first test the set of rule-based constraints proposed by Berant et al (2011). We also apply the lemma-baseline heuristic of Levy and Dagan (2016) before testing the methods. Figure 3 shows the precision-recall curves of all the methods in both the local (A) and global (B) settings.…”
Section: Entailment Scores Based On Link Prediction
confidence: 99%
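The snippet does not spell out the lemma-baseline heuristic itself; the sketch below shows one common containment-style variant, purely as an assumed illustration (the lemmatizer is stubbed out with lowercasing so the example has no external dependencies).

```python
# Sketch of a lemma-overlap baseline of the kind referenced above. The exact
# heuristic used by the cited work is not given in the snippet; the containment
# rule below is an assumption for illustration.

STOPWORDS = {"be", "is", "are", "was", "were", "a", "an", "the", "of", "to", "in"}

def content_lemmas(relation: str) -> set[str]:
    """Very rough lemma extraction: lowercase tokens minus stopwords."""
    return {tok.lower() for tok in relation.split()} - STOPWORDS

def lemma_baseline(premise_rel: str, hypothesis_rel: str) -> bool:
    """Predict entailment if every content lemma of the hypothesis relation
    also appears in the premise relation (assumed containment rule)."""
    hyp = content_lemmas(hypothesis_rel)
    return bool(hyp) and hyp <= content_lemmas(premise_rel)

print(lemma_baseline("be capital of", "be capital city of"))  # False
print(lemma_baseline("be capital city of", "be capital of"))  # True
```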