2006
DOI: 10.1007/11736790_9

The PASCAL Recognising Textual Entailment Challenge

Cited by 1,117 publications (905 citation statements)
References 7 publications
“…By way of example, research in Textual Entailment is supported by the availability of several annotated datasets. These resources typically consist of sets of T-H pairs manually annotated with a Boolean value to indicate whether or not H is entailed by T. In the current paper, datasets RTE1 [4], RTE-2 [5], and RTE-3 [6] are used to evaluate the impact of coreference resolution on automatic RTE. We have opted to use such publicly available resources in spite of the fact that, as a result, we had to resort to the exploitation of different data for every evaluation/application in contrast to our previous experiments where we benefited from a common corpus.…”
Section: Introduction (mentioning; confidence: 99%)
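The excerpt above describes RTE datasets as collections of T-H (text-hypothesis) pairs, each annotated with a Boolean value indicating whether H is entailed by T. A minimal sketch of how such a pair and a simple evaluation over gold labels might be represented, assuming an illustrative structure (the field names `text`, `hypothesis`, and `entailed`, and the example content, are hypothetical and not the official RTE data schema):

```python
from dataclasses import dataclass
from typing import List


@dataclass
class RTEPair:
    """One text-hypothesis pair with a Boolean entailment judgment."""
    pair_id: str
    text: str        # T: the source passage
    hypothesis: str  # H: the candidate entailed statement
    entailed: bool   # True if H is entailed by T (gold annotation)


def accuracy(pairs: List[RTEPair], predictions: List[bool]) -> float:
    """Fraction of pairs whose predicted label matches the gold annotation."""
    correct = sum(p.entailed == pred for p, pred in zip(pairs, predictions))
    return correct / len(pairs) if pairs else 0.0


# Invented example pair in the spirit of RTE-style data.
example = RTEPair(
    pair_id="1",
    text="The PASCAL RTE challenge released annotated T-H pairs in 2005.",
    hypothesis="Annotated entailment pairs were released for the RTE challenge.",
    entailed=True,
)
print(accuracy([example], [True]))  # 1.0
```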
“…a human annotated corpus), in which implicit information does not appear. In textual entailment (Dagan et al, 2005) the measurement problem is similar, however they address this in evaluations by always making the determination based on pairs of text passages. So we can show improvement in recall by selecting meaningful queries and determining if and how reasoning improves the recall for each query, but measuring recall improvements in the KB itself is more difficult.…”
Section: Techniques For Improving Recall (mentioning; confidence: 99%)
“…While the most prominent forum using textual entailment is the Recognizing Textual Entailment (RTE) challenge (Dagan et al, 2005), the RTE datasets do not test the phenomena in which we are interested. For example, in order to evaluate our system's ability to determine word meaning in context, the RTE pair would have to specifically test word sense confusion by having a word's context in the hypothesis be different from the context of the premise.…”
Section: Introduction (mentioning; confidence: 99%)