2008
DOI: 10.1007/978-3-540-85760-0_50
Using Recognizing Textual Entailment as a Core Engine for Answer Validation

Abstract: This paper describes our approach to answer validation, which is centered on a Recognizing Textual Entailment (RTE) core engine. We first combine the question and the answer into a Hypothesis (H) and view the supporting document as the Text (T); we then use our RTE system to check whether the entailment relation holds between them. Our system was evaluated on the Answer Validation Exercise (AVE) task and achieved f-measures of 0.46 and 0.55 for two submission runs, both of which outperformed others' results for the Engli…
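The validation pipeline in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' system: the wh-word substitution used to build H and the word-overlap entailment check (with its `threshold` parameter) are simplifying assumptions, whereas a real RTE engine would use alignment, syntactic features, or a trained classifier.

```python
def build_hypothesis(question: str, answer: str) -> str:
    """Combine a question and a candidate answer into a declarative
    hypothesis H (naive wh-word substitution; an assumed heuristic)."""
    for wh in ("who", "what", "where", "when", "which"):
        if question.lower().startswith(wh):
            # Replace the wh-word with the answer and drop any '?'.
            return answer + question[len(wh):].rstrip().rstrip("?")
    return question.rstrip().rstrip("?") + " " + answer

def entails(text: str, hypothesis: str, threshold: float = 0.6) -> bool:
    """Toy RTE check: does the text T lexically cover the hypothesis H?
    Stands in for the RTE core engine described in the paper."""
    t_words = set(text.lower().split())
    h_words = set(hypothesis.lower().split())
    overlap = len(h_words & t_words) / max(len(h_words), 1)
    return overlap >= threshold

def validate_answer(question: str, answer: str, document: str) -> bool:
    """Answer validation: accept the answer iff T entails H."""
    h = build_hypothesis(question, answer)
    return entails(document, h)

doc = "Bonn was the capital of West Germany from 1949 until 1990."
print(validate_answer("What was the capital of West Germany", "Bonn", doc))
```

The key design point carried over from the paper is the reduction: answer validation is not solved directly, but recast as a T/H entailment decision so that any RTE engine can be plugged in.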

Cited by 6 publications (2 citation statements)
References 5 publications
“…The task of identifying answer quality has been studied by many researchers in the field of Question Answering. Many methods have been proposed: web redundancy information (Magnini et al, 2002), non-textual features (Jeon et al, 2006), textual entailment (Wang and Neumann, 2007), syntactic features (Grundström and Nugues, 2014). However, most of these works used independent dataset and evaluation metrics; thus it is difficult to compare the results of these methods.…”
Section: Introduction
confidence: 99%
“…Building up the broader concept of textual entailment [74][75][76], earlier work studying Bar Exams treated 'the relationship between the question and the multiple-choice answers as a form of textual entailment' [77] where the ability to identify wrong answers (non-entailment) is differentiated from the ability to identify the correct answer (entailment). Intuitively, this is related to the classic test taking strategy of eliminating clearly erroneous answers.…”
Section: (iii) Non-entailment MBE Results
confidence: 99%