2006
DOI: 10.1007/11878773_36

Overview of the CLEF 2005 Multilingual Question Answering Track

Abstract: The fifth QA campaign at CLEF [1], which had its first edition in 2003, offered not only a main task but also an Answer Validation Exercise (AVE) [2], which continued the previous year's pilot, and a new pilot: Question Answering on Speech Transcripts (QAST) [3, 15]. The main task was characterized by its focus on cross-linguality, while covering as many European languages as possible. As a novelty, some QA pairs were grouped in clusters, and every cluster was characterized by a topic (not given to participants). The questions …

Cited by 53 publications (24 citation statements). References 9 publications.
“…Several QA reports [6,14] indicate that translation errors cause a significant drop in accuracy for cross-language tasks with respect to the monolingual exercises. Based on this fact, we evaluated the impact of our methods by measuring the drop in answer-extraction accuracy caused by the question translation, relative to the Spanish monolingual QA task.…”
Section: Results
confidence: 99%
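
The evaluation described in this excerpt boils down to a relative accuracy comparison. A minimal sketch in Python (the figures below are illustrative, not taken from the cited reports):

```python
def relative_accuracy_drop(mono_acc: float, cross_acc: float) -> float:
    """Relative fall in answer-extraction accuracy of a cross-language
    run with respect to the monolingual (here: Spanish) baseline."""
    return (mono_acc - cross_acc) / mono_acc

# Illustrative figures only: a monolingual run at 40% accuracy
# versus a cross-language run at 30% accuracy.
print(f"{relative_accuracy_drop(0.40, 0.30):.0%}")  # -> 25%
```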
“…The motivation for this experiment comes from the study of the QA task at CLEF 2005, where 75% of the questions in Spanish were factoids [7]. The responses to factoid questions contain entities (e.g.…”
Section: Motivation
confidence: 99%
“…Participant systems must return YES or NO for each hypothesis-text pair to indicate whether the text entails the hypothesis (i.e. whether the answer is correct according to the text). The paper shows a participant system that only uses information about entities (numeric expressions, temporal expressions and named entities) in order to study the importance of entities in answer validation. The motivation for this experiment comes from the study of the QA task at CLEF 2005, where 75% of the questions in Spanish were factoids [7]. The responses to factoid questions contain entities (e.g.…”
confidence: 99%
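
The participant system in this excerpt is characterized only by its use of entity information (numeric expressions, temporal expressions and named entities). A minimal sketch of an entity-coverage heuristic of this kind, assuming a crude regex-based recognizer in place of a real one (all names below are ours, not the participant system's actual code):

```python
import re

def extract_entities(text: str) -> set[str]:
    """Crude stand-in for a recognizer of numeric expressions, temporal
    expressions and named entities: capitalized words and numbers."""
    return set(re.findall(r"\b(?:[A-Z][a-z]+|\d[\d.,:/-]*)\b", text))

def validate(hypothesis: str, text: str) -> str:
    """Return YES if every entity in the hypothesis is covered by the
    supporting text, NO otherwise (entity-coverage heuristic)."""
    missing = extract_entities(hypothesis) - extract_entities(text)
    return "NO" if missing else "YES"

print(validate("Rome hosted the 1960 Olympics",
               "The 1960 Summer Olympics were held in Rome, Italy."))  # YES
```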
“…We used the Confidence Weighted Score (CWS) to select the answer to be returned by the system, relying on the fact that in 2005 our system was the one returning the best values for CWS [7]. For each candidate answer we calculated the CWS by dividing the number of strategies giving the same answer by the total number of strategies (5), multiplied by other measures depending on the number of returned passages (n_p/N, where N is the maximum number of passages that can be returned by the PR module and n_p is the number of passages actually returned) and the averaged passage weight.…”
Section: Answer Extraction
confidence: 99%
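
Read literally, the score in this excerpt is the product of three factors: the agreement ratio among the five answer-extraction strategies, the passage-coverage ratio n_p/N, and the averaged passage weight. A minimal sketch of that computation (variable names and figures are ours, for illustration):

```python
def confidence_weighted_score(votes: int, n_strategies: int,
                              n_passages: int, max_passages: int,
                              avg_passage_weight: float) -> float:
    """CWS as described in the excerpt: the fraction of strategies
    agreeing on the answer, scaled by passage coverage (n_p / N)
    and by the averaged weight of the returned passages."""
    agreement = votes / n_strategies      # strategies giving this answer
    coverage = n_passages / max_passages  # n_p / N
    return agreement * coverage * avg_passage_weight

# Illustrative values: 3 of 5 strategies agree, the PR module returned
# 8 of at most 10 passages, with an averaged passage weight of 0.7.
print(round(confidence_weighted_score(3, 5, 8, 10, 0.7), 3))  # 0.336
```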