2004
DOI: 10.1007/978-3-540-30222-3_46

The Multiple Language Question Answering Track at CLEF 2003

Abstract: This paper reports on the pilot question answering track that was carried out within the CLEF initiative this year. The track was divided into monolingual and bilingual tasks: monolingual systems were evaluated within the frame of three non-English European languages, Dutch, Italian and Spanish, while in the cross-language tasks an English document collection constituted the target corpus for Italian, Spanish, Dutch, French and German queries. Participants were given 200 questions for each task, and were allowed…

Cited by 48 publications (31 citation statements)
References 5 publications
“…It is the corpus that is used for Dutch in the QA task of CLEF. We selected two question types that are frequent in the CLEF QA question set (Magnini et al, 2003): capital-of and soccer-player-club. These are binary relations, respectively between a location (e.g.…”
Section: Methods
confidence: 99%
“…Thus, for a specific query, RR is the reciprocal of the rank where the first correct/relevant result is given. Although this measure is mostly used in search tasks when there is only one correct answer (Kantor and Voorhees, 2000), others used it for assessing the performance of query suggestions (Meij et al, 2009; Albakour et al, 2011) as well as ranking algorithms in particular (Damljanovic et al, 2010) and IR systems (Voorhees, 1999, 2003; Magnini et al, 2003) in general.…”
Section: R-precision (R-prec)
confidence: 99%
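The reciprocal-rank measure quoted in the excerpt above can be sketched in a few lines. This is an illustrative implementation, not code from the paper; the function names and the mean-over-queries aggregation (MRR) are assumptions for the sake of the example.

```python
def reciprocal_rank(ranked_results, relevant):
    """RR for one query: 1/rank of the first correct/relevant result,
    or 0.0 if no relevant result appears in the ranking."""
    for rank, item in enumerate(ranked_results, start=1):
        if item in relevant:
            return 1.0 / rank
    return 0.0


def mean_reciprocal_rank(runs):
    """MRR: average RR over a list of (ranked_results, relevant_set) pairs."""
    return sum(reciprocal_rank(r, rel) for r, rel in runs) / len(runs)


# First correct answer at rank 2, so RR = 1/2.
print(reciprocal_rank(["a", "b", "c"], {"b"}))  # 0.5
```

Because only the first relevant hit contributes, RR is well suited to tasks with a single correct answer, which is why (as the excerpt notes) it is the standard measure in QA evaluations such as the TREC and CLEF tracks.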
“…Research in QA has increased as a result of the inclusion of QA evaluations as part of the Text Retrieval Conference (TREC) in 1999, and recently [5] in Multilingual Question Answering as part of the Cross Language Evaluation Forum (CLEF).…”
Section: Introduction
confidence: 99%