2008
DOI: 10.1007/978-3-540-85760-0_27
Overview of the CLEF 2007 Multilingual Question Answering Track

Abstract: The general aim of the third CLEF Multilingual Question Answering Track was to set up a common and replicable evaluation framework to test both monolingual and cross-language Question Answering (QA) systems that process queries and documents in several European languages. Nine target languages and ten source languages were exploited to enact 8 monolingual and 73 cross-language tasks. Twenty-four groups participated in the exercise. Overall results showed a general increase in performance in comparison to last y…

Cited by 37 publications (22 citation statements)
References 7 publications
“…The questions used to test the system were the 200 questions used at QA@CLEF 2007 for the PT-PT track (questions and answers in Portuguese) [5]. We are aware that it is questionable to use the same set of questions in the error analysis and in a subsequent evaluation, but creating a new set of questions is a very time-consuming task.…”
Section: Evaluation and Discussion of the Results
confidence: 99%
“…In this edition, questions were grouped by topic [4]. The first question of a topic was self-contained, in the sense that no information outside the question is needed to answer it.…”
Section: Test Collections
confidence: 99%
“…The run we submitted for the Romanian to English cross-lingual QA task achieved an overall accuracy of 14%, the best score achieved among systems with English as target language [6]. An in-depth analysis of the results at different stages in the QA process has revealed a number of future system improvement directions.…”
Section: Discussion
confidence: 99%
“…This year, the QA@CLEF main task distinguishes among four question types: factoid, definition, list and temporally restricted questions [6]. As temporal restrictions can constrain any question type, we first detect whether the question has the type factoid, definition or list, and then search for temporal restrictions.…”
Section: D) Inferring the Question Type
confidence: 99%
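The pipeline described in that citation statement — first assign a base question type (factoid, definition, or list), then independently check for a temporal restriction — can be sketched with simple surface patterns. This is a minimal illustrative sketch, not the cited system's actual rules; all patterns and function names here are assumptions for demonstration.

```python
import re

# Hypothetical surface patterns; real QA@CLEF systems used far richer cues.
DEFINITION_PAT = re.compile(r"^(what|who)\s+(is|are|was|were)\b", re.I)
LIST_PAT = re.compile(
    r"^(list|name|which|what)\b.*\b(countries|members|languages|names)\b", re.I
)
# A temporal restriction can constrain any base type, so it is checked separately.
TEMPORAL_PAT = re.compile(
    r"\b(in|since|before|after|during)\s+\d{4}\b", re.I
)

def classify(question: str) -> tuple[str, bool]:
    """Return (base_type, temporally_restricted) for a question string."""
    if DEFINITION_PAT.search(question):
        base = "definition"
    elif LIST_PAT.search(question):
        base = "list"
    else:
        base = "factoid"
    return base, bool(TEMPORAL_PAT.search(question))
```

For example, `classify("Name the countries that joined the EU in 2004.")` yields a list question with a temporal restriction, reflecting the two-stage order of the description: base type first, temporal check second.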