2007
DOI: 10.1007/978-3-540-74999-8_31

Overview of the CLEF 2006 Multilingual Question Answering Track

Abstract: The general aim of the third CLEF Multilingual Question Answering Track was to set up a common and replicable evaluation framework to test both monolingual and cross-language Question Answering (QA) systems that process queries and documents in several European languages. Nine target languages and ten source languages were exploited to enact 8 monolingual and 73 cross-language tasks. Twenty-four groups participated in the exercise. Overall results showed a general increase in performance in comparison to last y…

Cited by 43 publications
(33 citation statements)
References 7 publications
“…Last year we promoted an architecture based on Textual Entailment trying to bring research groups working on machine learning to Question Answering. Thus, we provided the hypothesis already built from the questions and answers [6] (see Figure 2). Then, the exercise was similar to the RTE Challenges [1] [2] [3], where systems must decide if there is entailment or not between the supporting text and the hypothesis.…”
Section: Question Answering Track
confidence: 99%
“…Development collections were obtained from the QA@CLEF 2006 [6] main track questions and answers. Table 1 shows the number of questions and answers for each language together with the percentage that these answers represent over the number of answers initially available, and the number of answers with VALIDATED and REJECTED values.…”
Section: Development Collections
confidence: 99%
“…Multilingual tasks such as IR and Question Answering (QA) have been recognized as an important issue in on-line information access, as was revealed in the Cross-Language Evaluation Forum (CLEF) 2006 [6].…”
Section: Introduction
confidence: 99%
“…Last year, a new Romanian-to-English (RO-EN) cross-lingual QA task was organised for the first time within the context of the CLEF campaign [10]; it consisted of retrieving answers to Romanian questions from a collection of English documents. This year's task [6] was organised similarly, except that all questions were clustered into classes related to the same topic, some of which even contain anaphoric references to other questions from the same topic class, or to their answers.…”
Section: Introduction
confidence: 99%