2006
DOI: 10.1007/11878773_30
Overview of the CLEF 2005 Interactive Track

Cited by 17 publications (6 citation statements)
References 7 publications
“…As the quality of machine translation improved, the focus of CLIR user studies expanded from merely enabling users to find documents (e.g., for subsequent human translation) to also supporting information use (e.g., by translating the full text). The question answering task in the interactive track of the Cross-Language Evaluation Forum (iCLEF) is an example of that more comprehensive perspective [8]. The studies reported in this paper continue to broaden the perspective by adding a focus on complex tasks with live multimedia content.…”
Section: User-Centered Evaluation of CLIR
confidence: 89%
“…For example, interactive CLEF (Gonzalo and Oard, 2004; Gonzalo et al., 2006) used a minimum of eight subjects, while the Interactive Track at TREC-9 and TREC-6 used 16 and 20 searchers, respectively (Hersh and Over, 1999; Over, 1997). The number of subjects directly influences the amount of resources required in terms of cost, time and effort.…”
Section: Recruitment of Subjects
confidence: 99%
“…Researchers used one or more of these types of tasks depending on the research goals/questions. Search tasks/topics were used at the TREC and CLEF interactive tracks (Over, 1997; Gonzalo et al., 2006), while Jose et al. (1998), Borlund (2000), White et al. (2007) and Petrelli (2008) adopted Borlund's simulated work task. In the same context, another approach to achieving realism and to motivating and engaging the recruited subjects in the evaluation is to let them choose the search tasks from a pool of available tasks, for instance in different domains (Spink, 2002; Su, 2003; White et al., 2007; Joho et al., 2008).…”
Section: Tasks and Topics
confidence: 99%
“…With respect to classical measures, we have used the "accuracy" measure (Gonzalo et al., 2006): the fraction of questions for which the user obtained the information sought within a time limit of three minutes (when a user did not find the correct answer within the three minutes allowed per question, the question was considered unresolved, following the criteria used in iCLEF 2004 and 2005). Here only the correctness of the query results has been taken into account (anaphora and ellipsis resolution has not been evaluated with this measure).…”
Section: Series of Context Questions
confidence: 99%
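
To make the cited "accuracy" measure concrete, a minimal formalization consistent with the description in the statement above is given here; the symbol Q (the set of evaluation questions) is our notation, not taken from the cited papers:

\[
\mathrm{accuracy} \;=\; \frac{\bigl|\{\, q \in Q : \text{the user found a correct answer to } q \text{ within 3 minutes} \,\}\bigr|}{|Q|}
\]

Questions abandoned or still unanswered at the three-minute cutoff count only in the denominator, matching the iCLEF 2004/2005 criterion described above.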