Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval 1998
DOI: 10.1145/290941.291009

Efficient construction of large test collections

Cited by 159 publications (102 citation statements).
References 8 publications.
“…Several alternative approaches to the original pooling method have been suggested in order to judge more relevant documents at the same pool depth, e.g. Zobel [21] and Cormack et al [7].…”
Section: Related Work
confidence: 99%
“…Cormack et al [10] proposed two techniques: iterative searching and judging (ISJ) and move-to-front pooling. With ISJ, relevance assessors perform multiple searches while judging documents for relevance, in order to try and recover as many relevant documents as possible.…”
Section: Related Work
confidence: 99%
“…Several methods have been shown to locate most relevant documents or to estimate conventional measures using a fraction of the currently judged documents; an assessment regime could apply these techniques within the current pooling "budget" and explore a much deeper pool. One such method that we have examined is move-to-front pooling [10]. If we judge the number of documents that would have been judged in a depth-50 pool, but using the move-to-front approach, we would recover 79% of the relevant documents found in the official pool while only judging 48% of the officially-judged nonrelevant documents.…”
Section: Toward Large Reusable Test Collections
confidence: 99%
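The statement above describes move-to-front pooling: runs that contribute relevant documents are prioritised for further judging, so a fixed judging budget recovers more relevant documents than a fixed-depth pool. A simplified sketch of that idea (not the authors' original algorithm, which uses a more refined priority scheme; `runs`, `judge`, and `budget` are illustrative names) might look like:

```python
# Simplified move-to-front pooling sketch (assumed interfaces, not the
# original Cormack et al. implementation). Each run is a ranked list of
# document IDs; judge(doc) is an oracle returning True for relevant docs.
# Runs that yield relevant documents stay at the front of the queue, so
# productive runs are sampled more heavily within the judging budget.
from collections import deque

def move_to_front_pool(runs, judge, budget):
    queue = deque(range(len(runs)))   # run indices; front = highest priority
    cursors = [0] * len(runs)         # next unjudged rank within each run
    judged, relevant = set(), []
    while budget > 0 and queue:
        run_id = queue.popleft()
        run = runs[run_id]
        # Skip documents already judged via another run.
        while cursors[run_id] < len(run) and run[cursors[run_id]] in judged:
            cursors[run_id] += 1
        if cursors[run_id] >= len(run):
            continue                  # run exhausted; drop it from the queue
        doc = run[cursors[run_id]]
        cursors[run_id] += 1
        judged.add(doc)
        budget -= 1
        if judge(doc):
            relevant.append(doc)
            queue.appendleft(run_id)  # reward: keep sampling this run
        else:
            queue.append(run_id)      # demote to the back of the queue
    return relevant, judged
```

Under this scheme a run stops being sampled as soon as it returns a non-relevant document, which is the intuition behind judging fewer non-relevant documents for the same number of relevant ones found.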
“…The effects of incomplete relevance assessments, imperfect judgements, potential biases in the relevance pool and the effects of assessor domain expertise in relation to the topic have been investigated in various studies (Cuadra, 1967; Zobel, 1998; Buckley and Voorhees, 2004; Yilmaz and Aslam, 2006; Büttcher et al, 2007; Bailey et al, 2008; Kinney et al, 2008). Approaches to ensure completeness of relevance assessments include using the results from searches conducted manually to generate the pools, and supplementing pools with relevant documents found by manually searching the document collection with an IR system, known as Interactive Search and Judge or ISJ (Cormack et al, 1998). Generating relevance assessments is often highly time-consuming and labour-intensive. This often leads to a bottleneck in the creation of test collections.…”
Section: Relevance Assessments
confidence: 99%