Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval 2006
DOI: 10.1145/1148170.1148238

What makes a query difficult?

Abstract: This work tries to answer the question of what makes a query difficult. It addresses a novel model that captures the main components of a topic and the relationship between those components and topic difficulty. The three components of a topic are the textual expression describing the information need (the query or queries), the set of documents relevant to the topic (the Qrels), and the entire collection of documents. We show experimentally that topic difficulty strongly depends on the distances between these…
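To illustrate the kind of distance the abstract refers to, the following minimal sketch computes the Jensen-Shannon divergence between two unigram language models, e.g. one estimated from the Qrels and one from the whole collection. It is not the authors' code; the toy token lists and helper names are purely illustrative.

```python
# Illustrative sketch (not the paper's implementation): Jensen-Shannon
# divergence between two unigram language models built from token lists.
from collections import Counter
from math import log2

def unigram_model(texts):
    """Maximum-likelihood unigram distribution over a list of token lists."""
    counts = Counter(tok for text in texts for tok in text)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence (in bits) between two sparse distributions."""
    vocab = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in vocab}
    def kl(a, b):
        return sum(pa * log2(pa / b[w]) for w, pa in a.items() if pa > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy example: distance between a Qrels-based model and a collection model.
qrels_model = unigram_model([["query", "difficulty", "prediction"]])
collection_model = unigram_model([["retrieval", "query", "evaluation", "difficulty"]])
print(js_divergence(qrels_model, collection_model))
```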

Cited by 174 publications (172 citation statements)
References 10 publications

“…The correlation by the Spearman and Kendall functions yields similar results. While being indicative of a positive trend, and not far from previous results in query performance [5], observed correlation values still leave room for further elaboration and refinements of the proposed predictor and alternative ones, as well as the NG metric itself, in order to match the best findings in query performance [7]. The second experiment consists of measuring final performance improvements when dynamic weights are introduced in a user-based CF.…”
Section: Experimental Work
confidence: 87%
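The statement above evaluates a predictor by rank-correlating its per-topic scores with measured retrieval performance. The sketch below shows that evaluation step with SciPy's Spearman and Kendall functions; the score lists are made-up placeholders, not data from the cited papers.

```python
# Illustrative sketch: rank-correlate a difficulty predictor's per-topic
# scores with per-topic retrieval performance (e.g. average precision).
from scipy.stats import spearmanr, kendalltau

predicted_scores = [0.42, 0.10, 0.77, 0.35, 0.58]   # predictor output per topic (placeholder)
average_precision = [0.51, 0.08, 0.69, 0.30, 0.47]  # measured AP per topic (placeholder)

rho, rho_p = spearmanr(predicted_scores, average_precision)
tau, tau_p = kendalltau(predicted_scores, average_precision)
print(f"Spearman rho={rho:.3f} (p={rho_p:.3f}), Kendall tau={tau:.3f} (p={tau_p:.3f})")
```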
“…The prediction methods documented in the literature use a variety of available data as a basis for prediction, such as a query, its properties with respect to the retrieval space [7], the output of the retrieval system [5], or the output of other systems [3]. According to whether or not the retrieval results are used in the prediction, the methods can be classified into pre- and post-retrieval approaches [10].…”
Section: Introduction
confidence: 99%
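The pre-/post-retrieval distinction described above can be made concrete with two toy predictors: one that uses only the query and collection statistics, and one that also inspects the retrieval output. The specific signals below (average IDF, mean top-k score) are illustrative choices, not the methods of the cited works.

```python
# Minimal sketch of the pre-/post-retrieval distinction.
from math import log

def avg_idf(query_terms, doc_freq, num_docs):
    """Pre-retrieval: average inverse document frequency of the query terms."""
    return sum(log(num_docs / max(doc_freq.get(t, 1), 1)) for t in query_terms) / len(query_terms)

def mean_topk_score(retrieval_scores, k=10):
    """Post-retrieval: mean retrieval score of the top-k returned documents."""
    top = sorted(retrieval_scores, reverse=True)[:k]
    return sum(top) / len(top)

# Toy usage with placeholder statistics and scores.
print(avg_idf(["query", "difficulty"], {"query": 120, "difficulty": 15}, num_docs=10000))
print(mean_topk_score([12.3, 11.8, 9.4, 7.1, 6.5], k=3))
```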
“…[7]) or with the average performance of several models (e.g. [2]). In this study, we use the latter.…”
Section: Experimental Settings
confidence: 99%
“…Carmel et al [15] attempted to answer the question of what makes a query difficult and discussed the problem thoroughly. They found that the distance measured by the Jensen-Shannon divergence between the retrieved document set and the collection significantly correlated with average precision.…”
Section: Category I: Pre-retrieval Predictors
confidence: 99%