Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval 2008
DOI: 10.1145/1390334.1390393
Learning query intent from regularized click graphs

Abstract: This work presents the use of click graphs in improving query intent classifiers, which are critical if vertical search and general-purpose search services are to be offered in a unified user interface. Previous works on query classification have primarily focused on improving the feature representation of queries, e.g., by augmenting queries with search engine results. In this work, we investigate a completely orthogonal approach: instead of enriching the feature representation, we aim at drastically increasing the a…
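The abstract's central idea, inferring query intent from click graphs, can be illustrated with a toy sketch: a few seed queries carry intent labels, and labels spread to unlabeled queries through co-clicked URLs in a bipartite query–URL graph. The data, label values, and the naive averaging rule below are illustrative assumptions, not the paper's actual regularization method.

```python
from collections import defaultdict

# Toy bipartite click graph: (query, URL) -> click count. Seed labels mark
# intent (1.0 = job intent, 0.0 = no job intent); unlabeled queries start
# at a neutral 0.5. All queries, URLs, and counts are made up.
clicks = {
    ("software engineer jobs", "jobs.example.com"): 12,
    ("hiring developers", "jobs.example.com"): 5,
    ("hiring developers", "careers.example.org"): 3,
    ("python tutorial", "docs.example.net"): 20,
}
seed_labels = {"software engineer jobs": 1.0, "python tutorial": 0.0}

def propagate(clicks, seed_labels, iters=10):
    """Naive propagation: alternate click-weighted averaging
    query -> URL -> query, clamping seed queries each round."""
    q_label = dict(seed_labels)
    q_edges, u_edges = defaultdict(list), defaultdict(list)
    for (q, u), w in clicks.items():
        q_edges[q].append((u, w))
        u_edges[u].append((q, w))
    for _ in range(iters):
        # URL label = click-weighted average of its queries' labels
        u_label = {}
        for u, qs in u_edges.items():
            num = sum(w * q_label.get(q, 0.5) for q, w in qs)
            u_label[u] = num / sum(w for _, w in qs)
        # Query label = click-weighted average of its URLs' labels
        for q, us in q_edges.items():
            if q in seed_labels:  # clamp labeled seeds
                continue
            num = sum(w * u_label[u] for u, w in us)
            q_label[q] = num / sum(w for _, w in us)
    return q_label

labels = propagate(clicks, seed_labels)
```

Here "hiring developers" shares a clicked URL with the labeled job query, so its inferred label drifts toward 1.0, while "python tutorial" stays clamped at 0.0.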

Cited by 264 publications (172 citation statements). References 13 publications.
“…Existing methods for vertical selection and presentation use machine learning to combine different types of predictive evidence: query-string features [2,4,5,19,23], vertical query-log features [2,4,5,11,23], vertical content features [2,4,5,11], and implicit feedback features from previous presentations of the vertical [11,23]. Model tuning and evaluation is typically done with respect to editorial relevance judgements [2,3,4,5,19] or, in a production environment, with respect to user-generated clicks and skips [11,23]. In the first case, users do not actively participate in the evaluation.…”
Section: Related Work 2.1 Aggregated Search (mentioning)
confidence: 99%
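The evidence combination this statement describes can be sketched as a simple linear model over the four feature groups, squashed into a probability of vertical relevance. All feature names, weights, and the bias below are hypothetical assumptions; real systems learn these parameters from editorial judgements or click data, as the passage notes.

```python
import math

# Hypothetical features for one query, mirroring the evidence types listed:
# query-string, vertical query-log, vertical content, and implicit feedback.
features = {
    "query_contains_news_term": 1.0,   # query-string evidence
    "vertical_log_click_rate": 0.32,   # vertical query-log evidence
    "vertical_doc_match_score": 0.58,  # vertical content evidence
    "past_presentation_ctr": 0.21,     # implicit-feedback evidence
}
weights = {
    "query_contains_news_term": 1.8,
    "vertical_log_click_rate": 2.5,
    "vertical_doc_match_score": 1.2,
    "past_presentation_ctr": 3.0,
}
bias = -2.0

def vertical_selection_score(features, weights, bias):
    """Linear model over heterogeneous evidence, squashed to a probability."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

p = vertical_selection_score(features, weights, bias)  # P(vertical relevant | query)
```

Thresholding this probability decides whether to present the vertical for the query.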
“…Most published research in aggregated search has focused on automatic methods for predicting which verticals to present (vertical selection) [4,5,11,19] and where in the Web results to present them (vertical presentation) [2,3,23]. Evaluation of these systems has typically been conducted by using editorial vertical relevance judgements as the gold standard [2,3,4,5,19], or by using user-generated clicks on vertical results as a proxy for relevance [11,23].…”
Section: Introduction (mentioning)
confidence: 99%
“…Li et al [10] classified queries into two classes of vertical intent (product and job) and evaluated based on precision and recall for each class independently. Diaz [5] focused on predicting when to display news results (always displayed above Web results) and evaluated in terms of correctly predicted clicks and skips.…”
Section: Related Work (mentioning)
confidence: 99%
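Per-class evaluation as described for Li et al., computing precision and recall independently for each intent class, can be sketched as follows. The gold and predicted labels are toy data, not the paper's:

```python
def precision_recall(gold, pred, cls):
    """Precision and recall for one class, treating it as the positive label."""
    tp = sum(1 for g, p in zip(gold, pred) if g == cls and p == cls)
    fp = sum(1 for g, p in zip(gold, pred) if g != cls and p == cls)
    fn = sum(1 for g, p in zip(gold, pred) if g == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy labels over five queries with product/job/other intents.
gold = ["product", "job", "job", "other", "product"]
pred = ["product", "job", "other", "job", "product"]

scores = {cls: precision_recall(gold, pred, cls) for cls in ("product", "job")}
```

Evaluating each class independently this way means a classifier can score well on one vertical intent while performing poorly on the other.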
“…Most prior work focuses on vertical selection-the task of predicting which verticals (if any) are relevant to a query [5,10,1,6,2]. The second task of deciding where in the Web results to embed the vertical results has received less attention.…”
Section: Introduction (mentioning)
confidence: 99%