Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval, 2014
DOI: 10.1145/2600428.2609588

Characterizing multi-click search behavior and the risks and opportunities of changing results during use

Abstract: Although searchers often click on more than one result following a query, little is known about how they interact with search results after their first click. Using large scale query log analysis, we characterize what people do when they return to a result page after having visited an initial result. We find that the initial click provides insight into the searcher's subsequent behavior, with short initial dwell times suggesting more future interaction and later clicks occurring close in rank to the first. Alt…

Cited by 8 publications (1 citation statement)
References 29 publications (36 reference statements)
“…The related finding clearly argues against the independence hypothesis. Recent studies on anchoring and adjustment in relevance estimation [88,126] show that human annotators are likely to assign different relevance labels to a document depending on the quality of the last document they judged for the same query. Regarding consistency, it is well known today that TREC- and CLEF-style experiments are generally based on expert assessments seen as objective, while real-life IR settings are based on real users for whom assessments are seen as subjective [19,60], and several contextual factors affect users when judging document relevance.…”
Section: Relevance
Confidence: 99%