Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval 2013
DOI: 10.1145/2484028.2484150

Is relevance hard work?

Abstract: The judging of relevance has been a subject of study in information retrieval for a long time, especially in the creation of relevance judgments for test collections. While the criteria by which assessors judge relevance have been intensively studied, little work has investigated the process individual assessors go through to judge the relevance of a document. In this paper, we focus on the process by which relevance is judged and, in particular, the degree of effort a user must expend to judge relevance. By b…

Cited by 25 publications (7 citation statements)
References 17 publications
“…• Document/Web-page/Landing-page: Finally, the document or web page was considered an independent variable in eight studies (effort: 𝑁 = 6; cognitive load: 𝑁 = 3). In the majority of studies, the level of document relevance was examined in relation to user effort [54,59,87,91,130]. The remaining studies, which measured cognitive load, adapted the visual complexity of documents (e.g., the number of elements included on the page) [23,133].…”
Section: Eye Tracking (mentioning, confidence: 99%)
“…The review of ISR studies highlighted that self-designed questionnaires were used to measure effort in three studies [16,44,48], and in two other studies to measure mental workload [56] and cost [43]. While self-designed questionnaires were popular for measuring effort, the NASA Task Load Index (NASA-TLX) was also identified as a dominant scale (N=8) in the measurement of effort [22,54] and load [2,6,14,30,46,62]. The measure consists of six component scales (physical demand; mental demand; temporal demand; performance; frustration; and effort), which are weighted according to the context using a separate instrument [23].…”
Section: RQ3: CEL Measurement in ISR (mentioning, confidence: 99%)
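
The NASA-TLX scoring described in the statement above is simple to express in code. Below is a minimal Python sketch of the standard weighted-score computation, assuming the usual 0–100 subscale ratings and weights derived from the 15 pairwise comparisons; the variable names and example values are illustrative, not taken from the cited studies.

# Minimal sketch of the standard NASA-TLX weighted workload score.
# Assumes each of the six subscales is rated 0-100 and that the weights
# come from the usual 15 pairwise comparisons (each weight 0-5, sum = 15).
SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def nasa_tlx_weighted(ratings, weights):
    """Overall workload = sum(rating * weight) / 15."""
    if sum(weights.values()) != 15:
        raise ValueError("pairwise-comparison weights must sum to 15")
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

# Hypothetical example values:
ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 30}
weights = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
print(nasa_tlx_weighted(ratings, weights))  # ~55.67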
“…For example, "number of queries issued", "number of documents opened", and "number of pages viewed" have been implicated both as a measure of cost [33,43] and of effort [8,21,27]. Likewise, "time-on-task" was used as a metric for all CEL concepts: cost [43,62]; effort [8,48,54]; and load [13,46], implying that time-on-task is an adequate indicator of all three concepts. Similar commonalities were observed in eye-tracking methods, where "number of fixations" was used as a metric for both effort [18] and cost [62], and "duration of fixations" as a metric for both effort [18,21] and load [55].…”
Section: Main Issues, 5.1 Ambiguity Between Concepts and Measures (mentioning, confidence: 99%)
“…Similar to their previous work on relevance assessment effort evaluation for text retrieval [209], Halvey & Villa (2014) [71] conducted user experiments to investigate the impact on judgment effort and accuracy for image retrieval, considering topic difficulty, visual-semantic topic characteristics, and image size. In summary, the experiments showed that image size had no impact on judgment effort, but larger images took more time for relevance assessment.…”
Section: User Aspects (mentioning, confidence: 99%)
“…These findings suggest, for instance, that retrieval systems could dynamically adjust the number and size of the images presented, considering the underlying difficulty and semantic characteristics of the user query. In a different direction, the outcomes from [71,209] could have a positive impact on user behavior modeling, such as in [15,16], by simulating and assessing different user patterns across topic difficulties and semantics, which should also be incorporated into effectiveness evaluation measures.…”
Section: User Aspects (mentioning, confidence: 99%)