2013
DOI: 10.1108/00220411311295324
Search result list evaluation versus document evaluation: similarities and differences

Abstract: Purpose-The purpose of this study is to compare the evaluation of search result lists and documents, in particular evaluation criteria, elements, association between criteria and elements, pre/post and evaluation activities, and the time spent on evaluation. Design/methodology/approach-The study analyzed the data collected from 31 general users through prequestionnaires, think aloud protocols and logs, and post questionnaires. Types of evaluation criteria, elements, associations between criteria and elements, … Show more

Cited by 7 publications (6 citation statements); references 47 publications (72 reference statements).
“…Users benefit from the additional information to judge the relevance and usefulness of the search results. Additional information about the retrieved items supports users to evaluate search results more efficiently and effectively, and the result evaluation with more information would influence post‐query activities including query reformulation (Xie & Benoit, 2013). As users can utilize the additional information when judging the relevance or usefulness of the search results, they would be less likely to further explore additional results to check if the search results would be relevant (precision) and/or sufficient (recall) for their given task.…”
Section: Discussion
confidence: 99%
“…According to Mu et al (2014), users spend a significant amount of time interacting with these extra components in MeshMed, which eventually led to efficient search strategies with fewer query reformulation attempts. Additional information about the retrieved items supports users to evaluate search results more efficiently and effectively, and the result evaluation with more information would influence post-query activities including query reformulation (Xie & Benoit, 2013). Users with limited knowledge of medical terms tend to select incorrect terms and to adopt a completely new query based on trial and error (Vanopstal et al, 2013).…”
Section: Fewer Query Reformulations In MeshMed Than In SimpleMed
confidence: 99%
“…In applying evaluating tactics, users focus on identifying multiple dimensions of criteria to evaluate search results or individual items, and the systems emphasize generating related information to support users' evaluation process (Xie & Benoit, 2013). The provision of document metadata, search results with short summaries, and categorized overviews facilitates users' efficient involvement in evaluating tactics (Kules & Shneiderman, 2008; Makri, Blandford, & Cox, 2008a).…”
Section: System Support And User Involvement For Different Types Of S
confidence: 99%
“…To satisfy users’ information needs, various search engines, online databases, digital libraries are created and different search services are provided to offer approaches for information retrieval. In the entire information retrieval process, evaluation, including both search result evaluation and individual document evaluation, is one of the key activities (Xie & Benoit, 2013). Xie and Benoit (2013) compared these two types of evaluation behaviors in relation to evaluation criteria, elements, evaluation activities, and so on.…”
Section: Introduction
confidence: 99%
“…In the entire information retrieval process, evaluation, including both search result evaluation and individual document evaluation, is one of the key activities (Xie & Benoit, 2013). Xie and Benoit (2013) compared these two types of evaluation behaviors in relation to evaluation criteria, elements, evaluation activities, and so on. They found that although there are differences between these two types of evaluation in criteria and element aspects, they are interrelated to each other and can be transformed and integrated.…”
Section: Introduction
confidence: 99%