2003
DOI: 10.1002/asi.10334

A comprehensive and systematic model of user evaluation of Web search engines: II. An evaluation by undergraduates

Abstract: This paper presents an application of the model described in Part I to the evaluation of Web search engines by undergraduates. The study observed how 36 undergraduates used four major search engines to find information for their own individual problems and how they evaluated these engines based on actual interaction with the search engines. User evaluation was based on 16 performance measures representing five evaluation criteria: relevance, efficiency, utility, user satisfaction, and connectivity. Non-performa…

Cited by 58 publications (52 citation statements)
References 16 publications (40 reference statements)
“…Search tasks/topics were used at TREC and CLEF interactive tracks (Over, 1997; Gonzalo et al., 2006); while Jose et al. (1998), Borlund (2000), White et al. (2007) and Petrelli (2008) adopted Borlund's simulated work task. In the same context, another approach to achieve realism and motivate and engage the recruited subjects in the evaluation is to let them choose the search tasks from a pool of available tasks, for instance, in different domains (Spink, 2002; Su, 2003; White et al., 2007; Joho et al., 2008).…”
Section: Tasks and Topics (mentioning)
confidence: 99%
“…In order to evaluate the proposed algorithm we adopted the user-based approach [9]-[11]. Many researchers adopted the user-based approach to evaluate the relevance effectiveness of search engines because the system-based approach ignores users' perception, needs, and searching behavior in real-life situations.…”
Section: Simulation (mentioning)
confidence: 99%
“…Choosing tasks from collections such as the TREC 2005 Hard Track 1 alleviates the burden of having to create difficult yet understandable tasks; however it is important to ensure that the starting query is representative of the short queries commonly used in Web search [2]. Another alternative is to have the participants use self-identified search topics [22,30,33]. However, making comparisons between the performance of participants then becomes difficult.…”
Section: Laboratory Studies (mentioning)
confidence: 99%
“…A method we have found to be effective is based on participants assigning relevance scores to the documents they consider [30,33], without viewing the actual documents. Measurements of task completion times are based on the participants having identified a pre-determined number of relevant documents (e.g., 10 relevant documents).…”
Section: Laboratory Studies (mentioning)
confidence: 99%