2010
DOI: 10.1108/14684521011084609
Performance evaluation and comparison of the five most used search engines in retrieving web resources

Abstract:

Purpose – The purpose of the paper is to evaluate the performance and efficiency of the five most used search engines, i.e. Google, Yahoo!, Live, Ask, and AOL, in retrieving internet resources at specific points of time using a large number of complex queries.

Design/methodology/approach – In order to examine the performance of the five search engines, five sets of experiments were conducted using 50 complex queries within two different time frames. The data were evaluated using Excel and SPSS software.

Findings – The pa…

Cited by 20 publications (22 citation statements)
References 13 publications (9 reference statements)
“…The efficacy of the generated ranked result lists was measured via a well-known performance indicator in the IR field named TREC-Style Average Precision (TSAP). It has been widely used in the literature [7,13,16,17,26,27]. TSAP is a human-based evaluation criterion that quantifies the relevance of each generated result list with respect to an issued query.…”
Section: Discussion
confidence: 99%
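The TSAP measure described in the statement above can be sketched in a few lines. This is a minimal illustration, not code from the cited works: the function name is invented, and the top-10 cut-off follows the paper's focus on the first ten results.

```python
def tsap_at_n(relevance, n=10):
    """TREC-Style Average Precision over the top-n results.

    relevance: list of booleans; relevance[i] is True if the result
    at rank i+1 was judged relevant by a human assessor.
    A relevant result at rank i contributes 1/i, an irrelevant one
    contributes 0, and the sum is divided by the cut-off n.
    """
    contributions = [1.0 / (i + 1) for i, rel in enumerate(relevance[:n]) if rel]
    return sum(contributions) / n

# Example: top three results judged relevant, ranks 4-10 irrelevant.
judged = [True, True, True] + [False] * 7
print(round(tsap_at_n(judged), 4))  # → 0.1833
```

Because early ranks carry the largest weights (1, 1/2, 1/3, …), TSAP rewards engines that place relevant results near the top of the list.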
“…Studies of search result retrieval for search engines and other information retrieval systems also inform database comparison methodologies. Deka and Lahkar (2010) analyzed five different search engines (Google, Yahoo!, Live, Ask, and AOL) and their abilities to produce and then reproduce relevant and unique resources, focusing on the first ten results. Shafi and Rather (2005) compared precision and recall for five search engines (AltaVista, Google, HotBot, Scirus, and Bioweb), using a series of twenty searches on biotechnology topics.…”
Section: Comparisons Of Other Information Retrieval Systems
confidence: 99%
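The precision and recall comparisons mentioned in the statement above reduce to simple set computations. A minimal sketch, in which the document identifiers and cut-off are illustrative rather than taken from the cited studies:

```python
def precision_at_k(retrieved, relevant, k=10):
    """Fraction of the top-k retrieved items judged relevant."""
    top_k = retrieved[:k]
    return sum(1 for doc in top_k if doc in relevant) / k

def recall(retrieved, relevant):
    """Fraction of all known relevant items that were retrieved."""
    return sum(1 for doc in relevant if doc in retrieved) / len(relevant)

# Hypothetical ranked result list and relevance judgments:
retrieved = ["d1", "d2", "d3", "d4", "d5"]
relevant = {"d1", "d3", "d7", "d9"}
print(precision_at_k(retrieved, relevant, k=5))  # 2 of 5 → 0.4
print(recall(retrieved, relevant))               # 2 of 4 → 0.5
```

In web-scale evaluations, true recall is usually approximated, since the full set of relevant documents cannot be enumerated; precision over the first ten results, as in Deka and Lahkar (2010), is the more practical measure.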
“…Stokes et al (2009) used Friedman's Test to identify if there were significant differences between the databases in the precision, novelty, and availability, as well as an odds ratio test to rate if the databases were, by their definitions, effective, efficient, or accessible. Deka and Lahkar (2010) used ANOVA and Tukey's HSD tests to evaluate if the difference between the mean number of relevant/stable hits from each database were significantly higher than any other databases. Sewell (2011) also used the ANOVA test to discover any significance in the different values of precision and recall between CAB Abstracts platforms.…”
Section: Statistical Validation In Database Evaluation Studies
confidence: 99%
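The one-way ANOVA underlying the tests mentioned above can be sketched in pure Python. The engine names and hit counts below are invented for illustration; the cited studies used SPSS, and the Tukey's HSD post-hoc step is omitted.

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of samples (one per group).

    F = (between-group mean square) / (within-group mean square);
    a large F indicates the group means differ more than chance predicts.
    """
    k = len(groups)                                  # number of groups
    n = sum(len(g) for g in groups)                  # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# Hypothetical relevant-hit counts per query for three engines:
engine_a = [8, 9, 7, 8]
engine_b = [5, 6, 5, 4]
engine_c = [8, 8, 9, 7]
print(round(one_way_anova_f([engine_a, engine_b, engine_c]), 2))  # → 18.0
```

The F statistic is then compared against the F distribution with (k-1, n-k) degrees of freedom; ANOVA alone only signals that *some* mean differs, which is why Deka and Lahkar followed it with Tukey's HSD to identify which engine pairs differ significantly.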