2009
DOI: 10.1186/1471-2105-10-s10-s14

A user-centred evaluation framework for the Sealife semantic web browsers

Abstract: Background: Semantically-enriched browsing has enhanced the browsing experience by providing contextualised, dynamically generated Web content and quicker access to searched-for information. However, adoption of Semantic Web technologies is limited, and user perception from the non-IT domain is sceptical. Furthermore, little attention has been given to evaluating semantic browsers with real users to demonstrate the enhancements and obtain valuable feedback. The Sealife project investigates semantic browsing and its …

Cited by 6 publications (4 citation statements)
References 13 publications

“…However, it is paramount to draw lessons from the body of research in other domains experimenting with evaluation on real-world users and systems, and from the additional challenges identified for collecting user feedback. First, users often perceive their online behaviors and preferences differently from their actual behaviors [Roy et al 2010]. Additionally, live user experiments, especially in the case of a prototype or research project, often suffer from unpolished user interfaces and a propensity for bugs, which affects the user experience and hence the outcome of the evaluation [Oliver et al 2009]. It is even more challenging to evaluate recommender systems that process implicit user feedback.…”
Section: Discussion
confidence: 99%
“…A detailed breakdown of methods and results is beyond the scope of this paper, but more information can be found in [3]. This paper will focus on the aspects of the study relevant to triangulation.…”
Section: Sealife SWB Evaluation
confidence: 99%
“…Details of the questionnaires can be found in [3]; the results paint an informative picture of users' attitudes. GoPubMed/GoGene were rated the highest in the dimensions of likeability, information findability, relevance, and system speed.…”
Section: Questionnaires
confidence: 99%