2009
DOI: 10.1007/978-3-642-03655-2_25
Evidence Based Design of Heuristics for Computer Assisted Assessment

Abstract: The use of heuristics for the evaluation of interfaces is a well-studied area. Currently there appear to be two main research areas in relation to heuristics: the analysis of methods to improve the effectiveness of heuristic evaluations, and the development of new heuristic sets for novel and specialised domains. This paper proposes an evidence-based design approach to the development of domain-specific heuristics and shows how this method was applied within the context of computer assisted assessmen…

Cited by 16 publications (9 citation statements)
References 21 publications
“…The aggregated forms were analysed by the first author and an educational technologist to verify that the problems predicted were not false positives. The heuristic evaluation method is an inspection method where evaluators are required to predict problems that they think users will encounter; this means that on occasions, usability problems can be included in the aggregated list that are not real problems in that a user would not have a problem, hence the phrase “false positives.” To understand the effectiveness of the evaluators, the aggregated lists were compared with an earlier corpus of known usability problems that were derived to synthesise the original heuristic set (Sim et al , ). This corpus consisted of 34 usability problems synthesised from an initial corpus of over 300 usability problems that were systematically merged and filtered to produce the final corpus.…”
Section: Methods
confidence: 99%
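The verification step described above — checking evaluator predictions against a corpus of known, user-confirmed usability problems to weed out false positives — can be sketched as a simple partition. This is an illustrative assumption, not the authors' actual procedure: the problem descriptions, the `KNOWN_PROBLEMS` set, and the exact-match rule are all invented here (in practice, matching predicted problems to corpus problems requires human judgment, not string equality).

```python
# Hypothetical sketch: flag possible false positives by checking predicted
# problems against a corpus of known (user-confirmed) usability problems.
# The corpus entries and the exact-match rule are illustrative assumptions.

KNOWN_PROBLEMS = {
    "cannot review answers before submitting",
    "timer not visible on question screen",
    "no feedback after saving an answer",
}

def split_predictions(predicted):
    """Partition evaluator predictions into corpus-confirmed problems
    and likely false positives."""
    confirmed = [p for p in predicted if p in KNOWN_PROBLEMS]
    false_positives = [p for p in predicted if p not in KNOWN_PROBLEMS]
    return confirmed, false_positives

predicted = [
    "timer not visible on question screen",
    "font too decorative",  # not in the corpus -> possible false positive
]
confirmed, false_positives = split_predictions(predicted)
print(len(confirmed), len(false_positives))  # 1 1
```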
“…Nielsen's heuristics were used for the evaluation of Questionmark Perception and WebCT (Sim, Read & Holifield, ), and despite the fact that usability problems were revealed, the heuristic set was found to be ineffective, as problems were not mapped to heuristics or given severity ratings. Therefore, using a qualitative research approach, a set of heuristics was synthesised based upon a corpus of usability problems associated with CAA (Sim, Read & Cockton, ). Thematic analysis was then used to identify the core themes from the corpus, and these were then used to create the heuristic set presented in Table .…”
Section: CAA Heuristics
confidence: 99%
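The merge-and-filter step above (a corpus of over 300 reported problems systematically reduced, with thematic analysis identifying core themes) can be sketched as grouping reports under themes and keeping one representative per theme. The themes, records, and the "first report represents the theme" rule are invented for illustration; real thematic analysis is an interpretive, manual process.

```python
from collections import defaultdict

# Illustrative sketch of merging a corpus of reported usability problems
# by theme. Records and themes are invented examples, not the study's data.

corpus = [
    ("feedback", "no confirmation after submitting the quiz"),
    ("feedback", "save button gives no visual response"),
    ("navigation", "no way to return to a previous question"),
]

def merge_by_theme(records):
    """Group (theme, problem) records and keep one representative
    problem per theme."""
    themes = defaultdict(list)
    for theme, problem in records:
        themes[theme].append(problem)
    return {theme: problems[0] for theme, problems in themes.items()}

merged = merge_by_theme(corpus)
print(len(merged))  # 2 themes from 3 raw reports
```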
“…Indeed, Nielsen (as cited in Rogers et al [44]), acknowledges the inapplicability of this set to contemporary applications, and suggests developing application-specific heuristics when necessary. A wide number of application-specific or domain heuristics have emerged, including Pierotti's Xerox Heuristic Evaluation Checklist [41]; Travis's web usability guidelines [53]; Desurvire et al's heuristics for evaluating the playability of games [14]; and Sim et al's heuristics for evaluating the usability of computer-assisted assessment (CAA) applications [48]. One of the most prominent sets of heuristics, prior to Nielsen coining the term 'heuristic evaluation', is most probably the 1986 set of guidelines for designing interface software by Smith and Mosier developed for the United States Air Force Systems Command [50].…”
Section: Suggested Usability and Accessibility Evaluation Methods For…
confidence: 99%
“…Furthermore, Sim et al [48] defined criteria that can be used to assess the effectiveness of application-specific heuristics, in addition to the number of usability problems identified through the use of such heuristics. These criteria are correctness of terminology, coverage, and thoroughness.…”
Section: Evaluation Phase
confidence: 99%
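The coverage and thoroughness criteria named above are often quantified in the usability-evaluation literature as simple ratios. The formulation below is one common version (thoroughness as the share of real problems found, validity as the share of reported problems that are real) and is offered as a sketch, not as Sim et al's own definitions.

```python
# Common ratio-style usability-evaluation metrics; a sketch, not
# necessarily the definitions used by Sim et al.

def thoroughness(real_found, real_total):
    """Proportion of the real problems present that the heuristics found."""
    return real_found / real_total

def validity(real_found, total_reported):
    """Proportion of reported problems that turned out to be real."""
    return real_found / total_reported

# e.g. 20 of 34 corpus problems found, 25 problems reported in total
t = thoroughness(20, 34)
v = validity(20, 25)
print(round(t, 2), round(v, 2))  # 0.59 0.8
```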