2020
DOI: 10.1177/0018720819895098
Usability: Adoption, Measurement, Value

Abstract: Objective: We searched the literature for applications of usability, focusing on adoption, the measurements employed, and demonstrated value. Five human factors domains served as a platform for our reflection, which spanned the last 20 years. Background: As usability studies continue to accumulate, there has been little past reflection on usability and its contributions across applications. Our research provides a background for general usability, and we target specific usability research s…

Cited by 10 publications (5 citation statements)
References 118 publications
“…The data for average cognitive load have been summarised across all users for each of the metrics. In alignment with the studies conducted by [47] and [39], positive relationships are observed between the tasks requiring the highest cognitive-load effort and those ranked highest in the following categories: task time [48]; [27], number of touches [32], error rate [1], help visit count [36]; [37], reaction time [48]; [44], and duration on help [34]. Firstly, the tool supports researchers in human-computer interaction by providing a framework that automates cognitive-load measurement metrics for mobile applications.…”
Section: Conclusion and Discussionmentioning
confidence: 99%
“…The representative references for each metric are:

Task success rate (TS): [19]; [26]; [27]
Number of errors (NOE): [28]; [18]; [1]
Task time (TT): [29]; [19]; [30]
Number of touches (NOT): [29]; [28]; [31]; [32]
Duration on help (DOH): [33]; [34]; [35]
Help visit count (HA): [33]; [36]; [37]
Total effort (EF): [39]; [40]; [39]
Reaction time (RT): [19]; [41]; [42]; [43]; [44]

Based upon a review of measurements of cognitive load and of objective, low-level metrics, the proposed metrics for cognitive load are shown in Table 3. This constitutes a proposed logging of cognitive-load metrics (CLOG), which may be suitable for automated measurement through logging of a user's interaction.…”
Section: Clm-dtt Representative Referencesmentioning
confidence: 99%
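The CLOG idea quoted above, deriving cognitive-load metrics automatically from logged interaction events, can be illustrated with a minimal sketch. The class below is purely hypothetical: the cited papers do not prescribe an API, and the event names (`touch`, `error`, `help_visit`) and metric keys (TT, NOT, NOE, HA, DOH, RT) simply mirror the abbreviations in the table.

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class InteractionLog:
    """Per-task log of the low-level events behind CLOG-style metrics.

    Illustrative only: field and method names are assumptions, not an API
    from the cited work.
    """
    start: float = 0.0
    touches: int = 0
    errors: int = 0
    help_visits: int = 0
    help_seconds: float = 0.0
    first_action: Optional[float] = None

    def begin(self) -> None:
        self.start = time.monotonic()

    def touch(self) -> None:
        now = time.monotonic()
        if self.first_action is None:
            self.first_action = now      # first interaction, used for RT
        self.touches += 1                # number of touches (NOT)

    def error(self) -> None:
        self.errors += 1                 # number of errors (NOE)

    def help_visit(self, seconds: float) -> None:
        self.help_visits += 1            # help visit count (HA)
        self.help_seconds += seconds     # duration on help (DOH)

    def finish(self) -> dict:
        end = time.monotonic()
        return {
            "TT": end - self.start,      # task time
            "NOT": self.touches,
            "NOE": self.errors,
            "HA": self.help_visits,
            "DOH": self.help_seconds,
            "RT": (self.first_action - self.start)
                  if self.first_action is not None else None,
        }
```

A task runner would call `begin()` at task start, feed UI events into `touch()`, `error()`, and `help_visit()`, and read the metric dictionary from `finish()`; total effort (EF) would need a separate subjective rating and is omitted here.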
“…Usability studies originated with personal computer products, which emerged in the United States in the 1980s, as operating systems were adapted to make high-tech products easier for ordinary users to operate. In the late 1980s, researchers began to pay attention to prototype design and iterative evaluation in the product development process, proposing to collect product information and usage data through product sample testing in order to carry out usability evaluation and provide a reference for product design principles [2]. Usability evaluation is a multifactor concept involving ease of use, system effectiveness, user satisfaction, and objective-specific evaluations; it relates these different levels of factors to the actual user environment and plays an important role in the product decision-making process.…”
Section: Literature Reviewmentioning
confidence: 99%
“…Using context-specific guidelines when evaluating interactive systems has been shown to increase the number of usability problems uncovered, compared with evaluations that use only generic heuristics ( Afacan and Erbug, 2009 ; Jimenez et al, 2012 ). Further, usability evaluations need to be domain specific: usability criteria in consumer product design can differ significantly from usability criteria for the transportation industry, for example ( Mator et al, 2020 ). Additionally, usability evaluations that combine methods such as heuristic evaluation and user testing, when feasible, allow more usability problems to be discovered within a shorter duration ( Solano et al, 2016 ).…”
Section: Background and Related Workmentioning
confidence: 99%