2016
DOI: 10.1093/jamia/ocw046

An appraisal of published usability evaluations of electronic health records via systematic review

Abstract: A review of the literature demonstrates a paucity of quality published studies describing scientifically valid and reproducible usability evaluations at various stages of EHR system development. A lack of formal and standardized reporting of EHR usability evaluation results is a major contributor to this knowledge gap, and efforts to improve this deficiency will be one step toward moving the field of usability engineering forward.

Cited by 97 publications (89 citation statements)
References 89 publications
“…This has been shown for intervention studies, where beliefs and expectations have been found to bias the results towards a higher probability of a type I error (i.e., false-positive result) [62]. The lack of methodological detail on the reports of studies on usability has already been highlighted [23,63]. The exponential growth and the enormous potential of mobile apps to change the paradigm of health interventions, by increasing the access of individuals to health services at lower costs, requires a rigorous and methodologically sound assessment.…”
Section: Discussion (citation type: mentioning)
Confidence: 97%
“…From the methodological point of view, we found that survey or distributed questionnaires among end-users are the most common method employed in usability evaluations of health-related information systems (33). Beyond the usefulness of surveys for gathering data in these kinds of studies, they do not allow evaluators to identify individual usability problems, so, in considering this limitation, we applied heuristic evaluation in our study by recruiting 10 external evaluators to detect individual usability problems that could be targeted for improvement of the implemented system.…”
Section: Discussion (citation type: mentioning)
Confidence: 99%
“…Careful systematic development of CDS is necessary to ensure that it works as intended once implemented in practice. Researchers have found a number of problems in EHR usability testing (Ratwani, Benda, Hettinger, & Fairbanks, 2015) and a paucity of high-quality studies of EHR usability with two thirds performed at prepost implementation without preclinical usability testing reported (Ellsworth et al, 2017). In addition, half of the largest U.S. EHR vendors are not meeting standards for usability testing, with two thirds conducting tests with fewer than the minimum 15 participants, as suggested by the National Institute of Standards and Technology, and one fifth conducting at least half of their tests using subjects with no clinical background (Ratwani et al, 2015).…”
Citation type: mentioning
Confidence: 99%