This review describes recent experimental and focus group research on graphics as a method of communicating quantitative health risks. Some of the studies discussed in this review assessed the effects of graphs on quantitative reasoning, others assessed effects on behavior or behavioral intentions, and still others assessed viewers' likes and dislikes. Graphical features that improve the accuracy of quantitative reasoning appear to differ from the features most likely to alter behavior or intentions. For example, graphs that make part-to-whole relationships visually available may help people attend to the relationship between the numerator (the number of people affected by a hazard) and the denominator (the entire population at risk), whereas graphs that show only the numerator appear to inflate the perceived risk and may induce risk-averse behavior. Viewers often preferred design features, such as visual simplicity and familiarity, that were not associated with accurate quantitative judgments. Communicators should not assume that all graphics are more intuitive than text; many of the studies found that patients' interpretations of the graphics depended on expertise or instruction. Potentially useful directions for continuing research include interactions with educational level and numeracy, and successful ways to communicate uncertainty about risk.
We observed that usability researchers frequently capture navigation-related issues even in articles that did not explicitly state navigation as a focus. Capturing and synthesizing the literature on navigation is challenging because of the lack of a uniform vocabulary. Navigation is a potential target for normative recommendations for improved interaction design for safer systems. Future research in this domain, including the development of normative recommendations for usability design and evaluation, will be facilitated by the development of a standard terminology for describing EHR navigation.
The COVID-19 pandemic response in the United States has exposed significant gaps in information systems and processes to enable timely clinical and public health decision-making. Specifically, the use of informatics to mitigate the spread of SARS-CoV-2, support COVID-19 care delivery, and accelerate knowledge discovery brings to the forefront issues of privacy, surveillance, limits of state powers, and interoperability between public health and clinical information systems. Using a consensus building process, we critically analyze informatics-related ethical issues in light of the pandemic across three themes: (1) public health reporting and data sharing, (2) contact tracing and tracking, and (3) clinical scoring tools for critical care. We provide context and rationale for ethical considerations and recommendations that are actionable during the pandemic, and conclude with recommendations calling for long-term, broader change (beyond the pandemic) for public health organization and policy reform.
The EPIKE approach can be used successfully to identify the needs of adolescents across the digital divide to inform the design and development of mHealth apps.
Creating electronic health records that support the uniquely complex and varied needs of healthcare presents formidable challenges. To address some of these challenges, we created a new model for healthcare information systems, embodied in MedWISE, a widget-based, highly configurable electronic health record (EHR) platform. Founded on the idea that giving clinician users greater control of the EHR may result in a better fit to user needs and preferences, MedWISE allows drag-and-drop user configurations and the sharing of user-created elements such as custom laboratory result panels and user-created interface tabs. After reviewing the current state of EHR configurability, we describe the philosophical, theoretical, and practical rationales for our model, and the specific functionality of MedWISE. This alternative approach may have several advantages for human-computer interaction, efficiency, cognition, and the fit of EHR tools to different contexts and tasks. We discuss potential issues raised by this approach.
Background: Challenges in the design of electronic health records (EHRs) include designing usable systems that must meet the complex, rapidly changing, and high-stakes information needs of clinicians. The ability to move and assemble elements together on the same page has significant human-computer interaction (HCI) and efficiency advantages, and can mitigate the problems of negotiating multiple fixed screens and the associated cognitive burdens. Objective: We compare MedWISE, a novel EHR that supports user-composable displays, with a conventional EHR in terms of the number of repeat views of data elements during patient case appraisal. Design and Methods: The study used mixed methods to examine clinical data viewing in four patient cases, comparing an experimental user-composable EHR with a conventional EHR for case appraisal. Eleven clinicians used the user-composable EHR in a case appraisal task in the laboratory setting. This was compared with log file analysis of the same patient cases in the conventional EHR. We counted repeat views of the same clinical information during a session in each of the two contexts and compared them using Fisher's exact test. Results: There was a significant difference (p < .0001) in the proportion of cases with repeat data element viewing between the user-composable EHR (14.6 percent) and the conventional EHR (72.6 percent). Discussion and Conclusion: Users of conventional EHRs repeatedly viewed the same information elements in the same session, as revealed by log files. Our findings are consistent with the hypothesis that conventional systems require the user to view many screens and remember information between screens, causing the user to forget information and to have to access it a second time.
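The Fisher's exact comparison reported above can be reproduced with a small standard-library sketch. Note that the raw 2x2 cell counts below are illustrative assumptions: the abstract reports only the proportions (14.6 and 72.6 percent), so counts were chosen to approximate them, not taken from the paper.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Returns the p-value: the sum of hypergeometric probabilities of all
    tables with the same margins whose probability is no greater than
    that of the observed table.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # Hypergeometric probability of x in the top-left cell
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - (c + d))   # smallest feasible top-left cell
    hi = min(row1, col1)          # largest feasible top-left cell
    # Small multiplicative tolerance guards against floating-point ties
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Illustrative counts only (the abstract reports proportions, not cells):
# composable EHR: 15 of 103 cases with repeat views (~14.6%)
# conventional EHR: 45 of 62 cases with repeat views (~72.6%)
p = fisher_exact_two_sided(15, 88, 45, 17)
print(f"p = {p:.2e}")
```

With counts of this magnitude the p-value falls well below .0001, matching the direction and significance reported in the Results.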
Other mechanisms (such as reduced navigation across a population of users due to interface sharing, and information selection) may also contribute to increased efficiency in the experimental system. A composable approach that lets the user gather any desired information elements on the same screen may confer cognitive support benefits that increase productive use of systems by reducing fragmented information. By reducing cognitive overload, such systems may also enhance the user experience.
Background: The complexity of health care data and workflow presents challenges to the study of usability in electronic health records (EHRs). Display fragmentation refers to the distribution of relevant data across different screens or otherwise far apart, requiring complex navigation for the user's workflow. Task and information fragmentation also contribute to cognitive burden. Objective: This study aims to define and analyze some of the main sources of fragmentation in EHR user interfaces (UIs); discuss relevant theoretical, historical, and practical considerations; and use granular microanalytic methods and visualization techniques to help us understand the nature of fragmentation and opportunities for EHR optimization or redesign. Methods: Sunburst visualizations capture the EHR navigation structure, showing levels and sublevels of the navigation tree, allowing calculation of a new measure, the Display Fragmentation Index. Time belt visualizations present the sequences of subtasks and allow calculation of proportion per instance, a measure that quantifies task fragmentation. These measures can be used separately or in conjunction to compare EHRs as well as tasks and subtasks in workflows and identify opportunities for reductions in steps and fragmentation. We present an example use of the methods for comparison of 2 different EHR interfaces (commercial and composable) in which subjects apprehend the same patient case. Results: Screen transitions were substantially reduced for the composable interface (from 43 to 14), whereas clicks (including scrolling) remained similar. Conclusions: These methods can aid in our understanding of UI needs under complex conditions and tasks to optimize EHR workflows and redesign.
Internationally, researchers have been developing methods that can be used to identify, report on, mitigate, and eliminate technology-induced errors. Although issues and challenges remain with these methodologies, they have been shown to improve the quality and safety of health information technology (HIT). Since the first publications documenting technology-induced errors in healthcare in 2005, researchers have, within just 10 years, developed ways of identifying and addressing these types of errors, and organizations have begun to use these approaches. This knowledge has been translated into practice in ten years, whereas the norm for other research areas is closer to 20 years.