Background: Computer-mediated educational applications can provide a self-paced, interactive environment for delivering educational content to individuals about their health condition. Such programs have been used to deliver health-related information on a variety of topics, including breast cancer screening, asthma management, and injury prevention. We have designed the Patient Education and Motivation Tool (PEMT), an interactive computer-based educational program grounded in behavioral, cognitive, and humanistic learning theories. The tool is designed to educate users and has three key components: screening, learning, and evaluation.

Objective: The objective of this tutorial is to illustrate a heuristic evaluation using a computer-based patient education program (PEMT) as a case study. The aims were to improve the usability of PEMT through heuristic evaluation of the interface; to report the results of these usability evaluations; to make changes based on the findings of the usability experts; and to describe the benefits and limitations of applying usability evaluations to PEMT.

Methods: PEMT was evaluated by three usability experts using Nielsen’s usability heuristics. Each expert reviewed the interface and produced a list of heuristic violations with severity ratings. The violations were sorted by heuristic and ordered from most to least severe within each heuristic.

Results: A total of 127 violations were identified, with a median severity of 3 (on a scale of 0 to 4, where 0 = no problem and 4 = catastrophic problem). By heuristic, the violations were:

- Visibility of system status: 13 (median severity 2)
- Match between system and real world: 38 (median severity 2)
- User control and freedom: 6 (median severity 3)
- Consistency and standards: 34 (median severity 2)
- Error prevention: 11 (median severity 3)
- Recognition rather than recall: 1 (median severity 3)
- Flexibility and efficiency of use: 7 (median severity 2)
- Aesthetic and minimalist design: 9 (median severity 2)
- Help users recognize, diagnose, and recover from errors: 4 (median severity 3)
- Help and documentation: 4 (median severity 4)

Conclusion: We describe the heuristic evaluation method used to assess the usability of PEMT, a method that uncovers heuristic violations in an interface design quickly and efficiently. Bringing together usability experts and health professionals to evaluate a computer-mediated patient education program can help identify problems in a timely manner, making the method particularly well suited to the iterative design process when developing other computer-mediated health education programs. Heuristic evaluations provided an effective means of assessing the user interface of PEMT.
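The aggregation described above — grouping violations by heuristic, ordering them from most to least severe, and reporting a count and median severity per heuristic — can be sketched as follows. The violation records here are illustrative stand-ins, not the study's raw data:

```python
from statistics import median

# Hypothetical violation records as (heuristic, severity 0-4) pairs;
# these values are examples only, not the study's actual findings.
violations = [
    ("Visibility of system status", 2),
    ("Visibility of system status", 3),
    ("Match between system and real world", 2),
    ("Help and documentation", 4),
]

# Group severities by heuristic.
by_heuristic = {}
for heuristic, severity in violations:
    by_heuristic.setdefault(heuristic, []).append(severity)

# Within each heuristic, order from most to least severe,
# then report the count and median severity.
for heuristic, severities in by_heuristic.items():
    severities.sort(reverse=True)
    print(f"{heuristic}: {len(severities)} violation(s), "
          f"median severity {median(severities)}")
```

Summing the per-heuristic counts recovers the overall total (127 in the study), and the overall median is taken across all severity ratings pooled together.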
Risk of type 1 diabetes at 3 years is high for initially multiple and single Ab+ IT and for multiple Ab+ NT. Genetic predisposition, age, and male sex are significant risk factors for the development of Ab+ in twins.
Understanding and describing the physical capabilities of users with motor impairments is a significant challenge for accessibility researchers and system designers alike. Current practice is to use descriptors such as medical diagnoses to represent a person's physical capabilities. This solution is not adequate due to similarities in functional capabilities between diagnoses as well as differences in capabilities within a diagnosis. An alternative is user self-reporting or observation by another person, but these solutions can be problematic because they rely on individual interpretations of capabilities and may introduce unwanted bias. The current research focuses on defining an objective, quantifiable, repeatable, and efficient methodology for assessing an individual's physical capabilities in relation to use of information technologies. Thirty-one users with a range of physical capabilities participated in the evaluation of the proposed performance-based functional assessment methodology. Building on the current standard for such assessments, multiple observers provided independent assessments that served as the gold standard for comparison. Promising metrics produced through the performance-based assessment were identified through comparisons with these observer evaluations. Predictive models were then generated via regression and correlation analysis. The models were validated using a three-fold validation process. Results from this initial research are encouraging, with the resulting models explaining up to 92% of the variance in user capabilities. Directions for future research are discussed.
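The modeling pipeline described above — regressing observer-rated capability scores on performance-based metrics and checking generalization with three-fold validation — can be sketched as below. The data, metric names, and coefficients are synthetic stand-ins, and ordinary least squares is assumed as the regression method; the study's actual metrics and model forms may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 31 participants (matching the study's sample size),
# three hypothetical performance-based metrics predicting an observer-rated
# capability score. Values and coefficients are illustrative only.
X = rng.normal(size=(31, 3))
true_w = np.array([1.5, -0.8, 0.4])
y = X @ true_w + rng.normal(scale=0.3, size=31)

def r_squared(y_true, y_pred):
    """Coefficient of determination: variance explained by the model."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Three-fold validation: fit ordinary least squares on two folds,
# score R^2 on the held-out fold, and average across folds.
indices = rng.permutation(len(y))
folds = np.array_split(indices, 3)
scores = []
for i, test_idx in enumerate(folds):
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # Append an intercept column and solve the least-squares problem.
    A_train = np.column_stack([X[train_idx], np.ones(len(train_idx))])
    w, *_ = np.linalg.lstsq(A_train, y[train_idx], rcond=None)
    A_test = np.column_stack([X[test_idx], np.ones(len(test_idx))])
    scores.append(r_squared(y[test_idx], A_test @ w))

print(f"Mean held-out R^2: {np.mean(scores):.2f}")
```

Held-out R² is the natural analogue of the "variance explained" figure reported above: it measures how well a model fit on some participants predicts the capabilities of participants it has never seen.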