Although the few positive findings generally favored patient access, the literature remains unclear on whether providing patients access to their medical records improves quality of care.
Background Patients are increasingly seeking Web-based symptom checkers to obtain diagnoses. However, little is known about the characteristics of the patients who use these resources, their rationale for use, and whether they find them accurate and useful. Objective The study aimed to examine patients’ experiences using an artificial intelligence (AI)–assisted online symptom checker. Methods An online survey was administered from March 2 through March 15, 2018, to US users of the Isabel Symptom Checker within 6 months of their use. User characteristics, experiences of symptom checker use, experiences discussing results with physicians, and prior personal history of experiencing a diagnostic error were collected. Results A total of 329 usable responses were obtained. The mean respondent age was 48.0 (SD 16.7) years; most were women (230/304, 75.7%) and white (271/304, 89.1%). Patients most commonly used the symptom checker to better understand the causes of their symptoms (232/304, 76.3%), followed by deciding whether to seek care (101/304, 33.2%), deciding where to seek care (eg, primary or urgent care: 63/304, 20.7%), obtaining medical advice without going to a doctor (48/304, 15.8%), and understanding their diagnoses better (39/304, 12.8%). Most patients reported receiving useful information for their health problems (274/304, 90.1%), with half reporting positive health effects (154/302, 51.0%). Most patients perceived it to be useful as a diagnostic tool (253/301, 84.1%) and as a tool providing insights leading them closer to correct diagnoses (231/303, 76.2%), and reported they would use it again (278/304, 91.4%). Patients who discussed findings with their physicians (103/213, 48.4%) more often felt physicians were interested (42/103, 40.8%) than not interested in learning about the tool’s results (24/103, 23.3%) and more often felt physicians were open (62/103, 60.2%) than not open (21/103, 20.4%) to discussing the results.
Compared with patients who had not previously experienced diagnostic errors (missed or delayed diagnoses: 123/304, 40.5%), patients who had previously experienced diagnostic errors (181/304, 59.5%) were more likely to use the symptom checker to determine where they should seek care (15/123, 12.2% vs 48/181, 26.5%; P=.002), but they less often felt that physicians were interested in discussing the tool’s results (20/34, 59% vs 22/69, 32%; P=.04). Conclusions Despite ongoing concerns about symptom checker accuracy, a large patient-user group perceived an AI-assisted symptom checker as useful for diagnosis. Formal validation studies evaluating symptom checker accuracy and effectiveness in real-world practice could provide additional useful information about their benefit.
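The group comparisons reported above can be checked with a standard two-proportion test. The sketch below is illustrative only, assuming a two-sided z-test on the reported counts (15/123 vs 48/181), which is equivalent to a 2×2 chi-square test without continuity correction; the abstract does not state which exact test the authors used.

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions
    (equivalent to a 2x2 chi-square test without continuity correction)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)               # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided P value from the standard normal survival function
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return p1, p2, p_value

# 15/123 patients without prior diagnostic errors vs 48/181 with prior errors
p1, p2, p = two_proportion_z_test(15, 123, 48, 181)
print(f"{p1:.1%} vs {p2:.1%}, P = {p:.3f}")  # 12.2% vs 26.5%, P = 0.002
```

This reproduces the reported 12.2% vs 26.5% difference and P=.002.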
Widespread use of health information technology (IT) could potentially increase patients' access to their health information and facilitate future goals of advancing patient-centered care. Despite having increased access to their health data, patients do not always understand this information or its implications, and digital health data can be difficult to navigate when displayed in a small-format, complex interface. In this paper, we discuss two forms of patient-facing health IT tools, patient portals and applications (apps), and highlight how, despite several limitations of each, combining high-yield features of mobile health (mHealth) apps with portals could increase patient engagement and self-management and be more effective than either alone. Patient portal adoption is variable, and owing to design and interface limitations and health literacy issues, many people find portals difficult to use. Conversely, apps have experienced rapid adoption and traditionally have more consumer-friendly features, such as easy log-in access, real-time tracking, and simplified data display. These features make apps more intuitive and easier to use than patient portals. While apps have their own limitations and might serve different purposes, patient portals could adopt some high-yield features and functions of apps that have led to engagement success with patients. We thus suggest that to improve the user experience with future portals, developers could look toward mHealth apps in design, function, and user interface. Adding new features to portals may improve their use and empower patients to track their overall health and disease states. Nevertheless, both of these health IT tools should be subjected to rigorous evaluation to ensure they meet their potential to improve patient outcomes.
Background Diagnostic errors in primary care are harmful but difficult to detect. We tested an electronic health record (EHR)-based method to detect diagnostic errors in routine primary care practice. Methods We conducted a retrospective study of primary care visit records “triggered” through electronic queries for possible evidence of diagnostic errors: Trigger 1: A primary care index visit followed by unplanned hospitalization within 14 days; and Trigger 2: A primary care index visit followed by ≥ 1 unscheduled visit(s) within 14 days. Control visits met neither criterion. Electronic trigger queries were applied to EHR repositories at two large healthcare systems between October 1, 2006, and September 30, 2007. Blinded physician-reviewers independently determined presence or absence of diagnostic errors in selected triggered and control visits. An error was defined as a missed opportunity to make or pursue the correct diagnosis when adequate data were available at the index visit. Disagreements were resolved by an independent third reviewer. Results Queries were applied to 212,165 visits. On record review, we found diagnostic errors in 141 of 674 Trigger 1-positive records (PPV=20.9%, 95% CI, 17.9%-24.0%) and 36 of 669 Trigger 2-positive records (PPV=5.4%, 95% CI, 3.7%-7.1%). The control PPV of 2.1% (95% CI, 0.1%-3.3%) was significantly lower than that of both triggers (P ≤ .002). Inter-rater reliability was modest, though higher than in comparable previous studies (κ = 0.37 [95% CI=0.31-0.44]). Conclusions While physician agreement on diagnostic error remains low, an EHR-facilitated surveillance methodology could be useful for gaining insight into the origin of these errors.
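The positive predictive values and confidence intervals reported above follow directly from the trigger counts. A minimal sketch, assuming a normal-approximation (Wald) 95% CI, which reproduces the reported figures to within rounding (the authors may have used a different interval method):

```python
import math

def ppv_with_wald_ci(errors, triggered, z=1.96):
    """Positive predictive value with a normal-approximation (Wald) 95% CI."""
    ppv = errors / triggered
    half_width = z * math.sqrt(ppv * (1 - ppv) / triggered)
    return ppv, ppv - half_width, ppv + half_width

# Trigger 1: 141 diagnostic errors among 674 trigger-positive records
ppv1, lo1, hi1 = ppv_with_wald_ci(141, 674)
print(f"Trigger 1 PPV = {ppv1:.1%} (95% CI {lo1:.1%}-{hi1:.1%})")

# Trigger 2: 36 diagnostic errors among 669 trigger-positive records
ppv2, lo2, hi2 = ppv_with_wald_ci(36, 669)
print(f"Trigger 2 PPV = {ppv2:.1%} (95% CI {lo2:.1%}-{hi2:.1%})")
```

Running this yields PPVs of 20.9% and 5.4%, with interval bounds matching the abstract to within about 0.1 percentage point.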
Objective Online portals provide patients with access to their test results, but it is unknown how patients use these tools to manage results and what information is available to promote understanding. We conducted a mixed-methods study to explore patients’ experiences and preferences when accessing their test results via portals. Materials and Methods We conducted 95 interviews (13 semistructured and 82 structured) with adults who viewed a test result in their portal between April 2015 and September 2016 at 4 large outpatient clinics in Houston, Texas. Semistructured interviews were coded using content analysis, transformed into quantitative data, and integrated with the structured interview data. Descriptive statistics were used to summarize the structured data. Results Nearly two-thirds of patients (63%) did not receive any explanatory information or test result interpretation at the time they received the result, and 46% conducted online searches for further information about their result. Patients who received an abnormal result were more likely to experience negative emotions (56% vs 21%; P = .003) and more likely to call their physician (44% vs 15%; P = .002) compared with those who received normal results. Discussion Study findings suggest that online portals are not currently designed to present test results to patients in a meaningful way. Patients often experienced negative emotions with abnormal results, but sometimes even with normal results. Simply providing access via portals is insufficient; additional strategies are needed to help patients interpret and manage their online test results. Conclusion Given the absence of national guidance, our findings could help strengthen policy and practice in this area and inform innovations that promote patient understanding of test results.
Objective Diagnostic errors in primary care are harmful but poorly studied. To facilitate understanding of diagnostic errors in real-world primary care settings using electronic health records (EHRs), this study explored the use of the Situational Awareness (SA) framework from aviation human factors research. Methods A mixed-methods study was conducted involving reviews of EHR data followed by semi-structured interviews of selected providers from two institutions in the US. The study population included 380 consecutive patients with colorectal and lung cancers diagnosed between February 2008 and January 2009. Using a pre-tested data collection instrument, trained physicians identified diagnostic errors, defined as lack of timely action on one or more established indications for diagnostic work-up for lung and colorectal cancers. Twenty-six providers involved in cases with and without errors were interviewed. Interviews probed for providers' lack of SA and how this may have influenced the diagnostic process. Results Of 254 cases meeting inclusion criteria, errors were found in 30 (32.6%) of 92 lung cancer cases and 56 (33.5%) of 167 colorectal cancer cases. Analysis of interviews related to error cases revealed evidence of lack of one of four levels of SA applicable to primary care practice: information perception, information comprehension, forecasting future events, and choosing appropriate action based on the first three levels. In cases without error, the application of the SA framework provided insight into processes involved in attention management. Conclusions A framework of SA can help analyze and understand diagnostic errors in primary care settings that use EHRs.
Parents may react less negatively in terms of perceived competence, physician confidence and trust, and intention to adhere when diagnostic uncertainty is communicated using implicit strategies, such as using broad differential diagnoses or most likely diagnoses. Evidence-based strategies to communicate diagnostic uncertainty to patients need further development.