Major depressive disorder is a debilitating disease affecting 264 million people worldwide. While many antidepressant medications are available, few clinical guidelines support choosing among them. Decision support tools (DSTs) embodying machine learning models may help improve the treatment selection process, but they often fail in clinical practice due to poor system integration. We use an iterative co-design process to investigate clinicians' perceptions of using DSTs in antidepressant treatment decisions. We identify ways in which DSTs need to engage with the healthcare sociotechnical system, including clinical processes, patient preferences, resource constraints, and domain knowledge. Our results suggest that clinical DSTs should be designed as multi-user systems that support patient-provider collaboration and offer on-demand explanations that address discrepancies between predictions and current standards of care. Through this work, we demonstrate how current trends in explainable AI may be inappropriate for clinical environments and consider paths towards designing these tools for real-world medical systems.
Background: Open notes invite patients and families to read ambulatory visit notes through the patient portal. Little is known about the extent to which they identify and speak up about perceived errors. Understanding the barriers to speaking up can inform quality improvements.
Objective: To describe patient and family attitudes, experiences, and barriers related to speaking up about perceived serious note errors.
Methods: Mixed-methods analysis of a 2016 electronic survey of patients and families at 2 northeast US academic medical centers. Participants had active patient portal accounts and at least 1 note available in the preceding 12 months.
Results: 6913 adult patients (response rate 28%) and 3672 pediatric families (response rate 17%) completed the survey. In total, 8724/9392 (93%) agreed that reporting mistakes improves patient safety. Among 8648 participants who read a note, 1434 (17%) perceived ≥1 mistake. Of these, 627/1434 (44%) reported the mistake was serious, and 342/627 (55%) contacted their provider. Participants who self-identified as Black or African American, Asian, "other," or "multiple" race(s) (OR 0.50; 95% CI 0.26-0.97) and those who reported poorer health (OR 0.58; 95% CI 0.37-0.90) were each less likely to speak up than white or healthier respondents, respectively. The most common barriers to speaking up were not knowing how to report a mistake (61%) and wanting to avoid being perceived as a "troublemaker" (34%). Qualitative analysis of 476 free-text suggestions revealed practical recommendations and proposed innovations for partnering with patients and families.
Conclusions: About half of the patients and families who perceived a serious mistake in their notes reported it. The barriers identified include modifiable issues, such as establishing clear reporting mechanisms, and more challenging ones, such as creating a supportive culture. Respondents offered new ideas for engaging patients and families in improving note accuracy.
Objectives: Federated learning (FL) allows multiple institutions to collaboratively develop a machine learning algorithm without sharing their data. Organizations instead share only model parameters, allowing them to benefit from a model built with a larger dataset while maintaining the privacy of their own data. We conducted a systematic review to evaluate the current state of FL in healthcare and to discuss the limitations and promise of this technology.
Methods: We conducted a literature search following PRISMA guidelines. At least two reviewers assessed each study for eligibility and extracted a predetermined set of data. The quality of each study was assessed using the TRIPOD guideline and the PROBAST tool.
Results: 13 studies were included in the full systematic review. Most were in the field of oncology (6 of 13; 46.2%), followed by radiology (5 of 13; 38.5%). The majority evaluated imaging results, performed a binary classification prediction task via offline learning (n = 12; 92.3%), and used a centralized-topology, aggregation-server workflow (n = 10; 76.9%). Most studies complied with the major reporting requirements of the TRIPOD guidelines. In all, 6 of 13 (46.2%) studies were judged to be at high risk of bias using the PROBAST tool, and only 5 studies used publicly available data.
Conclusion: Federated learning is a growing field in machine learning with many promising uses in healthcare. Few studies have been published to date. Our evaluation found that investigators can do more to address the risk of bias and increase transparency, for example by adding steps to assess data homogeneity or by sharing the required metadata and code.
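To make the parameter-sharing workflow concrete, below is a minimal sketch of the centralized-topology, aggregation-server setup described above, using FedAvg-style weighted averaging. This is an illustrative assumption rather than code from any reviewed study: the logistic-regression model, the simulated clients, and all function names are hypothetical.

```python
# Minimal federated-averaging sketch: each institution trains locally and
# shares only model parameters; raw data never leaves the client.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's offline training round: gradient descent on a
    binary logistic-regression classifier, using local data only."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))    # sigmoid predictions
        w -= lr * (X.T @ (preds - y)) / len(y)  # logistic-loss gradient step
    return w

def aggregate(client_weights, client_sizes):
    """Aggregation server: average parameters, weighting each client
    by the size of its local dataset (FedAvg)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy run with three simulated institutions (hypothetical data).
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50).astype(float))
           for _ in range(3)]
global_w = np.zeros(4)
for _ in range(10):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = aggregate(updates, [len(y) for _, y in clients])
print(global_w)
```

In this setup only the weight vectors cross institutional boundaries, which is what allows each site to benefit from the pooled sample size without exposing patient-level records.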
Dr Wei was supported for this work through a grant from the California Pacific Medical Center Foundation. Conflicts of interest: None declared. Dr Mansh had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Drs Gao and Mansh were responsible for drafting the manuscript, and Drs Wei and Mansh were responsible for statistical analysis. All the authors were responsible for the study concept and design; acquisition, analysis, and interpretation of data; and critical revision of the manuscript for important intellectual content.