One of the biggest challenges of applying artificial intelligence (AI) in medicine is that physicians are reluctant to trust and adopt tools that they do not fully understand and regard as a “black box.” Machine learning (ML) can assist in reading radiological, endoscopic, and histological images, suggest diagnoses, predict disease outcomes, and even recommend therapeutic and surgical decisions. However, clinical adoption of these AI tools has been slow because of a lack of trust. Beyond clinicians' doubts, patients' lack of confidence in AI-powered technologies also hampers adoption. While patients may accept that human error can occur, they are expected to show little tolerance for machine error. To implement AI in medicine successfully, the interpretability of ML algorithms needs to improve. Opening the black box in AI medicine calls for a stepwise approach: incorporating biological explanations and clinical experience into ML algorithms in small steps can help build trust and acceptance. AI software developers will have to demonstrate clearly that, when ML technologies are integrated into the clinical decision-making process, they actually help improve clinical outcomes. Enhancing the interpretability of ML algorithms is a crucial step toward adopting AI in medicine.