A Scoping Review on the Progress, Applicability, and Future of Explainable Artificial Intelligence in Medicine
Raquel González-Alday,
Esteban García-Cuesta,
Casimir A. Kulikowski
et al.
Abstract: Due to the success of artificial intelligence (AI) applications in the medical field over the past decade, concerns about the explainability of these systems have increased. The reliability of black-box algorithms that make decisions affecting patients poses a challenge that goes beyond accuracy alone. Recent advances in AI increasingly emphasize the necessity of integrating explainability into these systems. While most traditional AI methods and expert systems are inherently interpretable, the recent …
“…AI models, especially those implementing deep learning, present significant challenges due to their "black-box" nature [190]. The term "black box" refers to the opacity of the internal processes behind their learning and decision-making functions [190,191]. Even with precise knowledge of the input data, the lack of transparency about how the system's reasoning led to a given outcome makes it impossible to fully understand and rectify any mistakes that occur [191,192].…”
Section: Interpretability and Explainability of AI Models
confidence: 99%
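To make the "black box" point in the passage above concrete, the following is a minimal, hypothetical sketch of one common post-hoc explainability technique, permutation feature importance, applied to an opaque predictor. The model and the synthetic data are invented purely for illustration and do not come from the cited review.

```python
import random

random.seed(0)

# Synthetic "patient" records with two features; only feature 0 drives the label.
X = [(random.random(), random.random()) for _ in range(200)]
y = [1 if a > 0.5 else 0 for a, _ in X]

def black_box(x):
    # Stand-in for an opaque model whose internals we cannot inspect.
    return 1 if x[0] > 0.5 else 0

def accuracy(rows, labels):
    return sum(black_box(r) == t for r, t in zip(rows, labels)) / len(labels)

baseline = accuracy(X, y)

def permutation_importance(rows, labels, feature):
    # Shuffle one feature column, breaking its link to the outcome;
    # the resulting drop in accuracy estimates that feature's importance.
    col = [r[feature] for r in rows]
    random.shuffle(col)
    permuted = [tuple(col[j] if i == feature else v for i, v in enumerate(r))
                for j, r in enumerate(rows)]
    return baseline - accuracy(permuted, labels)

imp0 = permutation_importance(X, y, 0)  # sizeable accuracy drop: feature 0 matters
imp1 = permutation_importance(X, y, 1)  # zero drop: feature 1 is never used
```

Permutation importance treats the model purely as an input-output map, which is exactly what makes it applicable to black-box systems; as the quoted passages note, however, such post-hoc scores describe behavior without revealing the model's internal reasoning.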
“…This also undermines clinicians' trust and willingness to accept the results, owing to the lack of detailed explanations and the inability to trace the decision-making process of DL models. Moreover, clinicians often lack insight into the underlying statistical approximations because of limited technical and mathematical background [190]. These problems surrounding interpretability and explainability raise ethical and legal quandaries as well.…”
Section: Interpretability and Explainability of AI Models
confidence: 99%
“…The absence of transparency in such circumstances may also entail legal ramifications, since patients can demand to be properly informed about how their personal data are used [190]. Gonzalez-Alday et al [190] cite the GDPR (General Data Protection Regulation) as a recent example of legal regulation in this area, through which the European Union has begun to require transparency of AI systems that process sensitive data [190].…”
Section: Ethical and Legal Implications
confidence: 99%
“…Furthermore, legal and ethical questions also arise from the risk of misdiagnosis [190][191][192][193]. AI may treat correlated factors as causal without understanding the reasons behind their relationship, thereby overlooking important connections or mistakenly attributing causation [190][191][192][193].…”
Heart failure (HF) is prevalent globally. It is a dynamic disease with varying definitions and classifications owing to its multiple pathophysiologies and etiologies. The diagnosis, clinical staging, and treatment of HF are therefore complex and subjective, affecting patient prognosis and mortality. Technological advancements such as artificial intelligence (AI) have played a significant role in medicine and are increasingly used in cardiovascular medicine to transform drug discovery, clinical care, risk prediction, diagnosis, and treatment. Medical and surgical interventions specific to HF patients rely heavily on early identification of HF. Hospitalization and treatment costs for HF are high, and readmissions increase the burden further. AI can improve diagnostic accuracy by recognizing patterns across multiple areas of HF management, and it has shown promise in enabling early detection and precise diagnosis through ECG analysis, advanced cardiac imaging, biomarkers, and cardiopulmonary stress testing. Its challenges, however, include data access, model interpretability, ethical concerns, and generalizability across diverse populations. Ongoing efforts to refine AI models nonetheless suggest a promising future for HF diagnosis. We searched PubMed, Google Scholar, and the Cochrane Library and, after applying inclusion and exclusion criteria, identified 150 relevant papers. This review focuses on AI's significant contribution to HF diagnosis in recent years, which has substantially altered HF treatment and outcomes.