Healthcare data are inherently multimodal, including electronic health records (EHR), medical images, and multi-omics data. Combining these multimodal data sources contributes to a better understanding of human health and supports more personalized healthcare. The central question when using multimodal data is how to fuse them, a problem of growing interest among researchers. Advances in artificial intelligence (AI) technologies, particularly machine learning (ML), enable the fusion of these different data modalities to provide multimodal insights. In this scoping review, we synthesize and analyze the literature that uses AI techniques to fuse multimodal medical data for different clinical applications. More specifically, we focus on studies that fused EHR with medical imaging data to develop AI methods for clinical applications. We present a comprehensive analysis of the fusion strategies, the diseases and clinical outcomes for which multimodal fusion was used, the ML algorithms used to perform multimodal fusion for each clinical application, and the available multimodal medical datasets. We followed the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines and searched Embase, PubMed, Scopus, and Google Scholar to retrieve relevant studies. After pre-processing and screening, we extracted data from 34 studies that fulfilled the inclusion criteria. We found that the number of studies fusing imaging data with EHR is increasing, doubling from 2020 to 2021. A typical workflow emerged across studies: feeding raw data, fusing the different data modalities with conventional ML or deep learning (DL) algorithms, and finally evaluating the multimodal fusion through clinical outcome predictions. Early fusion was the most common technique for multimodal learning (22 out of 34 studies), and multimodal fusion models outperformed single-modality models on the same tasks. In terms of clinical outcomes, disease diagnosis and prediction were the most common (reported in 20 and 10 studies, respectively), and neurological disorders were the dominant disease category (16 studies). From an AI perspective, conventional ML models were the most used (19 studies), followed by DL models (16 studies). Multimodal data used in the included studies came mostly from private repositories (21 studies). Through this scoping review, we offer researchers an overview of the current state of knowledge in this field.
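To make the "early fusion" strategy the review highlights concrete, here is a minimal sketch: image-derived features and tabular EHR features are concatenated into a single feature matrix before any model sees them, and one classifier is trained on the joint vector. All shapes, the random stand-in data, and the random forest choice are illustrative assumptions, not a method from any of the reviewed studies.

```python
# Early (feature-level) fusion sketch: concatenate modalities, then classify.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients = 200

# Stand-ins for real inputs: e.g., CNN embeddings of medical images and
# structured EHR variables (age, lab values, diagnosis codes).
imaging_features = rng.normal(size=(n_patients, 64))  # image embeddings
ehr_features = rng.normal(size=(n_patients, 12))      # tabular EHR features
labels = rng.integers(0, 2, size=n_patients)          # clinical outcome

# Early fusion: join the modalities into one feature matrix *before* modeling.
fused = np.concatenate([imaging_features, ehr_features], axis=1)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, fused, labels, cv=5, scoring="roc_auc")
print(f"fused-modality AUROC: {scores.mean():.2f}")
```

By contrast, late fusion would train a separate model per modality and combine their predictions; early fusion lets a single model learn cross-modal interactions directly.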
Missing data are prevalent in Alzheimer's disease (AD) research and pose significant challenges for AD diagnosis. Previous studies have explored various data imputation approaches on AD data, but systematic evaluation of deep learning algorithms for imputing heterogeneous and comprehensive AD data remains limited. This study investigates the efficacy of denoising autoencoder-based imputation of missing key features in heterogeneous data comprising tau-PET, MRI, cognitive and functional assessments, genotype, sociodemographic, and medical history data. The authors focused on extreme (≥40%) missingness at random in key features that depend on AD progression, identified as maternal history of AD, APOE ε4 alleles, and clinical dementia rating. Latent features extracted from the denoising autoencoder are incorporated, alongside features selected using traditional feature selection methods, for subsequent classification. Using random forest classification with 10-fold cross-validation, the imputed datasets achieved robust AD predictive performance (accuracy: 79%–85%; precision: 71%–85%) across missingness levels, with high recall values even at 40% missingness. Furthermore, the datasets reduced by feature selection methods, including the autoencoder, achieved higher classification scores than the original complete dataset. These results highlight the effectiveness and robustness of the denoising autoencoder in imputing crucial information for reliable AD prediction in AI-based clinical decision support systems.
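A minimal sketch of denoising-autoencoder imputation on tabular data follows, assuming missing entries are marked as NaN. The layer sizes, corruption rate, and training settings are illustrative assumptions, not the authors' exact configuration; the returned latent features are what would feed a downstream classifier such as a random forest with 10-fold cross-validation.

```python
# Denoising-autoencoder imputation sketch (hypothetical configuration).
import numpy as np
import torch
import torch.nn as nn


class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_features: int, n_latent: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, n_latent), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 32), nn.ReLU(),
            nn.Linear(32, n_features),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


def impute(X: np.ndarray, epochs: int = 200, corrupt_p: float = 0.4):
    """Train on mean-filled data with random corruption; return imputed X."""
    nan_mask = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X_filled = np.where(nan_mask, col_means, X).astype(np.float32)

    x = torch.from_numpy(X_filled)
    model = DenoisingAutoencoder(X.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for _ in range(epochs):
        # Corrupt a random subset of entries (the "denoising" noise),
        # mimicking the extreme missingness the model must recover from.
        noise_mask = torch.rand_like(x) < corrupt_p
        x_noisy = torch.where(noise_mask, torch.zeros_like(x), x)
        recon, _ = model(x_noisy)
        loss = loss_fn(recon, x)  # reconstruct the uncorrupted input
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        recon, latent = model(x)
    # Keep observed values; replace originally missing entries with
    # the autoencoder's reconstruction.
    X_imputed = np.where(nan_mask, recon.numpy(), X_filled)
    return X_imputed, latent.numpy()  # latent features for classification
```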
Background The early diagnosis of Alzheimer's disease (AD) and mild cognitive impairment (MCI) remains a significant challenge in neurology, as conventional methods are often limited by subjectivity and variability in interpretation. Applying deep learning, a branch of artificial intelligence (AI), to magnetic resonance imaging (MRI) analysis emerges as a transformative approach, offering the potential for unbiased, highly accurate diagnostic insights. Objective This meta-analysis was designed to evaluate the diagnostic accuracy of deep learning models applied to MRI images for AD and MCI. Methods Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, a meta-analysis was performed across the PubMed, Embase, and Cochrane Library databases, focusing on the diagnostic accuracy of deep learning. Methodological quality was assessed using the QUADAS-2 checklist. Diagnostic measures, including sensitivity, specificity, likelihood ratios, diagnostic odds ratio, and area under the receiver operating characteristic curve (AUROC), were analyzed, alongside subgroup analyses for T1-weighted and non-T1-weighted MRI. Results A total of 18 eligible studies were identified. The Spearman correlation coefficient was -0.6506. Meta-analysis showed pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio of 0.84, 0.86, 6.0, 0.19, and 32, respectively. The AUROC was 0.92. The quiescent point of the hierarchical summary receiver operating characteristic (HSROC) curve was 3.463. Notably, images in 12 studies were acquired with T1-weighted MRI alone, and those in the other 6 with non-T1-weighted MRI alone. Conclusion Overall, deep learning analysis of MRI showed good sensitivity and specificity for the diagnosis of AD and MCI and contributed to improved diagnostic accuracy.
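As a consistency check (not part of the original abstract), the pooled likelihood ratios and diagnostic odds ratio follow from the pooled sensitivity (Se = 0.84) and specificity (Sp = 0.86) under their standard definitions:

```latex
% Standard definitions relating the pooled estimates reported above.
\begin{align*}
  LR^{+} &= \frac{\mathrm{Se}}{1-\mathrm{Sp}} = \frac{0.84}{1-0.86} = 6.0 \\
  LR^{-} &= \frac{1-\mathrm{Se}}{\mathrm{Sp}} = \frac{1-0.84}{0.86} \approx 0.19 \\
  \mathrm{DOR} &= \frac{LR^{+}}{LR^{-}} = \frac{6.0}{0.19} \approx 32
\end{align*}
```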