Robust cancer prognostication can enable more effective patient care and management, potentially improving health outcomes. Deep learning has proven a powerful tool for extracting meaningful information from cancer patient data, and in recent years it has shown promise in quantifying prognosis by predicting patient risk. However, most current deep learning-based cancer prognosis prediction methods use only a single data source and therefore cannot learn from the potentially rich relationships across modalities. Existing multimodal approaches are difficult to interpret in a biological or medical context, limiting their real-world clinical integration as trustworthy prognostic decision aids. Here, we developed a multimodal modeling approach that integrates the central modalities of gene expression, DNA methylation, and histopathological imaging with clinical information for cancer prognosis prediction. Our approach combines pathway- and gene-based sparsely coded layers with patch-based graph convolutional networks to facilitate biological interpretation of the model's results. We present a preliminary analysis comparing the combination of all modalities against uni- and bi-modal approaches. Using data for four cancer subtypes from The Cancer Genome Atlas, we show that our multimodal approach (C-index = 0.660 without clinical features; C-index = 0.665 with clinical features) outperforms unimodal approaches and existing state-of-the-art approaches. This work offers insight into the development of interpretable multimodal methods for applying AI to biomedical data and can potentially serve as a foundation for clinical implementations of such software. We plan to follow up this preliminary analysis with an in-depth exploration of factors that improve multimodal modeling approaches on an in-house dataset.
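The C-index values reported above are concordance indices, the standard metric for survival-model discrimination. As an illustrative sketch only (not the study's code), the function below computes Harrell's C-index from scratch; the variable names are hypothetical, and real analyses typically rely on libraries such as lifelines or scikit-survival.

```python
def c_index(times, events, risk_scores):
    """Harrell's concordance index: the fraction of comparable patient
    pairs whose predicted risks are ordered consistently with their
    observed survival times (censored patients limit which pairs count)."""
    concordant, permissible = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if patient i actually had the
            # event and failed before patient j's observed time.
            if events[i] == 1 and times[i] < times[j]:
                permissible += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1      # correctly ranked pair
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5    # ties count as half
    return concordant / permissible

# Toy example: higher predicted risk should mean shorter survival.
times = [5.0, 8.0, 11.0, 12.0]
events = [1, 1, 0, 1]            # 0 = censored observation
risks = [0.9, 0.6, 0.4, 0.2]
print(c_index(times, events, risks))  # perfectly concordant -> 1.0
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the reported 0.660 and 0.665 indicate discrimination well above chance.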
Background: Deep learning models can infer cancer patient prognosis from molecular and anatomic pathology information. Recent studies that leveraged information from complementary multimodal data have improved prognostication, further illustrating the potential utility of such methods. However, current approaches 1) do not comprehensively leverage biological and histomorphological relationships and 2) do not make use of emerging strategies to “pretrain” models (i.e., to first train them on a slightly orthogonal dataset or modeling objective), which may aid prognostication by reducing the amount of information required to achieve optimal performance. In addition, model interpretation is crucial for facilitating the clinical adoption of deep learning methods by fostering practitioner understanding and trust in the technology. Methods: Here, we develop an interpretable multimodal modeling framework that combines DNA methylation, gene expression, and histopathology (i.e., tissue slide) data, and we compare the performance of crossmodal pretraining, contrastive learning, and transfer learning against the standard training procedure. Results: Our models outperform the existing state-of-the-art method (average 11.54% C-index increase) and baseline clinically driven models (average 11.7% C-index increase). Model interpretations show that the models consider biologically meaningful factors when making prognosis predictions. Discussion: Our results demonstrate that the selection of pretraining strategies is crucial for obtaining highly accurate prognostication models, even more so than devising an innovative model architecture, and further emphasize the central role of the tumor microenvironment in disease progression.
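Of the pretraining strategies compared, contrastive learning is commonly implemented with an InfoNCE-style objective that pulls together embeddings of the same patient computed from different modalities. The sketch below is a hedged illustration of that general technique, not the framework's actual implementation; the array names, batch size, and embedding dimension are invented for the example.

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE loss for a batch of paired embeddings.
    z_a, z_b: (batch, dim) embeddings of the same patients from two
    modalities (e.g., gene expression and histology patches)."""
    # L2-normalize so the dot product is cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature  # pairwise similarity matrix
    # Matched patients sit on the diagonal; score them as the "correct
    # class" in a softmax over each row (modality a -> modality b).
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss_ab = -np.mean(np.diag(log_probs))
    # Symmetrize: also classify modality-b rows against modality-a columns.
    logits_t = logits.T
    log_probs_t = logits_t - np.log(np.exp(logits_t).sum(axis=1, keepdims=True))
    loss_ba = -np.mean(np.diag(log_probs_t))
    return (loss_ab + loss_ba) / 2

rng = np.random.default_rng(0)
z_expr = rng.normal(size=(8, 32))                   # expression embeddings
z_hist = z_expr + 0.01 * rng.normal(size=(8, 32))   # nearly aligned histology
print(info_nce(z_expr, z_hist))  # small loss when pairs are aligned
```

Minimizing this loss during pretraining encourages a shared representation across modalities before the survival head is trained, which is one plausible mechanism behind the C-index gains reported above.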