Motivation
Cancer survival prediction can greatly assist clinicians in planning patient treatment and improving patients' quality of life. Recent evidence suggests that fusing multimodal data, such as genomic data and pathological images, is crucial for understanding cancer heterogeneity and enhancing survival prediction. As a powerful multimodal fusion technique, the Kronecker product has shown its superiority in survival prediction. However, it introduces a large number of parameters, which can lead to high computational cost and a risk of overfitting, limiting both its applicability and its performance gains. Another limitation of existing Kronecker-product approaches is that they mine cross-modal relations only once when learning the multimodal representation, and therefore struggle to deeply mine the rich information in multimodal data for accurate survival prediction.

Results
To address these limitations, we present a novel hierarchical multimodal fusion approach named HFBSurv, which employs a factorized bilinear model to fuse genomic and image features step by step. Specifically, with a multiple-fusion strategy, HFBSurv decomposes the fusion problem into different levels, each of which integrates and passes information progressively from the low level to the high level, leading to a more specialized fusion procedure and a more expressive multimodal representation. Within this hierarchical framework, both modality-specific and cross-modality attentional factorized bilinear modules are designed not only to capture and quantify complex relations in multimodal data but also to dramatically reduce computational complexity. Extensive experiments demonstrate that our method performs effective hierarchical fusion of multimodal data and consistently outperforms other survival prediction methods.

Availability
HFBSurv is freely available at https://github.com/Liruiqing-ustc/HFBSurv.

Supplementary information
Supplementary data are available at Bioinformatics online.
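The abstract above only sketches the idea of factorized bilinear fusion at a high level. The snippet below is a minimal, illustrative sketch of such a low-rank bilinear fusion block; the layer sizes, rank, and variable names are assumptions for illustration and do not reproduce the authors' HFBSurv implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FactorizedBilinearFusion(nn.Module):
    """Fuse two modality feature vectors with a low-rank (factorized) bilinear model."""

    def __init__(self, dim_x, dim_y, out_dim, rank=4, dropout=0.1):
        super().__init__()
        # Two thin projection matrices replace the full bilinear tensor
        # (dim_x * dim_y * out_dim parameters), which keeps the parameter count low.
        self.proj_x = nn.Linear(dim_x, rank * out_dim)
        self.proj_y = nn.Linear(dim_y, rank * out_dim)
        self.dropout = nn.Dropout(dropout)
        self.rank = rank
        self.out_dim = out_dim

    def forward(self, x, y):
        # Element-wise product of the two projections approximates the bilinear interaction.
        joint = self.proj_x(x) * self.proj_y(y)
        joint = self.dropout(joint)
        # Sum-pool over the rank dimension, then apply power and L2 normalization.
        joint = joint.view(-1, self.out_dim, self.rank).sum(dim=2)
        joint = torch.sign(joint) * torch.sqrt(torch.abs(joint) + 1e-8)
        return F.normalize(joint, dim=1)

# Example: fuse a 256-d image embedding with a 128-d genomic embedding (dims are assumed).
img = torch.randn(8, 256)
gene = torch.randn(8, 128)
fused = FactorizedBilinearFusion(256, 128, out_dim=64)(img, gene)  # shape: (8, 64)
```

In a hierarchical setup of the kind the abstract describes, blocks like this could be stacked so that lower-level fused representations are passed as inputs to higher-level fusion stages.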
Motivation
Accurately predicting cancer survival is crucial for helping clinicians plan appropriate treatments, which largely improves the quality of life of cancer patients and reduces the associated medical costs. Recent advances in survival prediction methods suggest that integrating complementary information from different modalities, e.g., histopathological images and genomic data, plays a key role in enhancing predictive performance. Despite the promising results obtained by existing multimodal methods, the disparate and heterogeneous characteristics of multimodal data cause the so-called modality gap problem, which yields dramatically different modality representations in feature space. Consequently, these detrimental modality gaps make it difficult to comprehensively integrate multimodal information via representation learning, and therefore pose a great challenge to further improving cancer survival prediction.

Results
To solve the above problems, we propose a novel method called cross-aligned multimodal representation learning (CAMR), which generates both modality-invariant and modality-specific representations for more accurate cancer survival prediction. Specifically, a cross-modality representation alignment learning network is introduced to reduce modality gaps by learning modality-invariant representations in a common subspace, which is achieved by aligning the distributions of different modality representations through adversarial training. In addition, a cross-modality fusion module fuses the modality-invariant representations into a unified cross-modality representation for each patient. Meanwhile, CAMR learns modality-specific representations that complement the modality-invariant representations, providing a holistic view of the multimodal data for cancer survival prediction. Comprehensive experimental results demonstrate that CAMR successfully narrows modality gaps and consistently yields better performance than other multimodal survival prediction methods.

Availability
CAMR is freely available at https://github.com/wxq-ustc/CAMR.

Supplementary information
Supplementary data are available at Bioinformatics online.
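The abstract mentions aligning modality distributions through adversarial training, but gives no implementation detail. The sketch below shows one common way such alignment can be realized with a modality discriminator and a gradient-reversal layer; the encoder/discriminator sizes and the choice of gradient reversal (rather than a separate GAN-style min-max loop) are assumptions for illustration, not CAMR's actual implementation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad

# Hypothetical encoders mapping each modality into a shared 32-d subspace.
encoder_img = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 32))
encoder_gene = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
# The discriminator tries to tell which modality a shared-space vector came from;
# the reversed gradient pushes the encoders toward modality-invariant representations.
discriminator = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))

img, gene = torch.randn(8, 256), torch.randn(8, 128)
z_img, z_gene = encoder_img(img), encoder_gene(gene)
z = torch.cat([GradReverse.apply(z_img), GradReverse.apply(z_gene)], dim=0)
labels = torch.cat([torch.zeros(8, dtype=torch.long), torch.ones(8, dtype=torch.long)])
adv_loss = nn.CrossEntropyLoss()(discriminator(z), labels)
adv_loss.backward()  # encoders receive reversed gradients and learn to fool the discriminator
```

In a full pipeline of the kind the abstract describes, this alignment loss would be combined with a survival loss and with separate modality-specific branches so that both invariant and specific information contribute to the prediction.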