2022
DOI: 10.1109/tmi.2021.3108802

Relation-Aware Shared Representation Learning for Cancer Prognosis Analysis With Auxiliary Clinical Variables and Incomplete Multi-Modality Data

Cited by 13 publications (3 citation statements)
References 46 publications
“… Ning et al (2021a) present a multi-constraint latent representation learning method called McLR that achieves promising cancer-prognosis performance by learning a common subspace. In addition, Ning et al (2021b) introduce Relation-aware Shared Representation learning (RaSR), which unifies representation learning and prognosis modeling in a joint framework and further improves cancer prognosis. These studies confirm that combining multimodal data is very helpful for enhancing survival prediction, providing a solid foundation for further research.…”
Section: Introduction
confidence: 99%
“…It has been applied in the medical field to handle multimodal data and infer the health or disease state of patients. For instance, Ning et al used representation learning to induce a latent shared space from multimodal medical data for Alzheimer's disease (AD) diagnosis and cancer prognosis analysis (Ning et al 2021a, Ning et al 2022). Zhou et al proposed a latent-space-inducing ensemble SVM classifier for early dementia diagnosis with multimodal neuroimaging data (Zhou et al 2020).…”
Section: Introduction
confidence: 99%
“…Zheng et al proposed a modality-aware representation learning and graph learning framework for disease prediction with multimodal data (Zheng et al 2022). Solving latent representation learning as an optimization problem (Zhou et al 2020, Ning et al 2021a, Ning et al 2022) not only provides an integrated architecture for simultaneous data fusion and dimension reduction, but also leverages intrinsic data properties, e.g. inter-/intra-modality information and data-task correlation, by formulating them as regularization constraints in the optimization.…”
Section: Introduction
confidence: 99%
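The last excerpt describes learning a shared latent representation across modalities by solving a regularized optimization problem. A minimal sketch of that idea (not the authors' McLR/RaSR formulation — the toy data, variable names, and the plain alternating-least-squares objective below are illustrative assumptions) is joint factorization of each modality matrix X_m into a shared representation H and modality-specific loadings W_m, with a ridge term on H as the regularization constraint:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "modalities" observed for the same n patients (synthetic data,
# generated from a common rank-k representation for illustration only).
n, d1, d2, k = 50, 20, 15, 4
H_true = rng.standard_normal((n, k))
X1 = H_true @ rng.standard_normal((k, d1))
X2 = H_true @ rng.standard_normal((k, d2))

def shared_subspace(Xs, k, lam=1e-2, iters=100):
    """Alternating least squares for
       min_{H, W_m}  sum_m ||X_m - H W_m||_F^2 + lam * ||H||_F^2
    where H (n x k) is the representation shared across modalities."""
    n = Xs[0].shape[0]
    H = rng.standard_normal((n, k))
    for _ in range(iters):
        # With H fixed, each W_m is an ordinary least-squares solution.
        Ws = [np.linalg.lstsq(H, X, rcond=None)[0] for X in Xs]
        # With all W_m fixed, H has a closed-form ridge-regularized update.
        A = sum(W @ W.T for W in Ws) + lam * np.eye(k)
        B = sum(X @ W.T for X, W in zip(Xs, Ws))
        H = B @ np.linalg.inv(A)
    return H, Ws

H, Ws = shared_subspace([X1, X2], k)
rel_err = sum(np.linalg.norm(X - H @ W) ** 2 for X, W in zip([X1, X2], Ws))
rel_err /= sum(np.linalg.norm(X) ** 2 for X in [X1, X2])
print(H.shape, round(rel_err, 4))
```

The single H fuses both modalities while reducing dimension from d1 + d2 to k; the cited methods enrich this basic objective with additional constraints (e.g. inter-/intra-modality relations, task supervision) rather than the plain ridge penalty used here.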