Multi-Constraint Latent Representation Learning for Prognosis Analysis Using Multi-Modal Data
2023
DOI: 10.1109/tnnls.2021.3112194

Cited by 12 publications (8 citation statements)
References 46 publications
“…Specifically, following previous work, we concatenate the predicted risk values from the test sets in 5-fold cross-validation and plot them against the corresponding survival times. Patients in each dataset are divided into low- and high-risk groups using the median of the risk indices as the threshold (Ning et al., 2021a). We find that CAMR successfully divides the LGG patients into low- and high-risk groups with optimal patient stratification (P = 2.2204e−16).…”
Section: Results
Citation type: mentioning (confidence: 99%)
“…The reasons why we chose pre-extracted features for a deep learning model can be listed as follows. First, limited by computational resources, using a CNN to extract features from histopathological images can be expensive or even impractical (Ning et al., 2021a). Second, the features retrieved by a CNN may cause overfitting in small cohorts (Boehm et al., 2022), which may limit further improvement in cancer survival prediction performance.…”
Section: Discussion
Citation type: mentioning (confidence: 99%)
“…Multi-modal data provide richer information than a single modality does and have attracted increasing attention in both natural and medical data processing, for example in visual question answering (VQA) [33], RGB-D object recognition [34], and pathogenomics for prognosis analysis [35]. For most of these tasks, there is a large gap between the two modalities involved, such as {text, image, speech} and {whole slide image, genomic data}.…”
Section: Multi-modal Fusion Models
Citation type: mentioning (confidence: 99%)
“…Features with a mean absolute correlation higher than 0.9 were regarded as redundant, and of each such pair the one with the lower C-index was eliminated. Then, considering that Cox-based models can benefit from a regression constraint on the actual observed survival time (Ning et al., 2021), minimum redundancy maximum relevance (MRMR) for multivariable regression of overall survival time on non-censored cases was used to identify the most time-related features, and the 30 top-ranking features were kept for sequential backward selection. Finally, because 161 patients in the training dataset experienced a death/event and the number of features should not exceed 10% of the number of events (Peduzzi et al., 1996), a subset of 15 features was determined by backward selection for subsequent model construction.…”
Section: Radiomics Feature and Clinical Factor Selection
Citation type: mentioning (confidence: 99%)