2021
DOI: 10.1007/978-3-030-72087-2_33

Modified MobileNet for Patient Survival Prediction

Cited by 9 publications (5 citation statements)
References 14 publications
“…Table 2 shows the comparisons of the SpearmanR and Accuracy performances with other published methods on the BraTS 2020 validation dataset, while Table 2 presents a comparison of the SpearmanR performances with other published methods on the BraTS 2019 and BraTS 2018 validation datasets. From the experimental results, the FLAIR modality showed the highest correlation with the survival prediction output; the SpearmanR and Accuracy comparison is summarized below.

Method  SpearmanR  Accuracy
[24]    0.123      0.345
[26]    0.134      0.483
[27]    0.134      0.517
[28]    0.217      0.517
[29]    0.228      0.414
[30]    0.249      0.517
[25]    0.253      0.414
[31]    0.280      0.450
Ours    0.459      0.517

Figure 5A and Figure 5B present comparisons of SpearmanR and Accuracy performances between using our FLAIR modality and other modalities and combination approaches on the BraTS 2020 validation dataset. In the segmentation phase, our segmented results achieved a dice score of 0.89845 in the whole tumor, 0.77734 in the tumor core, and 0.78957 in the enhancing tumor.…”
Section: B. Dataset and Implementation Details (mentioning)
confidence: 99%
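For context on the two metrics quoted in the excerpt above, here is a minimal sketch, assuming NumPy and SciPy and using dummy survival values, of how SpearmanR and Accuracy are typically computed for the BraTS overall-survival task: rank correlation over predicted survival days, and accuracy over the short/mid/long survivor classes. The 10- and 15-month class boundaries follow the usual BraTS convention, and the days-per-month conversion is an assumption, not a value taken from the cited paper.

```python
import numpy as np
from scipy.stats import spearmanr

# Dummy ground-truth and predicted survival times in days (not real data).
true_days = np.array([150.0, 320.0, 610.0, 95.0, 480.0])
pred_days = np.array([180.0, 300.0, 500.0, 120.0, 450.0])

def to_class(days):
    # BraTS convention: <10 months short-, 10-15 months mid-, >15 months long-survivor.
    # 30.44 days per month is an assumed conversion factor.
    return np.digitize(days, bins=[10 * 30.44, 15 * 30.44])

spearman_r, _ = spearmanr(true_days, pred_days)                 # rank correlation
accuracy = np.mean(to_class(true_days) == to_class(pred_days))  # class accuracy
print(f"SpearmanR: {spearman_r:.3f}  Accuracy: {accuracy:.3f}")
```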
“…A method for forecasting patient survival, which combines MobileNet with a linear survival prediction model (SPM), is outlined in [24]. Different versions of MobileNet are assessed to identify the most effective one, including adapting MobileNet V1 with either frozen or unfrozen layers and modifying MobileNet V2 with either frozen or unfrozen layers connected to the SPM.…”
Section: Related Work (mentioning)
confidence: 99%
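The excerpt above describes four variants: MobileNet V1 or V2 with frozen or unfrozen layers, each connected to a linear survival prediction module (SPM). Below is a minimal sketch of that idea, assuming TensorFlow/Keras with ImageNet weights, 224x224x3 inputs and a three-class survival output; the input size, pooling choice and class count are illustrative assumptions rather than the authors' exact configuration.

```python
import tensorflow as tf

def make_variant(version="v2", freeze=True, num_classes=3):
    # Pick the backbone: MobileNet V1 or V2 with global average pooling on top.
    base_cls = (tf.keras.applications.MobileNet if version == "v1"
                else tf.keras.applications.MobileNetV2)
    backbone = base_cls(input_shape=(224, 224, 3), include_top=False,
                        weights="imagenet", pooling="avg")
    backbone.trainable = not freeze  # "frozen" vs "unfrozen" layers
    # Linear SPM head: a single dense layer over the pooled deep features.
    spm = tf.keras.layers.Dense(num_classes, activation="softmax", name="spm")
    return tf.keras.Sequential([backbone, spm])

# The four variants assessed: {V1, V2} x {frozen, unfrozen}.
variants = {(v, f): make_variant(v, f) for v in ("v1", "v2") for f in (True, False)}
```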
“…Multi-sequence MRIs can comprehensively express the information of tumor lesions by imaging targets with different parameters, in which the effective information should be complementary. Akbar et al (2020) trained a neural network to predict the overall survival for a cohort of 95 GBM patients, based on the imaging features extracted from the T1ce, FLAIR and T2 sequences of MRIs, and the mean square error (MSE) was 78,374.17. Lao et al (2017) constructed a survival prediction model using combined available clinical features, 1403 radiomic features and 98,304 deep features based on 4 sequences of MRIs for 75 GBM patients, and the C-index on the independent verification set was 0.71.…”
Section: Introduction (mentioning)
confidence: 99%
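The excerpt above summarizes prior work in terms of mean square error (MSE) and the concordance index (C-index). For reference, the following is a minimal NumPy-only sketch of both quantities for fully observed (uncensored) survival times; the pairwise C-index here is a simplified illustration, not the evaluation code used by the cited studies.

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean squared error between observed and predicted survival times.
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def c_index(y_true, y_pred):
    # Fraction of comparable patient pairs whose predicted ordering matches the
    # observed ordering; ties in the prediction count as half-concordant.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    concordant, comparable = 0.0, 0
    for i in range(len(y_true)):
        for j in range(i + 1, len(y_true)):
            if y_true[i] == y_true[j]:
                continue  # pair is not comparable
            comparable += 1
            if (y_pred[i] - y_pred[j]) * (y_true[i] - y_true[j]) > 0:
                concordant += 1.0
            elif y_pred[i] == y_pred[j]:
                concordant += 0.5
    return concordant / comparable if comparable else float("nan")

# Dummy example with three uncensored survival times (days).
print(mse([150, 320, 610], [180, 300, 500]),
      c_index([150, 320, 610], [180, 300, 500]))
```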
“…Although deep learning methods have achieved state-of-the-art results in numerous applications in clinical and translational imaging, their efficacy in the OS classification task of brain gliomas is yet to be established. Numerous studies have shown that, in comparison to classification models trained with handcrafted (radiomic) features, deep models reported poor predictive performance on BraTS validation and challenge cohorts (Suter et al, 2018; Guo et al, 2019; Starke et al, 2019; Akbar et al, 2020). For instance, Akbar et al (2020) extracted deep features from 2D multi-parametric MRI scans by employing the modified versions of MobileNet V1 (Howard et al, 2017) and MobileNet V2 (Sandler et al, 2018) architectures. Deep features, augmented with a clinical feature (Age in years), were subsequently fed to a deep learning prediction module called the survival prediction model (SPM).…”
Section: Introduction (mentioning)
confidence: 99%
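The excerpt above notes that the deep image features were augmented with a clinical Age feature before being passed to the survival prediction model (SPM). The sketch below illustrates that feature-augmentation step under stated assumptions: random placeholder arrays standing in for pooled MobileNetV2-sized features and patient ages, and a small dense Keras head standing in for the SPM; none of the shapes, layer sizes or training settings are taken from the paper.

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 95 patients, 1280-dim pooled deep features (MobileNetV2 size),
# age in years, and a short/mid/long survivor class label.
deep_features = np.random.rand(95, 1280).astype("float32")
age_years = np.random.uniform(20, 80, size=(95, 1)).astype("float32")
survival_class = np.random.randint(0, 3, size=(95,))

# Augment the deep features with the clinical Age feature.
x = np.concatenate([deep_features, age_years], axis=1)

# A small dense network standing in for the SPM.
inputs = tf.keras.Input(shape=(x.shape[1],))
hidden = tf.keras.layers.Dense(64, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(3, activation="softmax")(hidden)
spm = tf.keras.Model(inputs, outputs)

spm.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
spm.fit(x, survival_class, epochs=5, batch_size=8, verbose=0)
```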