2022
DOI: 10.1007/978-3-030-98253-9_28

Combining Tumor Segmentation Masks with PET/CT Images and Clinical Data in a Deep Learning Framework for Improved Prognostic Prediction in Head and Neck Squamous Cell Carcinoma


Cited by 7 publications (4 citation statements)
References 19 publications
“…MLL2 models trained on image data only showed better prediction performance for most endpoints than traditional radiomics models in both internal and external test sets. In other relevant studies, CNNs were already used to acquire highly representative image features from PET (with/without CT) images, which showed good prediction ability for OS [21], local failure [22], and PFS [25,45] of OPSCC. Pang et al. proposed an advanced combination of training loss with oversampling to train a 3D ResNet18 based on pre-treatment CT and GTV, which achieved state-of-the-art AUCs of 0.91, 0.78, and 0.70 for DMFS, LRC, and OS prediction in HNC patients, respectively.…”
Section: Discussion
confidence: 99%
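The Pang et al. approach quoted above pairs a 3D ResNet-18 over pre-treatment CT and GTV with a loss/oversampling strategy for imbalanced outcomes. Below is a minimal PyTorch sketch of that general idea, assuming a two-channel volumetric input (CT plus binary GTV mask), a class-weighted BCE loss, and minority-class oversampling; these specifics are illustrative assumptions, not the published implementation.

```python
# Hypothetical sketch: 3D ResNet-18 over CT + GTV-mask channels, trained with a
# class-weighted loss and minority oversampling. Not the authors' code.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler
from torchvision.models.video import r3d_18

model = r3d_18()  # no pretrained weights
# Two input channels: pre-treatment CT volume and the binary GTV mask.
model.stem[0] = nn.Conv3d(2, 64, kernel_size=(3, 7, 7),
                          stride=(1, 2, 2), padding=(1, 3, 3), bias=False)
model.fc = nn.Linear(model.fc.in_features, 1)  # single binary endpoint (e.g. DMFS)

# Toy tensors standing in for (N, 2, D, H, W) volumes and an imbalanced outcome.
x = torch.randn(8, 2, 16, 32, 32)
y = torch.tensor([1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])

# Oversample the minority class so each batch is roughly balanced.
class_counts = torch.bincount(y.long())
sample_weights = 1.0 / class_counts[y.long()].float()
sampler = WeightedRandomSampler(sample_weights, num_samples=len(y), replacement=True)
loader = DataLoader(TensorDataset(x, y), batch_size=4, sampler=sampler)

# Class-weighted BCE as one plausible loss choice for the imbalanced endpoint.
criterion = nn.BCEWithLogitsLoss(pos_weight=class_counts[0].float() / class_counts[1])
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for volumes, labels in loader:
    optimizer.zero_grad()
    logits = model(volumes).squeeze(1)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
```

In practice the loss weighting and sampler probabilities would be derived from the training cohort's event rates rather than toy tensors.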
“…Moreover, the winner of the HECKTOR 2021 challenge [23] used FDG-PET/CT images, GTVt contours, and clinical parameters together to build a DenseNet [24] for progression-free survival (PFS) prediction [25]. Multi-label learning is a technique that predicts all the relevant labels for a given example by exploiting the label correlations and the input feature information [26]. Its advantage over single-label learning is that it can capture the correlations between different labels and exploit them for better prediction performance.…”
Section: Introduction
confidence: 99%
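The statement above summarizes the HECKTOR 2021 winning design: FDG-PET/CT images, GTVt contours, and clinical parameters fused in a DenseNet for PFS prediction. The sketch below shows one plausible late-fusion layout under that description; the lightweight 3D CNN stands in for the actual DenseNet image branch, and the clinical feature count, layer widths, and fusion point are illustrative assumptions.

```python
# Hypothetical sketch (not the published HECKTOR 2021 model): late fusion of a
# 3D image encoder over PET + CT + GTVt-mask channels with a small clinical
# branch, producing a single progression-free-survival risk score.
import torch
import torch.nn as nn

class MultimodalPFSNet(nn.Module):
    def __init__(self, n_clinical: int = 8):
        super().__init__()
        # Lightweight 3D CNN standing in for a DenseNet-style image encoder.
        self.image_encoder = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.BatchNorm3d(64), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        # Small MLP for tabular clinical parameters (age, stage, HPV status, ...).
        self.clinical_encoder = nn.Sequential(nn.Linear(n_clinical, 32), nn.ReLU())
        # Fused head outputs one risk score per patient.
        self.head = nn.Linear(64 + 32, 1)

    def forward(self, volumes: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        img_feat = self.image_encoder(volumes)        # (N, 64)
        clin_feat = self.clinical_encoder(clinical)   # (N, 32)
        return self.head(torch.cat([img_feat, clin_feat], dim=1))

# Channels: [PET, CT, GTVt mask] resampled to a common grid.
volumes = torch.randn(2, 3, 32, 64, 64)
clinical = torch.randn(2, 8)
risk = MultimodalPFSNet()(volumes, clinical)
print(risk.shape)  # torch.Size([2, 1])
```

Concatenating image and clinical embeddings before a single output layer is just one fusion choice; the published model may combine the modalities differently.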
“…For example, while delineation of tumor volumes is both imperative and time-consuming, researchers such as Huang et al. in 2018 [57] and Moe et al. in 2021 [58] have utilized PET/CT-based convolutional neural networks (CNNs) to perform high-quality and precise automated tumor delineations. PET/CT CNNs have also demonstrated utility in improving patient prognostication [59,60] and predicting individual response to chemotherapy [61]. The ability of AI tools to improve both the speed and quality of diagnostic interpretations maximizes the utility of PET/CT for both patients and providers and will continue to do so as advances in AI techniques continue to be made.…”
Section: Recent Advances, Conclusion and Future Directions
confidence: 99%
“…Deep learning (DL) has found wide success in a variety of domains for RT-related medical imaging applications such as target and OAR segmentation (6–11) and outcome prediction (12, 13). One less routinely studied domain is synthetic image generation, i.e., mapping an input image to an output image.…”
Section: Introduction
confidence: 99%