2022
DOI: 10.1148/ryai.210243
Longitudinal Assessment of Posttreatment Diffuse Glioma Tissue Volumes with Three-dimensional Convolutional Neural Networks

Cited by 10 publications (11 citation statements)
References 14 publications
“…Few automatic segmentation papers, all of which aimed to measure longitudinal change in tumor burden to assess treatment response, have reported the performance of models in the post-treatment setting [12][13][14]. Rudie et al [12] focused on proposing a solution for assessing change in tumor size (progressed vs not) by training a model on subtracted images in two consecutive timepoints to detect longitudinal change in a cohort of patients with diffuse gliomas. Although the context of their work was not directly comparable to ours, as part of their analysis the authors trained a baseline model on post-treatment images and reached mean Dice coefficients of 0.85 in edema and 0.71 in active tumor regions.…”
Section: Discussion
confidence: 99%
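The Dice coefficients quoted above (0.85 for edema, 0.71 for active tumor) measure voxel-wise overlap between a predicted and a manual segmentation. Below is a minimal sketch of how such an overlap score can be computed, assuming co-registered binary 3D masks; the array shapes and mask contents are placeholders, not data from the cited paper.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice overlap between two binary 3D segmentation masks."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Placeholder volumes standing in for a model prediction and a manual label
# for a single tissue class (e.g., edema) on one follow-up scan.
pred_mask = np.zeros((64, 64, 64), dtype=bool)
ref_mask = np.zeros((64, 64, 64), dtype=bool)
pred_mask[20:40, 20:40, 20:40] = True
ref_mask[25:45, 25:45, 25:45] = True
print(f"Dice: {dice_coefficient(pred_mask, ref_mask):.3f}")
```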
“…Pathologies ranged from tumors to leukoencephalopathy to chronic small vessel ischemia. The default FLAIR U-Net to detect new lesions was trained on 198 patients with brain tumors with consecutive imaging and manual segmentations delineating areas of change on FLAIR imaging (Rudie et al., 2022). The default enhancement U-Net was trained on 463 MR studies demonstrating abnormally enhancing metastatic tumors (“metastases”) from the University of California, San Francisco (Rudie et al., 2021).…”
Section: Methods
confidence: 99%
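A common way to present consecutive, co-registered FLAIR scans to a 3D segmentation network for change detection is to stack the two timepoints (and optionally their voxel-wise difference) as input channels. The sketch below illustrates only this input construction: the two-layer convolutional model is a stand-in rather than the U-Net used in the cited work, and the z-score normalization is an assumed preprocessing step.

```python
import torch
import torch.nn as nn

def zscore(vol: torch.Tensor) -> torch.Tensor:
    """Simple intensity normalization (assumed preprocessing, not from the paper)."""
    return (vol - vol.mean()) / (vol.std() + 1e-8)

# Co-registered baseline and follow-up FLAIR volumes (placeholders).
baseline = zscore(torch.randn(1, 1, 96, 96, 96))
followup = zscore(torch.randn(1, 1, 96, 96, 96))

# Multi-channel input: both timepoints plus their subtraction image.
x = torch.cat([baseline, followup, followup - baseline], dim=1)  # (1, 3, D, H, W)

# Stand-in 3D network; a real change-segmentation model would be a 3D U-Net.
model = nn.Sequential(
    nn.Conv3d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv3d(16, 2, kernel_size=1),  # background vs. new/changed FLAIR signal
)

with torch.no_grad():
    change_map = model(x).argmax(dim=1)  # per-voxel label of changed tissue
print(change_map.shape)  # torch.Size([1, 96, 96, 96])
```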
“…Training was performed for 30 epochs with a standard cross-entropy loss, Adam optimizer, and learning rate of 10⁻⁵; further details on architecture and training process are described in Duong et al. (2019). To develop MS-specific models, we used MS training data to fine-tune our default disease-invariant FLAIR model (Duong et al., 2019), glioma-specific new FLAIR signal model (Rudie et al., 2022), and metastases-specific enhancement model (Rudie et al., 2021). We compared these fine-tuned models with de novo models trained with the same data.…”
Section: Methods
confidence: 99%
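The fine-tuning recipe quoted here (30 epochs, cross-entropy loss, Adam, learning rate 10⁻⁵) maps onto an ordinary PyTorch training loop. A minimal sketch, assuming a pretrained segmentation network and a data loader yielding (volume, label-map) pairs; `pretrained_model` and `train_loader` are placeholders, not objects from the cited work.

```python
import torch
import torch.nn as nn

def finetune(pretrained_model: nn.Module,
             train_loader,
             device: str = "cuda",
             epochs: int = 30,
             lr: float = 1e-5) -> nn.Module:
    """Fine-tune a pretrained segmentation network with the quoted
    hyperparameters: cross-entropy loss, Adam optimizer, lr=1e-5, 30 epochs."""
    model = pretrained_model.to(device)
    criterion = nn.CrossEntropyLoss()          # standard voxel-wise CE loss
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    model.train()
    for epoch in range(epochs):
        for volumes, labels in train_loader:   # (N, C, D, H, W), (N, D, H, W)
            volumes, labels = volumes.to(device), labels.to(device)
            optimizer.zero_grad()
            logits = model(volumes)            # (N, num_classes, D, H, W)
            loss = criterion(logits, labels)
            loss.backward()
            optimizer.step()
    return model
```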
“…While most published AI algorithms for brain tumor segmentation focus on preoperative MRI, a number of studies have evaluated the application of AI models in posttreatment settings and achieved performance comparable to that of human experts [27,82▪▪,83]. Segmentation of posttreatment imaging findings allows for more precise quantification of small differences between timepoints, improving sensitivity for detection of small interval changes that may otherwise be missed.…”
Section: Posttreatment Segmentation and Surveillance
confidence: 99%
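Quantifying interval change from segmentations amounts to converting voxel counts into physical volumes and differencing across timepoints. A small sketch, assuming binary masks with a known voxel spacing; the spacing and masks below are placeholders, not measurements from any cited study.

```python
import numpy as np

def lesion_volume_ml(mask: np.ndarray, voxel_spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Volume of a binary segmentation mask in milliliters."""
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0  # mm^3 -> mL

# Placeholder masks for the same tissue class at two consecutive timepoints.
mask_t0 = np.zeros((128, 128, 128), dtype=bool)
mask_t1 = np.zeros((128, 128, 128), dtype=bool)
mask_t0[40:60, 40:60, 40:60] = True
mask_t1[40:62, 40:62, 40:62] = True

v0, v1 = lesion_volume_ml(mask_t0), lesion_volume_ml(mask_t1)
print(f"baseline: {v0:.1f} mL, follow-up: {v1:.1f} mL, "
      f"change: {100 * (v1 - v0) / v0:+.1f}%")
```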