PURPOSE
Conventional wisdom has rendered patients with brain metastases ineligible for clinical trials for fear that poor survival could mask the benefit of otherwise promising treatments. Our group previously published the diagnosis-specific Graded Prognostic Assessment (GPA). Updates with larger contemporary cohorts using molecular markers and newly identified prognostic factors have been published. The purposes of this work are to present all the updated indices in a single report to guide treatment choice, stratify research, and define an eligibility quotient to expand eligibility.
METHODS
A multi-institutional database of 6,984 patients with newly diagnosed brain metastases underwent multivariable analyses of prognostic factors and treatments associated with survival for each primary site. Significant factors were used to define the updated GPA. GPAs of 4.0 and 0.0 correlate with the best and worst prognoses, respectively.
RESULTS
Significant prognostic factors varied by diagnosis, and new prognostic factors were identified. Those factors were incorporated into the updated GPA with robust separation (P < .01) between subgroups. Survival has improved but varies widely by GPA: for patients with brain metastases from non-small-cell lung, breast, melanoma, GI, and renal cancer, median survival ranges from 7-47 months, 3-36 months, 5-34 months, 3-17 months, and 4-35 months, respectively.
CONCLUSION
Median survival varies widely, and our ability to estimate survival for patients with brain metastases has improved. The updated GPA (available free at brainmetgpa.com) provides an accurate tool with which to estimate survival, individualize treatment, and stratify clinical trials. Instead of excluding patients with brain metastases, enrollment should be encouraged, and those trials should be stratified by the GPA to ensure they make appropriate comparisons. Furthermore, we recommend expanding eligibility to allow enrollment of patients with previously treated brain metastases who have a 50% or greater probability of an additional year of survival (eligibility quotient > 0.50).
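The eligibility quotient is defined here as the probability that a patient survives at least one additional year. The sketch below is purely illustrative and is not the published GPA model (the validated tool at brainmetgpa.com should be used in practice): it assumes, for the sake of a worked example, an exponential survival curve whose median is taken from the patient's GPA-stratified estimate, and checks the quotient against the proposed 0.50 threshold.

```python
import math

def eligibility_quotient(median_survival_months: float, horizon_months: float = 12.0) -> float:
    """Probability of surviving `horizon_months`, assuming an exponential
    survival curve with the given median (illustrative assumption only;
    not the published GPA model)."""
    # Exponential survival: S(t) = exp(-ln(2) * t / median)
    return math.exp(-math.log(2.0) * horizon_months / median_survival_months)

# Hypothetical patient whose GPA-stratified median survival estimate is 20 months.
eq = eligibility_quotient(20.0)
print(f"Eligibility quotient: {eq:.2f}")      # ~0.66
print("Meets proposed threshold:", eq > 0.50)  # True -> would qualify for enrollment
```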
Positron emission tomography and computed tomography (PET-CT) dual-modality imaging provides critical diagnostic information in modern cancer diagnosis and therapy. Accurate automated tumor delineation is essential for computer-assisted tumor reading and interpretation based on PET-CT. In this paper, we propose a novel approach for the segmentation of lung tumors that combines a fully convolutional network (FCN) based semantic segmentation framework (3D-UNet) with a graph cut based co-segmentation model. First, two deep UNets are trained separately on PET and CT to learn high-level discriminative features and to generate tumor/non-tumor masks and probability maps for the PET and CT images. The two probability maps are then jointly used in a graph cut based co-segmentation model to produce the final tumor segmentation. Comparative experiments on 32 PET-CT scans of lung cancer patients demonstrate the effectiveness of our method.
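A minimal sketch of the second (fusion) stage is given below. It assumes the two trained 3D-UNets have already produced voxel-wise tumor probability maps for PET and CT, and it collapses the co-segmentation model into a single grid-graph cut solved with the PyMaxflow library; the linear weighting of the two probability maps and the constant smoothness term are illustrative assumptions, not the authors' exact energy formulation.

```python
# Requires: pip install numpy PyMaxflow
import numpy as np
import maxflow

def fuse_with_graph_cut(prob_pet, prob_ct, w_pet=0.5, w_ct=0.5, pairwise=1.0):
    """Combine PET and CT tumor probability maps with a single graph cut
    (a simplified stand-in for the paper's co-segmentation model)."""
    prob = w_pet * prob_pet + w_ct * prob_ct            # fused foreground probability
    g = maxflow.Graph[float]()
    nodeids = g.add_grid_nodes(prob.shape)              # one node per voxel
    g.add_grid_edges(nodeids, weights=pairwise, symmetric=True)  # smoothness term
    # Unary terms: strong foreground probability -> strong link to the source.
    g.add_grid_tedges(nodeids, prob, 1.0 - prob)
    g.maxflow()
    # Voxels still connected to the source after the cut are labeled tumor.
    return ~g.get_grid_segments(nodeids)

# Toy example: random arrays stand in for the U-Net probability maps.
rng = np.random.default_rng(0)
pet = rng.random((16, 64, 64))
ct = rng.random((16, 64, 64))
mask = fuse_with_graph_cut(pet, ct)
print(mask.shape, mask.dtype)  # (16, 64, 64) bool
```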
Non-small-cell lung cancer (NSCLC) represents approximately 80-85% of lung cancer diagnoses and is the leading cause of cancer-related death worldwide. Recent studies indicate that image-based radiomics features from positron emission tomography/computed tomography (PET/CT) images have predictive power for NSCLC outcomes. Easily calculated functional features such as the maximum and mean standardized uptake values (SUV) and total lesion glycolysis (TLG) are most commonly used for NSCLC prognostication, but their prognostic value remains controversial. Meanwhile, convolutional neural networks (CNN) are rapidly emerging as a new method for cancer image analysis, with significantly enhanced predictive power compared to hand-crafted radiomics features. Here we show that CNNs trained to perform the tumor segmentation task, with no information other than physician contours, identify a rich set of survival-related image features with remarkable prognostic value. In a retrospective study of pre-treatment PET-CT images of 96 NSCLC patients treated with stereotactic body radiotherapy (SBRT), we found that a CNN segmentation algorithm (U-Net) trained for tumor segmentation in PET and CT images contained features strongly correlated with 2- and 5-year overall and disease-specific survival. The U-Net saw no clinical information (e.g., survival, age, smoking history) other than the images and the corresponding tumor contours provided by physicians. In addition, we observed the same trend when validating the U-Net features against an external data set provided by the Stanford Cancer Institute. Furthermore, through visualization of the U-Net, we found convincing evidence that regions of metastasis and recurrence appear to match the regions in which the U-Net features identified patterns predicting a higher likelihood of death. We anticipate our findings will be a starting point for more sophisticated non-invasive, patient-specific cancer prognosis. For example, the deep-learned PET/CT features can not only predict survival but also visualize high-risk regions within or adjacent to the primary tumor, and hence potentially improve therapeutic outcomes through optimal selection of therapeutic strategy or first-line therapy adjustment.
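The sketch below illustrates one way deep segmentation features could be harvested for prognosis: pool the activations of the deepest encoder layer of a trained segmentation U-Net and fit a simple classifier for 2-year survival. The `bottleneck` attribute name and the logistic-regression classifier are assumptions chosen for illustration, not the authors' published pipeline.

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression

def extract_bottleneck_features(model, volumes):
    """Return one globally pooled feature vector per input volume,
    captured from the (hypothetically named) `bottleneck` layer."""
    feats, captured = [], {}

    def hook(_module, _inp, out):
        captured["f"] = out

    handle = model.bottleneck.register_forward_hook(hook)  # hypothetical layer name
    model.eval()
    with torch.no_grad():
        for vol in volumes:                    # vol: (1, C, D, H, W) tensor
            model(vol)
            f = captured["f"]                  # (1, F, d, h, w) bottleneck activations
            feats.append(f.mean(dim=(2, 3, 4)).squeeze(0).cpu().numpy())  # global average pool
    handle.remove()
    return np.stack(feats)

# X: pooled deep features, y: 2-year overall survival labels (1 = alive at 2 years)
# X = extract_bottleneck_features(trained_unet, pet_ct_volumes)
# clf = LogisticRegression(max_iter=1000).fit(X, y)
# print("Predicted 2-year survival probability:", clf.predict_proba(X[:1])[0, 1])
```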
Purpose
To investigate the use and efficiency of 3D deep learning fully convolutional networks (DFCN) for simultaneous tumor cosegmentation on dual-modality non-small cell lung cancer (NSCLC) positron emission tomography (PET)-computed tomography (CT) images.
Methods
We used DFCN cosegmentation for NSCLC tumors in PET-CT images, considering both the CT and PET information. The proposed DFCN-based cosegmentation method consists of two coupled three-dimensional (3D)-UNets with an encoder-decoder architecture, which communicate with each other to share complementary information between PET and CT. The weighted average of sensitivity and positive predictive value (denoted Score), Dice similarity coefficients (DSC), and average symmetric surface distances were used to assess the performance of the proposed approach on 60 pairs of PET/CTs. A Simultaneous Truth and Performance Level Estimation (STAPLE) consensus of three expert physicians' delineations was used as the reference. The proposed DFCN framework was compared with three graph-based cosegmentation methods.
Results
Strong agreement with the STAPLE references was observed for the proposed DFCN cosegmentation on PET-CT images. The average DSCs on CT and PET are 0.861 ± 0.037 and 0.828 ± 0.087, respectively, using DFCN, compared with 0.638 ± 0.165 and 0.643 ± 0.141, respectively, using the graph-based cosegmentation method. The proposed DFCN cosegmentation using both PET and CT also outperforms the deep learning method using either PET or CT alone.
Conclusions
The proposed DFCN cosegmentation outperforms existing graph-based segmentation methods and shows promise for further integration with quantitative multimodality imaging tools in clinical trials.
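The cross-modality communication idea and the DSC evaluation metric can be sketched as follows. This is a deliberately simplified illustration, two coupled 3D encoder stages that exchange feature maps, not the full published DFCN architecture; channel counts and the concatenation-based coupling are assumptions.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

class CoupledEncoders(nn.Module):
    """Two parallel 3D encoder branches (PET and CT) that exchange features,
    a minimal stand-in for the coupled-UNet communication in the DFCN."""
    def __init__(self, base_ch=16):
        super().__init__()
        self.pet1, self.ct1 = conv_block(1, base_ch), conv_block(1, base_ch)
        # Each second-stage block sees its own features concatenated with the
        # other modality's features (the "complementary information" exchange).
        self.pet2 = conv_block(2 * base_ch, 2 * base_ch)
        self.ct2 = conv_block(2 * base_ch, 2 * base_ch)

    def forward(self, pet, ct):
        p1, c1 = self.pet1(pet), self.ct1(ct)
        p2 = self.pet2(torch.cat([p1, c1], dim=1))   # PET branch receives CT features
        c2 = self.ct2(torch.cat([c1, p1], dim=1))    # CT branch receives PET features
        return p2, c2

def dice_coefficient(pred, target, eps=1e-6):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, target = pred.float(), target.float()
    intersection = (pred * target).sum()
    return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)
```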