PET with fluorine 18 (18F) fluorodeoxyglucose (FDG) has a substantial impact on the diagnosis of oncological diseases and on the associated clinical decisions. 18F-FDG uptake highlights regions of high glucose metabolism that include both pathologic and physiologic processes (1,2). 18F-FDG PET/CT adds value to the initial diagnosis, detection of recurrent tumor, and evaluation of response to therapy in lung cancer and lymphoma (3-6). In lung cancer or lymphoma staging, 18F-FDG PET images are interpreted by trained nuclear medicine readers to help identify foci positive for 18F-FDG uptake (hereafter referred to as 18F-FDG-positive foci) that are suspicious for tumor. This classification of 18F-FDG-positive foci is particularly challenging in cases of low tumor avidity, unusual tumor sites, motion and attenuation artifacts, and the wide range of 18F-FDG uptake related to inflammation, infection, or physiologic glucose consumption (7-9). Whereas the intra- and interobserver interpretation of 18F-FDG PET/CT findings has a high level of agreement in studies involving single sites and experienced readers for lymphoma, lung, and head and neck cancer (10-12), there remains an unmet need to assist the reader in analyzing these examinations more efficiently. Convolutional neural networks (CNNs) are a branch of machine learning that is finding applications in 18F-FDG PET/CT image analysis.
Total metabolic tumor volume (TMTV), calculated from 18F-FDG PET/CT baseline studies, is a prognostic factor in diffuse large B-cell lymphoma (DLBCL) whose measurement requires the segmentation of all malignant foci throughout the body. No consensus currently exists regarding the most accurate approach for such segmentation. Further, all methods still require extensive manual input from an experienced reader. We examined whether an artificial intelligence-based method could estimate TMTV with a comparable prognostic value to TMTV measured by experts. Methods: Baseline 18F-FDG PET/CT scans of 301 DLBCL patients from the REMARC trial (NCT01122472) were retrospectively analyzed using a prototype software (PET Assisted Reporting System [PARS]). An automated whole-body high-uptake segmentation algorithm identified all 3-dimensional regions of interest (ROIs) with increased tracer uptake. The resulting ROIs were processed using a convolutional neural network trained on an independent cohort and classified as nonsuspicious or suspicious uptake. The PARS-based TMTV (TMTV-PARS) was estimated as the sum of the volumes of ROIs classified as suspicious uptake. The reference TMTV (TMTV-REF) was measured by 2 experienced readers using independent semiautomatic software. The TMTV-PARS was compared with the TMTV-REF in terms of prognostic value for progression-free survival (PFS) and overall survival (OS). Results: TMTV-PARS was significantly correlated with the TMTV-REF (ρ = 0.76; P < 0.001). Using PARS, an average of 24 regions per subject with increased tracer uptake was identified, and an average of 20 regions per subject was correctly identified as nonsuspicious or suspicious, yielding 85% classification accuracy, 80% sensitivity, and 88% specificity, compared with the TMTV-REF regions.
Both TMTV results were predictive of PFS (hazard ratio, 2.3 and 2.6 for TMTV-PARS and TMTV-REF, respectively; P < 0.001) and OS (hazard ratio, 2.8 and 3.7 for TMTV-PARS and TMTV-REF, respectively; P < 0.001). Conclusion: TMTV-PARS was consistent with that obtained by experts and displayed a significant prognostic value for PFS and OS in DLBCL patients. Classification of high-uptake regions using deep learning for rapidly discarding physiologic uptake may considerably simplify TMTV estimation, reduce observer variability, and facilitate the use of TMTV as a predictive factor in DLBCL patients.
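The abstract above defines TMTV-PARS as the sum of the volumes of the ROIs that the network classifies as suspicious. A minimal sketch of that final aggregation step, assuming binary voxel masks per ROI (the function name and the toy masks are hypothetical illustrations, not part of PARS):

```python
import numpy as np

def estimate_tmtv(roi_masks, suspicious_flags, voxel_volume_ml):
    """Sum the volumes of the ROIs flagged as suspicious.

    roi_masks: list of boolean 3-D arrays, one per segmented ROI
    suspicious_flags: list of bools from the CNN classifier
    voxel_volume_ml: volume of one voxel in mL
    """
    return sum(mask.sum() * voxel_volume_ml
               for mask, flag in zip(roi_masks, suspicious_flags) if flag)

# Toy example: two ROIs, only the first classified as suspicious.
roi_a = np.ones((2, 2, 2), dtype=bool)   # 8 voxels
roi_b = np.ones((1, 2, 2), dtype=bool)   # 4 voxels
tmtv = estimate_tmtv([roi_a, roi_b], [True, False], voxel_volume_ml=0.5)
print(tmtv)  # → 4.0 (mL)
```

The nonsuspicious ROI contributes nothing, which is how discarding physiologic uptake directly reduces the volume estimate.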
Purpose: In PSMA-ligand PET/CT imaging, standardized evaluation frameworks and image-derived parameters are increasingly used to support prostate cancer staging. Clinical applicability remains challenging wherever manual measurements of numerous suspected lesions are required. Deep learning methods are promising for automated image analysis, typically requiring extensive expert-annotated image datasets to reach sufficient accuracy. We developed a deep learning method to support image-based staging, investigating the use of training information from two radiotracers. Methods: In 173 subjects imaged with 68Ga-PSMA-11 PET/CT, divided into development (n = 121) and test (n = 52) sets, we trained and evaluated a convolutional neural network to both classify sites of elevated tracer uptake as nonsuspicious or suspicious for cancer and assign them an anatomical location. We evaluated training strategies to leverage information from a larger dataset of 18F-FDG PET/CT images and expert annotations, including transfer learning and combined training encoding the tracer type as input to the network. We assessed the agreement between the N and M stage assigned based on the network annotations and expert annotations, according to the PROMISE miTNM framework. Results: In the development set, including 18F-FDG training data improved classification performance in 4-fold cross-validation. In the test set, compared to expert assessment, training with 18F-FDG data and the development set yielded 80.4% average precision [confidence interval (CI): 71.1–87.8] for identification of suspicious uptake sites, 77% (CI: 70.0–83.4) accuracy for anatomical location classification of suspicious findings, 81% agreement for identification of regional lymph node involvement, and 77% agreement for identification of metastatic stage.
Conclusion: The evaluated algorithm showed good agreement with expert assessment for identification and anatomical location classification of suspicious uptake sites in whole-body 68Ga-PSMA-11 PET/CT. With restricted PSMA-ligand data available, the use of training examples from a different radiotracer improved performance. The investigated methods are promising for enabling efficient assessment of cancer stage and tumor burden.
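The combined-training strategy above encodes the tracer type as an input to the network so that one model can learn from both 68Ga-PSMA-11 and 18F-FDG examples. One common way to do this is to append a one-hot tracer code to the per-site feature vector; the sketch below is a hypothetical illustration of that idea, not the authors' implementation:

```python
import numpy as np

# Hypothetical tracer vocabulary for combined training.
TRACERS = {"18F-FDG": 0, "68Ga-PSMA-11": 1}

def encode_example(image_features, tracer):
    """Append a one-hot tracer code to per-site image features so a single
    network sees which radiotracer produced each training example."""
    one_hot = np.zeros(len(TRACERS))
    one_hot[TRACERS[tracer]] = 1.0
    return np.concatenate([image_features, one_hot])

# Toy per-site features (e.g., uptake statistics), tracer code appended last.
x = encode_example(np.array([0.7, 4.2, 1.1]), "68Ga-PSMA-11")
print(x.tolist())  # → [0.7, 4.2, 1.1, 0.0, 1.0]
```

Because the tracer code is just another input dimension, the same weights are shared across both datasets while the network can still condition its decision on the tracer.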
Introduction: Our aim was to evaluate the performance, in clinical research and in clinical routine, of a research prototype called the Positron Emission Tomography (PET) Assisted Reporting System (PARS; Siemens Healthineers), based on a convolutional neural network (CNN) and designed to detect suspected cancer sites on fluorine-18 fluorodeoxyglucose (18F-FDG) PET/computed tomography (CT). Method: We retrospectively studied two cohorts of patients. The first cohort consisted of research-based patients who underwent PET scans as part of the initial workup for diffuse large B-cell lymphoma (DLBCL). The second cohort consisted of patients who underwent PET scans as part of the evaluation of miscellaneous cancers in clinical routine. In both cohorts, we assessed the correlation between manually and automatically segmented total metabolic tumor volumes (TMTVs), and the overlap between both segmentations (Dice score). For the research cohort, we also compared the prognostic value for progression-free survival (PFS) and overall survival (OS) of manually and automatically obtained TMTVs. Results: For the first cohort (research cohort), data from 119 patients were retrospectively analyzed. The median Dice score between automatic and manual segmentations was 0.65. The intraclass correlation coefficient between automatically and manually obtained TMTVs was 0.68. Both TMTV results were predictive of PFS (hazard ratio: 2.1 and 3.3 for automatically based and manually based TMTVs, respectively) and OS (hazard ratio: 2.4 and 3.1 for automatically based and manually based TMTVs, respectively). For the second cohort (routine cohort), data from 430 patients were retrospectively analyzed. The median Dice score between automatic and manual segmentations was 0.48. The intraclass correlation coefficient between automatically and manually obtained TMTVs was 0.61. Conclusion: The TMTVs determined for the research cohort remained predictive of OS and PFS for DLBCL.
However, the segmentations and TMTVs determined automatically by the algorithm need to be verified and, in some cases, corrected to match the manual segmentations.
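The Dice score used above to quantify the overlap between automatic and manual segmentations is defined as 2|A∩B| / (|A| + |B|) for two binary masks A and B. A minimal sketch of its computation, with illustrative toy masks:

```python
import numpy as np

def dice_score(mask_a, mask_b):
    """Dice overlap between two binary segmentations: 2|A∩B| / (|A| + |B|).
    Returns 1.0 by convention when both masks are empty."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 1-D masks: each covers 2 voxels, sharing 1 voxel.
auto   = np.array([1, 1, 0, 0], dtype=bool)
manual = np.array([0, 1, 1, 0], dtype=bool)
print(dice_score(auto, manual))  # → 0.5
```

A median Dice of 0.65 (research cohort) versus 0.48 (routine cohort) therefore means that, voxel for voxel, the automatic masks overlapped the manual ones noticeably less in routine practice.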