BACKGROUND AND PURPOSE: Cortical amyloid quantification on PET by using the standardized uptake value ratio is valuable for research studies and clinical trials in Alzheimer disease. However, it is resource intensive, requiring co-registered MR imaging data and specialized segmentation software. We investigated the use of deep learning to automatically quantify the standardized uptake value ratio and used this for classification. MATERIALS AND METHODS: Using the Alzheimer's Disease Neuroimaging Initiative dataset, we identified 2582 18F-florbetapir PET scans, which were separated into positive and negative cases by using a standardized uptake value ratio threshold of 1.1. We trained convolutional neural networks (ResNet-50 and ResNet-152) to predict the standardized uptake value ratio and classify amyloid status. We assessed performance based on network depth, number of PET input slices, and use of ImageNet pretraining. We also assessed human performance with 3 readers in a subset of 100 randomly selected cases. RESULTS: We found that 48% of cases were amyloid positive. The best performance was seen for ResNet-50 using regression before classification, 3 input PET slices, and pretraining, with a standardized uptake value ratio root-mean-square error of 0.054, corresponding to 95.1% correct amyloid status prediction. Using more than 3 slices did not improve performance, but ImageNet initialization did. The best trained network was more accurate than the human readers (96% versus a mean of 88%). CONCLUSIONS: Deep learning algorithms can estimate the standardized uptake value ratio and use this to classify 18F-florbetapir PET scans. Such methods have promise to automate this laborious calculation, enabling rapid quantitative measurements in settings without extensive image processing manpower and expertise.
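The evaluation described above reduces to two simple computations: the root-mean-square error between predicted and reference SUVRs, and agreement of the binary amyloid status obtained by thresholding each at 1.1. A minimal sketch of that scoring step (the function names and the toy SUVR values are illustrative, not from the study):

```python
import math

# Amyloid status from an SUVR value, using the paper's threshold of 1.1
# (values above the threshold are considered amyloid positive).
def amyloid_positive(suvr, threshold=1.1):
    return suvr > threshold

# Root-mean-square error between network-predicted and reference SUVRs.
def rmse(predicted, reference):
    return math.sqrt(
        sum((p - r) ** 2 for p, r in zip(predicted, reference)) / len(predicted)
    )

# Toy example with made-up SUVR values (not real ADNI data):
reference = [0.95, 1.05, 1.18, 1.32]
predicted = [0.98, 1.08, 1.15, 1.28]

error = rmse(predicted, reference)
# Classification agreement: thresholding the prediction versus
# thresholding the reference value.
agreement = sum(
    amyloid_positive(p) == amyloid_positive(r)
    for p, r in zip(predicted, reference)
) / len(predicted)
```

In the paper's "regression before classification" setup, the network is trained to predict the continuous SUVR and the binary label is derived afterwards by this thresholding, rather than training a classifier directly.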
We investigate the spatial contrast sensitivity of modern convolutional neural networks (CNNs) and a linear support vector machine (SVM). To measure performance, we compare the CNN contrast sensitivity across a range of patterns with the contrast sensitivity of a Bayesian ideal observer (IO) with the signal known exactly and the noise known statistically. A ResNet-18 reaches optimal performance for harmonic patterns, as well as for several classes of real-world signals, including faces. For these stimuli, the CNN substantially outperforms the SVM. We further analyze the case in which the signal might appear in one of multiple locations and find that the CNN's spatial sensitivity continues to match that of the IO. However, the CNN's sensitivity is far below optimal at detecting certain complex texture patterns. These measurements show that CNN spatial contrast sensitivity differs markedly between spatial patterns. This variation in spatial contrast sensitivity may be a significant factor influencing the performance of an imaging system designed to detect low-contrast spatial patterns.
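For the signal-known-exactly case in white Gaussian noise, the ideal observer's likelihood-ratio test reduces to correlating the image with the known signal template (a matched filter), and its sensitivity is d' = ‖s‖/σ. A minimal sketch of this standard construction, using a harmonic grating as the template (the grating parameters and trial counts are illustrative assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Signal-known-exactly ideal observer in white Gaussian noise: the
# likelihood-ratio decision variable is the dot product of the image
# with the known signal template (matched filter).
def ideal_observer_response(image, template):
    return float(np.dot(image.ravel(), template.ravel()))

# A simple harmonic (sinusoidal grating) template at low contrast.
size, sigma = 32, 1.0
x = np.linspace(0, 2 * np.pi, size)
template = 0.2 * np.sin(4 * x)[None, :] * np.ones((size, 1))

# Simulate signal-present and signal-absent trials.
n_trials = 500
present = [ideal_observer_response(template + rng.normal(0, sigma, (size, size)), template)
           for _ in range(n_trials)]
absent = [ideal_observer_response(rng.normal(0, sigma, (size, size)), template)
          for _ in range(n_trials)]

# For white noise the ideal observer's sensitivity is d' = ||s|| / sigma;
# the empirical estimate from the simulated responses should match it.
d_prime_theory = float(np.linalg.norm(template)) / sigma
d_prime_empirical = (np.mean(present) - np.mean(absent)) / np.std(absent)
```

Comparing a CNN's measured contrast threshold against this d' benchmark, pattern by pattern, is what quantifies how far the network falls below optimal for a given stimulus class.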
Introduction: In Alzheimer's disease, asymptomatic patients may have amyloid deposition, but predicting their rate of progression remains a substantial challenge, with implications for clinical trial enrollment. Here, we demonstrate an artificial intelligence approach that uses baseline clinical information and images to predict changes in quantitative biomarkers of brain pathology on future images. Methods: Patients from the Alzheimer's Disease Neuroimaging Initiative (ADNI) who underwent positron emission tomography (PET) with the amyloid radiotracer 18F-AV45 (florbetapir) were included. We identified important baseline PET image features using a deep convolutional neural network based on ResNet. These were combined with eight clinical, demographic, and genetic markers using a gradient-boosted decision tree (GBDT) algorithm to predict the future quantitative standardized uptake value ratio (SUVR), an established biomarker of brain amyloid deposition. We used this model to better identify individuals with the highest positive change in amyloid deposition on future images and compared this with typical inclusion criteria for clinical trials. We also compared the model's performance with other methods, such as multivariate linear regression and GBDT without imaging features. Findings: Using 2577 PET scans from 1224 unique individuals, we showed that the GBDT with deep image features was significantly more accurate than the other approaches, reaching a root-mean-square error of 0.0339 ± 0.0027 for future SUVR prediction. Using this approach, we could identify individuals in the highest 10% of SUVR accumulation at rates 2- to 4-fold higher than by random selection or existing inclusion criteria. Discussion: Predicting quantitative biomarkers on future images using machine learning methods that combine deep image features with clinical data may allow better targeting of treatments or enrollment in clinical trials.
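The enrichment comparison in the Findings can be stated concretely: enroll the top fraction of individuals by predicted SUVR change, measure what share of them are truly in the top 10% of observed accumulation, and divide by the 10% hit rate a random pick achieves in expectation. A minimal sketch of that calculation (the function and the toy values are illustrative, not the study's data or code):

```python
# Fold enrichment of a predicted ranking over random selection at
# capturing the true top-`fraction` accumulators.
def enrichment(predicted_change, observed_change, fraction=0.10):
    n = len(observed_change)
    k = max(1, int(n * fraction))
    # Indices of the true fastest accumulators (top fraction by observed change).
    true_top = set(sorted(range(n), key=lambda i: observed_change[i], reverse=True)[:k])
    # Indices the model would enroll (top fraction by predicted change).
    picked = sorted(range(n), key=lambda i: predicted_change[i], reverse=True)[:k]
    hit_rate = sum(i in true_top for i in picked) / k
    # A random pick captures `fraction` of the true top in expectation.
    return hit_rate / fraction

# Toy data (not ADNI values): a well-ranked prediction gives high enrichment.
observed = [0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.10]
predicted = [0.02, 0.01, 0.03, 0.05, 0.04, 0.06, 0.08, 0.07, 0.09, 0.11]
fold = enrichment(predicted, observed)
```

The paper's reported 2- to 4-fold figures correspond to this ratio computed against both random selection and conventional trial inclusion criteria.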