Purpose: This work attempts to decode the discriminative information in dopamine transporter (DAT) imaging using deep learning for the differential diagnosis of parkinsonism.

Methods: This study involved 1017 subjects who underwent DAT PET imaging ([11C]CFT), including 43 healthy subjects and 974 parkinsonian patients with idiopathic Parkinson’s disease (IPD), multiple system atrophy (MSA), or progressive supranuclear palsy (PSP). We developed a 3D deep convolutional neural network to learn distinguishable DAT features for the differential diagnosis of parkinsonism. A full-gradient saliency map approach was employed to investigate the functional basis of the network’s decision mechanism. Furthermore, deep-learning-guided radiomics features and quantitative analysis were compared with their conventional counterparts to further interpret the performance of deep learning.

Results: The proposed network achieved areas under the curve of 0.953 (sensitivity 87.7%, specificity 93.2%), 0.948 (sensitivity 93.7%, specificity 97.5%), and 0.900 (sensitivity 81.5%, specificity 93.7%) in cross-validation, together with sensitivities of 90.7%, 84.1%, and 78.6% and specificities of 88.4%, 97.5%, and 93.3% in the blind test for the differential diagnosis of IPD, MSA, and PSP, respectively. The saliency maps demonstrated that the areas contributing most to the diagnosis were located in parkinsonism-related regions, e.g., the putamen, caudate, and midbrain. The deep-learning-guided binding ratios showed significant differences among the IPD, MSA, and PSP groups (P < 0.001), whereas the conventional putamen and caudate binding ratios showed no significant difference between IPD and MSA (P = 0.24 and P = 0.30). Furthermore, compared with conventional radiomics features, on average more than 78.1% additional deep-learning-guided radiomics features showed significant differences among IPD, MSA, and PSP.
Conclusion: This study suggested that the developed deep neural network can decode in-depth information from DAT imaging and showed potential to assist in the differential diagnosis of parkinsonism. The functional regions supporting the diagnostic decision were generally consistent with known parkinsonian pathology but provided more specific guidance for feature selection and quantitative analysis.
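The conventional quantitative analysis compared above is based on regional binding ratios. As an illustrative sketch only (not the authors' exact pipeline), a specific binding ratio for a striatal region such as the putamen can be computed from the mean uptake in a region of interest and in a reference region; the mask layout and reference choice here are assumptions for demonstration:

```python
import numpy as np

def specific_binding_ratio(image, roi_mask, ref_mask):
    """Specific binding ratio: (mean ROI uptake - mean reference uptake)
    divided by mean reference uptake. `image` is a voxel intensity array,
    `roi_mask` and `ref_mask` are boolean masks of the same shape."""
    roi_mean = image[roi_mask].mean()
    ref_mean = image[ref_mask].mean()
    return (roi_mean - ref_mean) / ref_mean

# Synthetic 1D example: ROI voxels have uptake 3.0, reference voxels 1.0.
image = np.array([3.0, 3.0, 1.0, 1.0])
roi_mask = np.array([True, True, False, False])
ref_mask = ~roi_mask
sbr = specific_binding_ratio(image, roi_mask, ref_mask)  # (3.0 - 1.0) / 1.0 = 2.0
```

A lower putamen or caudate binding ratio relative to the reference region is the conventional marker of presynaptic dopaminergic deficit that the deep-learning-guided ratios in the study were compared against.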
Background: Human brown adipose tissue (BAT), mostly located in the cervical/supraclavicular region, is a promising target in obesity treatment. Magnetic resonance imaging (MRI) allows for quantitative mapping of fat content. However, due to the complex heterogeneous distribution of BAT, it has been difficult to establish a standardized segmentation routine based on magnetic resonance (MR) images. Here, we suggest using a multi-modal deep neural network to detect the supraclavicular fat pocket.

Methods: A total of 50 healthy subjects [median age/body mass index (BMI) = 36 years/24.3 kg/m²] underwent MRI scans of the neck region on a 3 T Ingenia scanner (Philips Healthcare, Best, Netherlands). Manual segmentations following fixed rules for anatomical borders were used as ground-truth labels. A deep-learning-based method (termed BAT-Net) was proposed for the segmentation of BAT on MRI scans. It jointly leverages two-dimensional (2D) and three-dimensional (3D) convolutional neural network (CNN) architectures to efficiently encode the multi-modal and 3D context information from multi-modal MRI scans of the supraclavicular region. We compared the performance of BAT-Net to that of 2D U-Net and 3D U-Net. For 2D U-Net, we analyzed the performance difference of implementing it in three different planes, denoted as 2D U-Net (axial), 2D U-Net (coronal), and 2D U-Net (sagittal).

Results: The proposed model achieved an average Dice similarity coefficient (DSC) of 0.878 with a standard deviation of 0.020. The volume segmented by the network was smaller than the ground-truth labels by 9.20 mL on average, with a mean absolute increase in proton density fat fraction (PDFF) inside the segmented regions of 1.19 percentage points. BAT-Net outperformed all implemented 2D U-Nets and the 3D U-Net, with average DSC enhancement ranging from 0.016 to 0.023.

Conclusions: The current work integrates a deep neural network-based segmentation into the automated
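Segmentation overlap in this comparison is reported as the Dice similarity coefficient (DSC), which measures the agreement between a predicted mask and the ground-truth mask. A minimal NumPy-based sketch of the standard DSC formula (for illustration; not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |pred AND truth| / (|pred| + |truth|).
    Returns 1.0 by convention when both masks are empty."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy example: one of two predicted voxels overlaps the ground truth,
# so DSC = 2 * 1 / (2 + 2) = 0.5.
dsc = dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0])
```

A DSC of 1.0 indicates perfect overlap and 0.0 indicates no overlap, so the reported average of 0.878 reflects high but imperfect agreement with the manual segmentations.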