2021
DOI: 10.1016/j.mri.2021.02.001

A 3D densely connected convolution neural network with connection-wise attention mechanism for Alzheimer's disease classification

Cited by 113 publications (54 citation statements)
References 15 publications
“…Because the same NA-ADNI dataset was used, the prediction performance was basically comparable, despite differences such as the follow-up periods used to define sMCI and pMCI, the input information, and the evaluation methods (validation, k-fold cross-validation, and k-fold cross-validation plus test). For the case of using only brain images as input, our model (M2 in Supplementary Table 2), based only on T1-weighted MRI images (accuracy: 78%, AUC: 85%, sensitivity: 78%, specificity: 78%), performed better than a previous deep neural network (DNN) [8] (accuracy: 75%, AUC: not available [NA], sensitivity: 75%, specificity: 75%), was equivalent to a DNN [9] that used a much easier task definition (accuracy: 79%, AUC: NA, sensitivity: 75%, specificity: 82%), but was inferior to a DNN [7] using MRI and PET images (accuracy: 83%, AUC: NA, sensitivity: 80%, specificity: 84%) and a DNN [10] using mixed groups of cognitively normal (CN) + sMCI and pMCI + AD for training (accuracy: 83%, AUC: 88%, sensitivity: 76%, specificity: 87%). For the case of using not only images but also non-image information, the performance of our model (M5 in Supplementary Table 2) (accuracy: 88%, AUC: 95%, sensitivity: 88%, specificity: 88%) was better than that of state-of-the-art models using an SVM [13], a DNN [16], and a random forest [17] (accuracy: 85%–87%, AUC: 87%–90%).…”
Section: Results (citation type: mentioning)
confidence: 99%
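The evaluation protocol quoted above (k-fold cross-validation reporting accuracy, AUC, sensitivity, and specificity) can be illustrated with a minimal sketch. This is not the cited authors' code: the logistic-regression classifier and the feature matrix `X` / label vector `y` are hypothetical stand-ins for their DNN and imaging-derived features.

```python
# Minimal sketch of stratified k-fold evaluation with the four metrics
# reported in the quoted comparison. The classifier is a placeholder.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.linear_model import LogisticRegression  # stand-in, not the authors' DNN

def evaluate_kfold(X, y, n_splits=5, seed=0):
    """Return mean accuracy, AUC, sensitivity, specificity over k folds."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        prob = clf.predict_proba(X[test_idx])[:, 1]
        pred = (prob >= 0.5).astype(int)
        tn, fp, fn, tp = confusion_matrix(y[test_idx], pred).ravel()
        scores.append((
            (tp + tn) / (tp + tn + fp + fn),   # accuracy
            roc_auc_score(y[test_idx], prob),  # AUC
            tp / (tp + fn),                    # sensitivity (e.g., pMCI recall)
            tn / (tn + fp),                    # specificity (e.g., sMCI recall)
        ))
    return np.mean(scores, axis=0)
```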
“…Fifth, a 3D multiscale attention mechanism is introduced in the dense unit. Zhang et al. [73] used a 3D densely connected convolutional neural network with a connection-wise attention mechanism (CAM-CNN) to extract multilevel features from brain MRI for the classification of Alzheimer's disease and mild cognitive impairment. The dense connections differ across unit levels; the attention mechanism introduced in each 3D dense unit generates attention maps that sum the transformed hierarchical MRI features into a more compact high-level representation. The model achieves high classification accuracy, and its classification performance is among the highest reported.…”
Section: Development of DenseNet (citation type: mentioning)
confidence: 99%
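A minimal PyTorch sketch of the idea described in this statement: a 3D dense unit whose concatenated connection outputs are re-weighted by a learned attention vector before being passed on. The layer count, channel sizes, and the pooling-plus-sigmoid attention head are illustrative assumptions, not the published CAM-CNN architecture.

```python
# Hedged sketch: 3D dense unit with connection-wise attention over its
# concatenated feature maps. Sizes are illustrative only.
import torch
import torch.nn as nn

class DenseUnit3D(nn.Module):
    def __init__(self, in_ch, growth=16, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm3d(ch), nn.ReLU(inplace=True),
                nn.Conv3d(ch, growth, kernel_size=3, padding=1)))
            ch += growth
        # one attention weight per dense connection (global pooling + sigmoid)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(ch, n_layers + 1), nn.Sigmoid())
        self.out_channels = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        cat = torch.cat(feats, dim=1)
        w = self.attn(cat)  # (batch, number of dense connections)
        # scale each connection's feature map by its attention weight
        weighted = [f * w[:, i].view(-1, 1, 1, 1, 1) for i, f in enumerate(feats)]
        return torch.cat(weighted, dim=1)
```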
“…Experiments conducted in the study demonstrated that the bidirectional encoder with logistic regression outperformed the existing benchmarks. Zhang et al. [100] proposed a densely connected CNN with an attention mechanism for AD diagnosis using structural MR images. The densely connected CNN extracted multiple features from the input data, and the attention mechanism fused the features from different layers into more complex features, on which the final classification was performed.…”
Section: DL for AD Diagnosis (citation type: mentioning)
confidence: 99%
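The fusion step described here, attention combining features from different layers into a compact representation for the final classifier, could look roughly like the following sketch. The stage channel sizes, pooling, and two-class head are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: features from several network stages are pooled,
# concatenated, re-weighted by an attention vector, and classified.
import torch
import torch.nn as nn

class AttentionFusionClassifier(nn.Module):
    def __init__(self, stage_channels=(32, 64, 128), n_classes=2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        total = sum(stage_channels)
        self.attn = nn.Sequential(nn.Linear(total, total), nn.Sigmoid())
        self.classifier = nn.Linear(total, n_classes)

    def forward(self, stage_feats):
        # stage_feats: list of 5D tensors, one per network stage
        pooled = [self.pool(f).flatten(1) for f in stage_feats]  # (B, C_i) each
        fused = torch.cat(pooled, dim=1)                         # (B, sum C_i)
        fused = fused * self.attn(fused)                         # attention re-weighting
        return self.classifier(fused)

# Usage on dummy multi-level features from a 3D MRI volume:
feats = [torch.randn(2, c, 8, 8, 8) for c in (32, 64, 128)]
logits = AttentionFusionClassifier()(feats)  # shape (2, 2)
```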