2021
DOI: 10.3389/fnins.2020.626154

Deep Learning-Based Classification and Voxel-Based Visualization of Frontotemporal Dementia and Alzheimer’s Disease

Abstract: Frontotemporal dementia (FTD) and Alzheimer’s disease (AD) have overlapping symptoms, and accurate differential diagnosis is important for targeted intervention and treatment. Previous studies suggest that deep learning (DL) techniques have the potential to solve the differential diagnosis problem of FTD, AD, and normal controls (NCs), but their performance is still unclear. In addition, existing DL-assisted diagnostic studies still rely on hypothesis-based, expert-level preprocessing. On the one hand, it impo…

Citations: Cited by 41 publications (43 citation statements)
References: 33 publications
“…This highlights a lack of methodological transparency across the considered research, especially considering the many different subjective choices required during model construction that can have misleading effects on the overall performance of the system. Of the studies that did make code available, no paper provided detailed tutorials of preprocessing and model construction; understandably, the quality and thoroughness of reported code is another important aspect of reproducibility and transparency which is not solved by making code available [55,57,58,59,60,48]. Additionally, most papers considered were from journals, meaning that the majority of studies underwent some form of peer review (43/55)…”
Section: Transparency | Citation type: mentioning | Confidence: 99%
“…The 9 aforementioned studies did not provide any information on implementation details. Of the remaining 10 studies, 5 provided code but, as previously mentioned, did not include detailed walkthroughs of their interpretation pipelines [57,48,58,55,60]. A subsection of studies also specifically underscored that extensive, expert-driven preprocessing is not required with deep learning, and almost all studies alluded to this fact in their introductions [60,69,79,51,52].…”
Section: Interpretability | Citation type: mentioning | Confidence: 99%