2018
DOI: 10.1109/jbhi.2017.2655720
Multimodal Neuroimaging Feature Learning With Multimodal Stacked Deep Polynomial Networks for Diagnosis of Alzheimer's Disease

Abstract: The accurate diagnosis of Alzheimer's disease (AD) and its early stage, i.e., mild cognitive impairment, is essential for timely treatment and possible delay of AD. Fusion of multimodal neuroimaging data, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), has shown its effectiveness for AD diagnosis. The deep polynomial network (DPN) is a recently proposed deep learning algorithm, which performs well on both large-scale and small-size datasets. In this study, a multimodal stacked…

Cited by 353 publications (145 citation statements) | References 45 publications
“…Discovering connections between cognitive traits and non-invasive biomedical imaging could prove to be important for further understanding the neural underpinnings of cognitive development. Recently deep learning models trained on brain MRI data have shown promising results in the diagnosis of Alzheimer's disease [10], prediction of age [11,12] and classification of overall survival in brain tumor patients [8]. In this work we compared the performance of deep learning techniques with that of classic machine learning techniques trained on hand-crafted features in the prediction of Gf from MRI-based features.…”
Section: Introduction (mentioning)
confidence: 99%
“…[24,25] is an improved algorithm based on canonical correlation analysis (CCA). The existing feature fusion algorithms [12][13][14][15][26] use the neural network or sparse representation to jointly represent multimodal data, which suppresses the relationship between the modalities. CCA [20][21][22] can effectively model the relationship between multimodal data, but it cannot deal with the redundant information in the data.…”
Section: Low-rank (mentioning)
confidence: 99%
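The excerpt above contrasts joint-representation fusion with CCA-based fusion, which explicitly models the correlation between modalities. As a hedged illustration (not the cited authors' implementation), the following sketch uses scikit-learn's CCA to project two synthetic feature matrices, standing in for MRI- and PET-derived features, into a shared correlated subspace and then concatenates the projections as a fused representation; the array names, dimensions, and the choice of 10 components are assumptions made for this example.

# Minimal sketch of CCA-based multimodal feature fusion (illustrative only;
# the feature matrices below are synthetic stand-ins for MRI/PET features).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_subjects = 100
X_mri = rng.normal(size=(n_subjects, 90))  # hypothetical region-wise MRI features
X_pet = rng.normal(size=(n_subjects, 90))  # hypothetical region-wise PET features

# Learn projections that maximize the correlation between the two modalities.
cca = CCA(n_components=10)
cca.fit(X_mri, X_pet)

# Project both modalities into the shared subspace and concatenate the
# projections to obtain a fused feature vector per subject.
Z_mri, Z_pet = cca.transform(X_mri, X_pet)
fused = np.concatenate([Z_mri, Z_pet], axis=1)
print(fused.shape)  # (100, 20)

As the excerpt notes, such a projection captures inter-modality correlation, but it does not by itself remove redundant information within each modality.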
“…Using multimodal data to detect Alzheimer's disease has become a research hotspot. A double-layer polynomial network method [12] is proposed. First, the first-layer polynomial network extracts high-level semantic features from the MRI and PET data; then, the second-layer polynomial network is employed for multimodal data fusion.…”
Section: Introduction (mentioning)
confidence: 99%
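To make the two-stage structure described in this excerpt concrete, below is a minimal sketch of the general fusion topology: one modality-specific network per imaging modality in the first stage, and a second-stage network applied to the concatenated outputs. Plain linear-ReLU blocks in PyTorch stand in for the polynomial-network layers, so this illustrates only the topology, not the cited deep polynomial network algorithm itself; the class name, layer sizes, and input dimensions are assumptions.

# Sketch of a two-stage multimodal fusion topology (stand-in layers, not the
# actual deep polynomial network of the cited paper; sizes are assumptions).
import torch
import torch.nn as nn

class TwoStageFusionNet(nn.Module):
    def __init__(self, mri_dim=90, pet_dim=90, hidden_dim=64, n_classes=2):
        super().__init__()
        # Stage 1: modality-specific feature learning (one branch per modality).
        self.mri_branch = nn.Sequential(nn.Linear(mri_dim, hidden_dim), nn.ReLU())
        self.pet_branch = nn.Sequential(nn.Linear(pet_dim, hidden_dim), nn.ReLU())
        # Stage 2: fusion network over the concatenated modality features.
        self.fusion = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, n_classes),
        )

    def forward(self, x_mri, x_pet):
        h = torch.cat([self.mri_branch(x_mri), self.pet_branch(x_pet)], dim=1)
        return self.fusion(h)

# Example forward pass on random stand-in data.
net = TwoStageFusionNet()
logits = net(torch.randn(8, 90), torch.randn(8, 90))
print(logits.shape)  # torch.Size([8, 2])

In the cited design, the second stage plays the same role as the fusion sub-network here: it learns a joint representation from the modality-specific features rather than from the raw inputs.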
“…By reviewing the literature on the diagnosis of Alzheimer's disease, it was concluded that brain data and images with a low sample size and high dimensionality are among the most important challenges in such studies, and new research can be carried out in this area [12][13][14][15]. Most of the recently used methods are deep learning methods, including deep sparse multi-task learning [16], stacked auto-encoders [17], sparse regression models [18], etc., each attempting to overcome the aforementioned challenges.…”
Section: Related Work (mentioning)
confidence: 99%