2020
DOI: 10.1016/j.media.2019.101625

Multi-modal neuroimaging feature selection with consistent metric constraint for diagnosis of Alzheimer's disease

Abstract: The accurate diagnosis of Alzheimer's disease (AD) and its early stage, e.g., mild cognitive impairment (MCI), is essential for timely treatment or possible intervention to slow down AD progression. Recent studies have demonstrated that multiple neuroimaging and biological measures contain complementary information for diagnosis and prognosis. Therefore, information fusion strategies with multi-modal neuroimaging data, such as voxel-based measures extracted from structural MRI (VBM-MRI) and fluorodeoxyglucose …

Cited by 120 publications (59 citation statements) | References 61 publications
“…Previous studies of multimodal data fusion can be divided into two categories: data-level fusion (focusing on how to combine data from different modalities) and decision-level fusion (focusing on ensembling classifiers). Deep neural network architectures allow a third form of multimodal fusion, i.e., the intermediate fusion of learned representations, offering a truly flexible approach to multimodal fusion (Hao et al, 2020). As deep-learning architectures learn a hierarchical representation of the underlying data across their hidden layers, learned representations from different modalities can be fused at various levels of abstraction.…”
Section: Discussion and Future Direction
confidence: 99%
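The intermediate-fusion idea quoted above can be illustrated with a minimal sketch: each modality passes through its own encoder, and the learned hidden representations (not the raw inputs or the final predictions) are concatenated before a shared classifier head. All dimensions and weight names here are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """One hidden layer per modality: linear map followed by ReLU."""
    return np.maximum(x @ w, 0.0)

# Hypothetical sizes: 90 VBM-MRI features and 90 FDG-PET features
# for a batch of 4 subjects.
x_mri = rng.standard_normal((4, 90))
x_pet = rng.standard_normal((4, 90))

w_mri = rng.standard_normal((90, 16))  # modality-specific encoder weights
w_pet = rng.standard_normal((90, 16))

# Intermediate fusion: concatenate the *learned representations*,
# in contrast to data-level fusion (concatenating raw features)
# or decision-level fusion (ensembling per-modality classifiers).
h = np.concatenate([encode(x_mri, w_mri), encode(x_pet, w_pet)], axis=1)

w_head = rng.standard_normal((32, 2))  # shared classifier head
logits = h @ w_head
print(h.shape, logits.shape)  # (4, 32) (4, 2)
```

Because fusion happens on hidden activations, the two branches can have different input sizes or depths; only the concatenated representation width must match the head.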
“…Interestingly, Wee et al (2019) constructed cortical thickness graphs using sMRI data and input them into the popular graph CNN. sMRI is one of the common neuroimaging tools for disease diagnosis; however, many studies have shown that multi-modality data are more effective than single-modality data for EMCI classification (Amoroso et al, 2018; Cheng et al, 2019; Forouzannezhad et al, 2020; Hao et al, 2020; Lei et al, 2020), and that different neuroimaging data may provide complementary information that is beneficial for diagnosing EMCI. In addition, more and more researchers have turned their attention from structural changes in the brain to functional changes.…”
Section: Discussion
confidence: 99%
“…There is no conflict of interest. Additional Files: ADdata FDG.csv. This CSV file contains the pre-processed FDG features from [16]. Labels 1, 3, 4, and 5 correspond to NC, E-MCI, L-MCI, and AD subjects, respectively.…”
Section: Sinkhorn Distance
confidence: 99%
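The label scheme described in this statement skips code 2, which is easy to mishandle when decoding. A minimal sketch of a defensive mapping, assuming only the code-to-group correspondence quoted above (the CSV file itself is not needed here):

```python
# Mapping quoted in the citation statement; note code 2 is unused.
LABEL_MAP = {1: "NC", 3: "E-MCI", 4: "L-MCI", 5: "AD"}

def decode_labels(labels):
    """Map numeric diagnosis codes to group names, failing loudly
    (KeyError) on unexpected codes such as the unused code 2."""
    return [LABEL_MAP[code] for code in labels]

print(decode_labels([1, 3, 4, 5]))  # ['NC', 'E-MCI', 'L-MCI', 'AD']
```

Failing on unknown codes rather than silently skipping them keeps label errors from propagating into downstream classification experiments.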
“…The feature extraction process includes image registration, region-of-interest selection, and feature quantification. We specifically use the morphometry features extracted from VBM-MRI and FDG-PET images in a previous study [16], and denote the two classes of features as VBM and FDG features. The details of feature extraction can be found in [16].…”
Section: Data Collection and Preprocessing
confidence: 99%