2022
DOI: 10.1007/s00521-022-07501-0

Deep learning and multimodal feature fusion for the aided diagnosis of Alzheimer's disease

Cited by 20 publications (4 citation statements)
References 39 publications
“…First, individual differences between different subjects are not fully considered in our scheme, which is an important issue for FBN analysis (Folville et al., 2020; Schabdach et al., 2022) and an important direction for our future improvements. Besides, since multi-modal data is taking an increasingly important place in brain analysis (Jia and Lao, 2022; Zhao et al., 2022), the performance of employing one-modal data is limited. Compared with the widely used multi-modal data model (Yu et al., 2021), how to adapt the scheme to the multi-modal data type is a key point in our future work.…”
Section: Discussion
confidence: 99%
“…The first is to model the 4D information by applying the model directly to the time series of fMRI volumes, in order to avoid the information loss associated with applying CNNs to FC networks or mReHo transformation images; the second is to overcome the overfitting problem that occurs with traditional CNNs by replacing them with CapsNet. From the comparison results, it has been observed that the proposed model and the studies introduced in [7] and [22], which model the 4D information from the fMRI volume sequence directly, were able to achieve better accuracies than the models introduced in [10], [12], [14], [15] (which apply CNNs to FC networks) and [17] (which applies CNNs to mReHo images). Moreover, despite CapsNet being designed to achieve better generalization than traditional CNNs, the method that applies CNNs with LSTM directly to fMRI volumes [22] still achieves better accuracy than our model.…”
Section: Comparison With State-of-the-art Methods
confidence: 96%
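For readers unfamiliar with the family of methods this statement compares, the following is a minimal sketch of what "applying a CNN with LSTM directly to fMRI volumes" can look like: a 3D CNN encodes each volume of the 4D scan, and an LSTM models the temporal order of the per-volume features. All layer sizes and shapes here are illustrative assumptions, not the architecture of the cited papers.

```python
# Minimal sketch of a 3D-CNN + LSTM over an fMRI volume sequence.
# Layer sizes and input shapes are illustrative, not the cited design.
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    def __init__(self, hidden_size=128, num_classes=2):
        super().__init__()
        # 3D CNN encodes each fMRI volume (1 channel) into a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # -> (batch, 16, 1, 1, 1)
        )
        # LSTM models the temporal order of the per-volume features.
        self.lstm = nn.LSTM(input_size=16, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, time, 1, D, H, W) — one 4D fMRI scan per subject.
        b, t = x.shape[:2]
        feats = self.encoder(x.flatten(0, 1)).flatten(1)  # (b*t, 16)
        feats = feats.view(b, t, -1)                      # (b, t, 16)
        _, (h, _) = self.lstm(feats)                      # last hidden state
        return self.head(h[-1])                           # (b, num_classes)

# Toy usage: 2 subjects, 10 time points, 32^3 volumes.
logits = CNNLSTMClassifier()(torch.randn(2, 10, 1, 32, 32, 32))
```

The appeal of this design, as the statement notes, is that no information is discarded up front: the network sees the raw 4D data instead of a precomputed FC network or a single summary volume.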
“…They applied CNNs to dynamic FC (dFC) networks to extract temporal features, based on which an LSTM was used to model the sequential information of the dFC networks. To perform the spatiotemporal analysis while avoiding the complexity of the 4D information, yet still modeling the 4D features, Jia et al. used the mReHo transformation [16], which represents the whole time series of volumes as a single volume [17]. In that study, a 3D-PCANet was trained on the mReHo volumes as the classifier model.…”
Section: Related Work and Contribution
confidence: 99%
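To make the time-to-volume reduction mentioned above concrete: ReHo-style transforms collapse the time axis by scoring, for each voxel, how concordantly its neighbourhood's time series rank-order together (Kendall's coefficient of concordance). Below is a simplified sketch of that idea; masking, smoothing, and the "m" (mean-normalisation) step of mReHo are omitted, so this illustrates the principle rather than the cited implementation.

```python
# Simplified ReHo-style reduction: collapse a 4D fMRI series (T, D, H, W)
# into one 3D volume via Kendall's coefficient of concordance (KCC)
# over each voxel's 27-neighbourhood. Not the cited mReHo pipeline.
import numpy as np
from scipy.stats import rankdata

def reho_volume(ts):                      # ts: (T, D, H, W)
    T, D, H, W = ts.shape
    ranks = rankdata(ts, axis=0)          # rank each voxel's time series
    out = np.zeros((D, H, W))
    for z in range(1, D - 1):
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                # 27 neighbouring ranked time series: shape (T, 27)
                nb = ranks[:, z-1:z+2, y-1:y+2, x-1:x+2].reshape(T, -1)
                K = nb.shape[1]
                Ri = nb.sum(axis=1)       # rank sums per time point
                S = ((Ri - Ri.mean()) ** 2).sum()
                out[z, y, x] = 12 * S / (K**2 * (T**3 - T))  # Kendall's W
    return out                            # single 3D volume in [0, 1]

vol = reho_volume(np.random.rand(40, 16, 16, 16))
```

The resulting single volume is what makes a purely 3D classifier such as 3D-PCANet applicable, at the cost of discarding the explicit temporal dynamics.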
“…The number of deaths from Alzheimer's disease in 2020 increased by 15,925 compared to the 5 years before 2023, and 44,729 more deaths were recorded for all dementias, including Alzheimer's disease (Chua, 2023). In the past few years, researchers have employed traditional machine learning (ML) techniques as a multi-step pipeline of pre-processing (Wen et al., 2020), feature extraction (Rathore et al., 2017), feature selection (Balaji et al., 2023), feature fusion (Jia and Lao, 2022), and classification (Tanveer et al., 2020). Classification is the final step, in which each object is assigned a label by either a supervised or unsupervised ML technique (Bondi et al., 2017).…”
Section: Introduction
confidence: 99%
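A compressed illustration of such a traditional ML pipeline on tabular neuroimaging features is sketched below; the data, estimator choices, and dimensions are placeholders, not those of the cited works, and multimodal feature fusion is reduced here to a comment since it would simply concatenate feature blocks before selection.

```python
# Illustrative pre-processing -> feature selection -> classification
# pipeline on tabular imaging features. All choices are placeholders.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X = np.random.rand(60, 200)          # 60 subjects x 200 extracted features
y = np.random.randint(0, 2, 60)      # AD vs. healthy-control labels
# Feature fusion would concatenate per-modality blocks into X here.

pipe = Pipeline([
    ("scale", StandardScaler()),                 # pre-processing
    ("select", SelectKBest(f_classif, k=20)),    # feature selection
    ("clf", SVC(kernel="linear")),               # supervised classification
])
print(cross_val_score(pipe, X, y, cv=5).mean())  # chance-level on noise
```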