Accurate diagnosis of Alzheimer's disease (AD) and its early stage, mild cognitive impairment (MCI), is essential for timely treatment and possible delay of AD progression. Fusion of multimodal neuroimaging data, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), has proven effective for AD diagnosis. The deep polynomial network (DPN) is a recently proposed deep learning algorithm that performs well on both large-scale and small-sample datasets. In this study, a multimodal stacked DPN (MM-SDPN) algorithm, consisting of two stages of SDPNs, is proposed to fuse and learn feature representations from multimodal neuroimaging data for AD diagnosis. Specifically, two SDPNs first learn high-level features from MRI and PET, respectively; their outputs are then fed to a third SDPN to fuse the multimodal neuroimaging information. The proposed MM-SDPN algorithm is applied to the ADNI dataset for both binary and multiclass classification tasks. Experimental results indicate that MM-SDPN outperforms state-of-the-art multimodal feature-learning-based algorithms for AD diagnosis.
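The two-stage fusion described above can be sketched in a few lines. This is a toy illustration only: the `sdpn_features` function below (a random projection followed by a degree-2 polynomial map) is a hypothetical stand-in for a trained SDPN, and all dimensions are invented for the example.

```python
import numpy as np

def sdpn_features(x, out_dim, seed):
    """Toy stand-in for a stacked deep polynomial network:
    a random projection followed by a polynomial nonlinearity."""
    w = np.random.default_rng(seed).standard_normal((x.shape[1], out_dim))
    h = x @ w
    return h + 0.5 * h ** 2  # degree-2 polynomial activation (illustrative)

rng = np.random.default_rng(0)
mri = rng.standard_normal((8, 90))  # 8 subjects, 90 MRI ROI features (hypothetical)
pet = rng.standard_normal((8, 90))  # matching PET features

# Stage 1: learn high-level features for each modality separately
f_mri = sdpn_features(mri, 32, seed=1)
f_pet = sdpn_features(pet, 32, seed=2)

# Stage 2: feed the concatenated modality features to another SDPN for fusion
fused = sdpn_features(np.concatenate([f_mri, f_pet], axis=1), 16, seed=3)
print(fused.shape)  # (8, 16)
```

The fused representation would then be passed to a classifier for the binary or multiclass diagnosis task.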
Spatial resolution is one of the key parameters of magnetic resonance imaging (MRI). Image super-resolution (SR) offers a simple alternative approach to improving the spatial resolution of MRI. Convolutional neural network (CNN)-based SR algorithms have achieved state-of-the-art performance, and the global residual learning (GRL) strategy is now commonly used because of its effectiveness in learning image details for SR. However, very deep networks often lose part of the image details due to the degradation problem. In this work, we propose a novel residual learning-based SR algorithm for MRI that combines multi-scale GRL with shallow-network-block-based local residual learning (LRL). The proposed LRL module effectively captures high-frequency details by learning local residuals. One simulated MRI dataset and two real MRI datasets are used to evaluate our algorithm. The experimental results show that the proposed SR algorithm outperforms all of the compared CNN-based SR algorithms in this work.
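The interplay of global and local residual learning can be sketched as follows. This is a minimal conceptual sketch, not the paper's network: the "blocks" are toy element-wise operations standing in for convolutional layers, and the weights are arbitrary.

```python
import numpy as np

def lrl_block(x, w):
    # Local residual learning: a shallow block adds a learned correction
    # back onto its own input via a short skip connection.
    return x + np.tanh(w * x)

def super_resolve(lr_upsampled, weights):
    h = lr_upsampled
    for w in weights:               # stack of shallow LRL blocks
        h = lrl_block(h, w)
    residual = 0.1 * h              # final mapping to the predicted detail image
    return lr_upsampled + residual  # GRL: add the residual to the interpolated input

x = np.linspace(0.0, 1.0, 16)       # stand-in for an interpolated low-resolution image
sr = super_resolve(x, weights=[0.5, 0.8, 1.2])
print(sr.shape)  # (16,)
```

The key point the sketch shows is that the network only has to predict the (high-frequency) residual on top of the interpolated input, rather than the full image, which is what makes GRL effective for detail recovery.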
Spatial resolution is a critical imaging parameter in magnetic resonance imaging (MRI). Image super-resolution (SR) is an effective and cost-efficient alternative technique for improving the spatial resolution of MR images. Over the past several years, convolutional neural network (CNN)-based SR methods have achieved state-of-the-art performance. However, CNNs with very deep structures usually suffer from degradation and diminishing feature reuse, which make network training difficult and degrade the transmission of fine details needed for SR. To address these problems, we propose a progressive wide residual network with a fixed skip connection (FSCWRN) for MR image SR, which combines global residual learning with shallow-network-based local residual learning. The strategy of progressively widening the network replaces simply deepening it, which partially alleviates the aforementioned problems, while the fixed skip connection propagates rich high-frequency local details from a fixed shallow layer to the subsequent sub-networks. Experimental results on one simulated MR image database and three real MR image databases demonstrate the effectiveness of the proposed FSCWRN SR algorithm, which achieves improved reconstruction performance compared with other algorithms.
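The fixed-skip-connection idea can be illustrated with a toy forward pass: features from one fixed shallow layer are re-injected into every subsequent block, so later blocks never lose access to high-frequency detail. The element-wise "blocks" below are hypothetical stand-ins for the paper's wide residual CNN blocks.

```python
import numpy as np

def wide_block(x, w):
    return np.tanh(w * x)  # toy stand-in for a wide residual block

def fscwrn_forward(lr_upsampled, weights):
    shallow = wide_block(lr_upsampled, weights[0])  # fixed shallow layer
    h = shallow
    for w in weights[1:]:
        # fixed skip connection: every later block also receives
        # the shallow layer's high-frequency features
        h = wide_block(h + shallow, w)
    return lr_upsampled + h  # global residual learning

x = np.linspace(-1.0, 1.0, 16)  # stand-in for an interpolated low-resolution image
sr = fscwrn_forward(x, weights=[1.0, 0.7, 0.9, 1.1])
print(sr.shape)  # (16,)
```

Without the `+ shallow` term, each block would only see the (possibly degraded) output of its predecessor; the fixed skip connection is what keeps early detail flowing to deep blocks.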
Normalizing all images in a large dataset into a common space is a key step in many clinical and research studies, e.g., of brain development, maturation, and aging. Recently, groupwise registration has been developed to align all images simultaneously without selecting a particular image as the template, thus potentially avoiding registration bias. However, most conventional groupwise registration methods do not exploit the data distribution during registration, so their performance can suffer from large inter-subject variations in the dataset. To address this issue, we propose to model the distribution of all images on the image manifold with a graph, in which each node represents an image and each edge represents the geodesic pathway between two nodes (images). The procedure of warping all images toward their population center then becomes a dynamic shrinking of the graph nodes along their edges until all nodes are close to each other. In this way, the topology of the image distribution on the manifold is preserved throughout the groupwise registration. More importantly, by modeling the distribution of all images with a graph, we can reduce registration error, since each image is warped only according to nearby images with similar structures in the graph. We evaluated the proposed groupwise registration method on both infant and adult datasets, comparing it with conventional group-mean-based registration and the ABSORB method. All experimental results show that our method achieves better registration accuracy and robustness.
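The dynamic graph-shrinking idea can be demonstrated with a 1-D toy: each "image" is a scalar point that moves only toward its k nearest neighbors (its graph edges), so the group contracts toward a common center while the neighborhood structure is respected. All parameters here are hypothetical and the points stand in for images on the manifold.

```python
import numpy as np

def shrink_to_center(points, k=2, step=0.5, iters=50):
    pts = np.asarray(points, dtype=float)
    for _ in range(iters):
        new = pts.copy()
        for i, p in enumerate(pts):
            # graph neighbors = the k closest other points
            idx = np.argsort(np.abs(pts - p))[1:k + 1]
            new[i] = p + step * (pts[idx].mean() - p)  # warp only toward neighbors
        pts = new
    return pts

start = [0.0, 1.0, 2.0, 10.0, 11.0]  # two clusters of "images"
final = shrink_to_center(start)
# the spread of the group shrinks as all points approach a common center
print(np.ptp(final) < np.ptp(start))  # True
```

Because each point moves only toward structurally similar neighbors rather than directly toward the global mean, an outlying cluster is pulled in gradually through intermediate points, which is the intuition behind the reduced registration error.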