Purpose: To perform automatic assessment of dementia severity using a deep learning framework applied to resting-state functional magnetic resonance imaging (rs-fMRI) data.
Method: We divided 133 Alzheimer's disease (AD) patients with clinical dementia rating (CDR) scores from 0.5 to 3 into two groups based on dementia severity; the very mild/mild (CDR: 0.5–1) and moderate-to-severe (CDR: 2–3) groups comprised 77 and 56 subjects, respectively. We extracted functional connectivity features from rs-fMRI using independent component analysis (ICA) and performed automated severity classification with three-dimensional convolutional neural networks (3D-CNNs) based on deep learning.
Results: The mean balanced classification accuracy was 0.923 ± 0.042 (p < 0.001), with a specificity of 0.946 ± 0.019 and a sensitivity of 0.896 ± 0.077. The rs-fMRI data indicated that the medial frontal, sensorimotor, executive control, dorsal attention, and visual-related networks correlated most strongly with dementia severity.
Conclusions: Our novel CDR-based classification using rs-fMRI is an acceptable objective severity indicator. In the absence of trained neuropsychologists, dementia severity can be objectively and accurately classified using a 3D deep learning framework with rs-fMRI independent components.
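The balanced accuracy reported above averages sensitivity and specificity, which matters here because the two severity groups are unbalanced (77 vs. 56 subjects). A minimal numpy sketch of this metric (not the authors' evaluation code):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity and specificity, suited to unbalanced
    group sizes such as the 77/56 split above.
    Label convention (assumed): 1 = moderate/severe, 0 = very mild/mild."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sensitivity = tp / (tp + fn)  # moderate/severe correctly flagged
    specificity = tn / (tn + fp)  # very mild/mild correctly flagged
    return (sensitivity + specificity) / 2.0
```

Note that the reported mean sensitivity and specificity imply (0.896 + 0.946) / 2 ≈ 0.921, consistent with the reported mean balanced accuracy of 0.923 (the small gap comes from averaging over cross-validation folds).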
This paper reviews the second AIM learned ISP challenge and describes the proposed solutions and results. The participating teams solved a real-world RAW-to-RGB mapping problem, where the goal was to map original low-quality RAW images captured by the Huawei P20 device to the same photos obtained with the Canon 5D DSLR camera. The task embraced a number of complex computer vision subtasks, such as image demosaicing, denoising, white balancing, color and contrast correction, and demoireing. The target metric used in this challenge combined fidelity scores (PSNR and SSIM) with the solutions' perceptual quality measured in a user study. The proposed solutions significantly improved the baseline results, defining the state of the art for practical image signal processing pipeline modeling. * A. Ignatov and R. Timofte ({andrey,radu.timofte}@vision.ee.ethz.ch, ETH Zurich) are the challenge organizers, while the other authors participated in the challenge. Appendix A lists the authors' teams and affiliations. AIM 2020 webpage: https://data.vision.ee.ethz.ch/cvl/aim20/
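Of the two fidelity scores in the challenge metric, PSNR is the simpler to state; a short numpy sketch of the standard definition (the exact challenge scoring script may differ):

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a reference image
    (e.g. the Canon 5D target) and a reconstructed RGB image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher is better: every halving of the RMS error adds about 6 dB.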
The analysis of fundus photographs is a useful diagnostic tool for diverse retinal diseases such as diabetic retinopathy and hypertensive retinopathy. Specifically, the morphology of retinal vessels is used as a classification measure for retinal diseases, and the automatic processing of fundus images has been widely investigated for diagnostic efficiency. Automatic segmentation of retinal vessels is essential and must precede a computer-aided diagnosis system. In this study, we propose a method that performs patch-based pixel-wise segmentation with convolutional neural networks (CNNs) in fundus images for automatic retinal vessel segmentation. We construct a network composed of several modules, each containing convolutional layers and upsampling layers. The feature maps produced by these modules are concatenated into a single feature map to capture coarse and fine vessel structures simultaneously. The concatenated feature map is followed by a convolutional layer that performs a pixel-wise prediction. The performance of the proposed method is measured on the DRIVE dataset. We show that our method is comparable to other state-of-the-art algorithms.
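The patch-based pixel-wise setup above classifies each pixel from a small window centered on it. A minimal numpy sketch of that patch extraction step, assuming reflect padding at the image borders (the authors' padding choice is not stated):

```python
import numpy as np

def extract_patches(image, patch_size=5):
    """Extract one square patch per pixel for patch-based pixel-wise
    classification. Borders are reflect-padded (an assumption)."""
    pad = patch_size // 2
    padded = np.pad(image, pad, mode="reflect")
    h, w = image.shape
    patches = np.empty((h * w, patch_size, patch_size), dtype=image.dtype)
    for i in range(h):
        for j in range(w):
            # window centered on pixel (i, j) of the original image
            patches[i * w + j] = padded[i:i + patch_size, j:j + patch_size]
    return patches
```

Each patch is then fed to the CNN, which predicts whether its center pixel belongs to a vessel.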
Reconstructing an RGB image from RAW data obtained with a mobile device involves a number of image signal processing (ISP) tasks, such as demosaicing, denoising, etc. Deep neural networks have shown promising results over hand-crafted ISP algorithms in solving these tasks separately, or even replacing the whole reconstruction process with one model. Here, we propose PyNET-CA, an end-to-end mobile ISP deep learning algorithm for RAW-to-RGB reconstruction. The model enhances PyNET, a recently proposed state-of-the-art model for mobile ISP, and improves its performance with channel attention and a subpixel reconstruction module. We demonstrate the performance of the proposed method with comparative experiments and results from the AIM 2020 learned smartphone ISP challenge. The source code of our implementation is available at https://github.com/egyptdj/skyb-aim2020-public
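The two additions named above are standard building blocks; numpy sketches of their usual forms follow (squeeze-and-excitation-style gating and pixel shuffle — the exact PyNET-CA layer definitions are in the linked repository, and these are illustrative assumptions, not that code):

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Channel attention in the squeeze-and-excitation style (assumed form):
    global average pool -> two dense layers -> per-channel sigmoid gate.
    x: (C, H, W); w1: (C, C_mid); w2: (C_mid, C)."""
    squeeze = x.mean(axis=(1, 2))                 # (C,) global pooling
    hidden = np.maximum(squeeze @ w1, 0.0)        # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # sigmoid, (C,)
    return x * gate[:, None, None]                # rescale each channel

def pixel_shuffle(x, r):
    """Subpixel reconstruction: rearrange (C*r^2, H, W) -> (C, H*r, W*r),
    trading channels for spatial resolution."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)                # (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```

Pixel shuffle upsamples without the checkerboard artifacts of transposed convolution, which is why it is a common choice for the final upscaling stage of ISP models.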
Background: Chest X‐ray (CXR) images are commonly used to show the internal structure of the human body without invasive intervention. The quality of CXR is an important factor, as it affects the accuracy of a clinical diagnosis. Unfortunately, it is difficult to always obtain good-quality CXR scans due to noise and scatter.
Purpose: Recently, wavelet directional CycleGAN (WavCycleGAN) has shown promising results in image restoration tasks by removing noise and artifacts without sacrificing high‐frequency components of the input image. Unfortunately, WavCycleGAN directly reconstructs wavelet directional images, which requires a wavelet transform in both the training and test phases, resulting in additional processing steps and unnatural artifacts originating from the wavelet-domain image. In addition, WavCycleGAN can only process artifact‐related subbands, so it is difficult to apply when different levels of artifacts are present in all subbands. To address this, we present a novel unsupervised CXR image restoration scheme with similar or even better artifact removal performance than WavCycleGAN, even though the wavelet transform is applied only in the training phase.
Methods: We introduce a novel wavelet subband discriminator that can be combined with CycleGAN or switchable CycleGAN, where the wavelet transform is applied only in the training phase so that the discriminators can match the distributions of wavelet subband components. In our framework, the image restoration network can still be applied in the image domain, preventing unnatural artifacts of the wavelet-domain image with the help of the image‐domain cycle‐consistency loss. In addition, using frequency‐specific wavelet subband discriminators makes it possible to remove artifacts in all subbands.
Results: Through extensive experiments on noise and scatter removal in CXRs, we confirm that our method provides competitive performance compared to existing approaches without additional processing steps in the test phase. Furthermore, we show that our wavelet subband discriminator combined with the switchable CycleGAN provides flexibility by generating different levels of artifact removal.
Conclusions: The proposed wavelet subband discriminator can be combined with existing CycleGAN or switchable CycleGAN structures to construct an efficient unsupervised CXR image reconstruction. The advantage of our wavelet subband discriminator‐based CXR image restoration is that, unlike traditional WavCycleGAN, it requires no additional processing steps in the testing phase and does not generate unnatural artifacts originating from the wavelet-domain image. We believe that our wavelet subband discriminator can be applied to various CXR image applications.
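The subband discriminators above compare the LL/LH/HL/HH components of real and generated images. A one-level 2D Haar decomposition, the simplest wavelet that produces these four subbands, can be sketched in numpy (the paper's wavelet choice and normalization may differ):

```python
import numpy as np

def haar_subbands(x):
    """One-level 2D Haar decomposition of an even-sized image into the
    four subbands (LL, LH, HL, HH) that frequency-specific subband
    discriminators would each see during training."""
    a = x[0::2, 0::2]  # top-left of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0  # low-frequency approximation
    lh = (a + b - c - d) / 2.0  # detail across rows (horizontal edges)
    hl = (a - b + c - d) / 2.0  # detail across columns (vertical edges)
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh
```

A flat image has all its energy in LL and zeros in the detail subbands, which is why artifacts concentrated in particular frequency bands show up in specific subbands.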