We demonstrate residual channel attention networks (RCAN) for restoring and enhancing volumetric time-lapse (4D) fluorescence microscopy data. First, we modify RCAN to handle image volumes, showing that our network enables denoising competitive with three other state-of-the-art neural networks. We use RCAN to restore noisy 4D super-resolution data, enabling image capture over tens of thousands of images (thousands of volumes) without apparent photobleaching. Second, using simulations we show that RCAN enables class-leading resolution enhancement, superior to other networks. Third, we exploit RCAN for denoising and resolution improvement in confocal microscopy, enabling ~2.5-fold lateral resolution enhancement using stimulated emission depletion (STED) microscopy ground truth. Fourth, we develop methods to improve spatial resolution in structured illumination microscopy using expansion microscopy ground truth, achieving improvements of ~1.4-fold laterally and ~3.4-fold axially. Finally, we characterize the limits of denoising and resolution enhancement, suggesting practical benchmarks for evaluating and further enhancing network performance.

…data, which we deconvolved to yield high-SNR 'ground truth'. We then used 30 of these volumes for training and held out volumes for testing network performance. Using the same training and test data, we compared four networks: RCAN, CARE, SRResNet [20], and ESRGAN [21]. SRResNet and ESRGAN are both class-leading deep residual networks used in image super-resolution, with ESRGAN winning the 2018 Perceptual Image Restoration and Manipulation challenge on perceptual image super-resolution [22].

For the mEmerald-Tomm20 label, the RCAN, CARE, ESRGAN, and SRResNet predictions all provided clear improvements in visual appearance, structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) metrics relative to the raw input (Fig. 1b), also outperforming direct deconvolution on the noisy input data (Supplementary Fig. 1). The RCAN output provided PSNR and SSIM values competitive with the other networks (Fig. 1b), prompting us to investigate whether this performance held for other organelles. We thus conducted similar experiments on fixed U2OS cells with labeled actin, endoplasmic reticulum (ER), Golgi, lysosomes, and microtubules (Supplementary Fig. 2), acquiring 15-23 volumes of training data and training independent networks for each organelle. In almost all cases, RCAN performance met or exceeded that of the other networks (Supplementary Fig. 3, Supplementary Table 3).

An essential consideration when using any deep learning method is understanding when network performance deteriorates. Independently training an ensemble of networks and computing measures of network disagreement can provide insight into this issue [9,16], yet such measures were not generally predictive of disagreement between ground truth and RCAN output (Supplementary Fig. 4). Instead, we found that estimating the per-pixel SNR in the raw input (Methods, Supplementary Fig. 4) seemed to better correlate with network ...
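As a concrete illustration of the architecture discussed above, here is a minimal sketch of a 3D residual channel attention block (RCAB), the core unit of RCAN adapted to image volumes. The channel widths, reduction ratio, and network depth are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a 3D residual channel attention block (RCAB) in PyTorch.
# Widths and reduction ratio are assumptions for illustration.
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)          # global context per channel
        self.fc = nn.Sequential(
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                            # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))             # rescale channels

class RCAB3D(nn.Module):
    """Conv -> ReLU -> Conv -> channel attention, with a residual skip."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            ChannelAttention3D(channels),
        )

    def forward(self, x):
        return x + self.body(x)                      # residual connection

# Example: denoise-style pass over a single-channel volume patch
net = nn.Sequential(nn.Conv3d(1, 32, 3, padding=1), RCAB3D(32), RCAB3D(32),
                    nn.Conv3d(32, 1, 3, padding=1))
vol = torch.randn(1, 1, 16, 64, 64)                  # (batch, channel, Z, Y, X)
print(net(vol).shape)                                 # torch.Size([1, 1, 16, 64, 64])
```

The network comparison above is scored with PSNR and SSIM; as a short aside, both can be computed on volumes with scikit-image, assuming intensities normalized to [0, 1]:

```python
# PSNR and SSIM of a restored volume against high-SNR ground truth.
# data_range must match the image's intensity scale.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

gt = np.random.rand(16, 64, 64).astype(np.float32)        # stand-in ground truth
pred = gt + 0.05 * np.random.randn(*gt.shape).astype(np.float32)

print(peak_signal_noise_ratio(gt, pred, data_range=1.0))
print(structural_similarity(gt, pred, data_range=1.0))    # nD SSIM over the volume
```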
Purpose: To develop a reproducible and fast method to reconstruct MR fingerprinting arterial spin labeling (MRF-ASL) perfusion maps using deep learning. Methods: A fully connected neural network, denoted DeepMARS, was trained on simulation data with added Gaussian noise. Two MRF-ASL models were used to generate the simulation data: a single-compartment model with 4 unknown parameters and a two-compartment model with 7 unknown parameters. DeepMARS was evaluated using MRF-ASL data from healthy subjects (N = 7) and patients with Moyamoya disease (N = 3). Computation time, coefficient of determination (R²), and intraclass correlation coefficient (ICC) were compared between DeepMARS and conventional dictionary matching (DM). The relationship between DeepMARS and Look-Locker PASL was evaluated with a linear mixed model. Results: Computation time per voxel was <0.5 ms for DeepMARS and >4 s for DM in the single-compartment model. Compared with DM, DeepMARS showed higher R² and significantly improved ICC for single-compartment-derived bolus arrival time (BAT) and two-compartment-derived cerebral blood flow (CBF), and higher or similar R²/ICC for the other parameters. In addition, DeepMARS was significantly correlated with Look-Locker PASL for BAT (single-compartment) and CBF (two-compartment). Moreover, for Moyamoya patients, the locations of diminished CBF and prolonged BAT shown by DeepMARS were consistent with the positions of occluded arteries shown by time-of-flight MR angiography. Conclusion: Reconstruction of MRF-ASL with DeepMARS is faster and more reproducible than DM.

Keywords: deep learning, DeepMARS, MRF-ASL, reconstruction, reproducibility
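For intuition, the following is a minimal sketch of the kind of fully connected regressor the abstract describes: simulated MRF-ASL signal time courses, augmented with Gaussian noise, are mapped directly to model parameters. The input length, layer widths, noise level, and the stand-in data are assumptions for illustration, not the paper's actual design.

```python
# Sketch of a DeepMARS-style fully connected regressor (assumed sizes).
import torch
import torch.nn as nn

N_TIMEPOINTS = 500   # assumed MRF-ASL acquisition length
N_PARAMS = 4         # single-compartment model: 4 unknown parameters

model = nn.Sequential(
    nn.Linear(N_TIMEPOINTS, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_PARAMS),     # regress the parameter vector directly
)

# One training step on simulated signals with Gaussian noise augmentation
signals = torch.randn(1024, N_TIMEPOINTS)            # stand-in simulated signals
params = torch.rand(1024, N_PARAMS)                  # stand-in ground-truth parameters
noisy = signals + 0.01 * torch.randn_like(signals)   # added Gaussian noise

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
opt.zero_grad()
loss = nn.functional.mse_loss(model(noisy), params)
loss.backward()
opt.step()
```

At inference, one forward pass per voxel replaces the dictionary search, which is why the per-voxel cost can drop from seconds (DM) to well under a millisecond.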
The emergence of convolutional neural networks (CNNs) has greatly advanced hyperspectral image (HSI) classification. However, HSI acquisition is difficult, and the resulting lack of training samples is the primary cause of low classification performance. Traditional CNN-based methods mainly use 2D CNNs for feature extraction, which leaves the interband correlations of HSIs underutilized. A 3D CNN extracts a joint spectral-spatial representation, but at the cost of a more complex model. Moreover, a network that is too deep or too shallow cannot extract image features well. To tackle these issues, we propose an HSI classification method based on a 2D-3D CNN and multi-branch feature fusion. We first combine a 2D CNN and a 3D CNN to extract image features. Then, by means of a multi-branch neural network, three kinds of features, from shallow to deep, are extracted and fused along the spectral dimension. Finally, the fused features are passed through several fully connected layers and a softmax layer to obtain the classification results. In addition, our network uses the Mish activation function to further improve classification performance. Experimental results on four widely used HSI data sets indicate that the proposed method outperforms existing alternatives.
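Here is a minimal sketch of the structural ideas in this abstract: a 3D convolutional stage for joint spectral-spatial features, a 2D stage, fusion of shallow, middle, and deep features along the channel dimension, and the Mish activation. The band count, patch size, and channel widths are assumptions for illustration, not the paper's configuration.

```python
# Sketch of a 2D-3D CNN with multi-branch feature fusion (assumed sizes).
import torch
import torch.nn as nn

class TwoThreeDCNN(nn.Module):
    def __init__(self, bands: int = 30, n_classes: int = 16):
        super().__init__()
        act = nn.Mish()   # Mish activation (available in recent PyTorch)
        # 3D stage: treat the spectral axis as depth to model interband correlation
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)), act)
        # 2D stage operates on flattened (channels x bands) feature maps
        c2d = 8 * bands
        self.block1 = nn.Sequential(nn.Conv2d(c2d, 64, 3, padding=1), act)  # shallow
        self.block2 = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), act)   # middle
        self.block3 = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), act)   # deep
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64 * 3, 128), act,
            nn.Linear(128, n_classes))   # logits; softmax applied in the loss

    def forward(self, x):                # x: (B, 1, bands, H, W) HSI patch
        f = self.conv3d(x)               # (B, 8, bands, H, W)
        f = f.flatten(1, 2)              # merge channel and band axes for 2D convs
        s1 = self.block1(f)              # shallow branch
        s2 = self.block2(s1)             # middle branch
        s3 = self.block3(s2)             # deep branch
        fused = torch.cat([s1, s2, s3], dim=1)   # fuse branches channel-wise
        return self.head(fused)

net = TwoThreeDCNN(bands=30, n_classes=16)
patch = torch.randn(2, 1, 30, 11, 11)    # two 11x11 patches with 30 bands
print(net(patch).shape)                   # torch.Size([2, 16])
```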