2021
DOI: 10.1038/s41592-021-01225-0

Reinforcing neuron extraction and spike inference in calcium imaging using deep self-supervised denoising

Abstract: Calcium imaging is inherently susceptible to detection noise, especially when imaging at high frame rates or under low excitation dosage. We developed DeepCAD, a self-supervised learning method for spatiotemporal enhancement of calcium imaging without requiring any high signal-to-noise ratio (SNR) observations. Using this method, detection noise can be effectively suppressed and the imaging SNR can be improved more than tenfold, which massively improves the accuracy of neuron extraction and spike inference and …

Cited by 82 publications (88 citation statements)
References 35 publications
“…Moreover, we found that the combination of model simplification and data augmentation eliminates overfitting (Supplementary Fig. 6), which was an inherent problem of self-supervised training and previously required human inspection for model selection 32. We compared DeepCAD-RT with DeepInterpolation, another recently developed denoising method leveraging inter-frame correlations 31.…”
Section: Results (mentioning)
confidence: 99%
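The quoted passage credits data augmentation, together with model simplification, for suppressing overfitting during self-supervised training. As a purely hypothetical illustration of what such augmentation can look like for calcium-imaging patches (the citing paper's actual pipeline is not reproduced here), the sketch below applies random flips and 90° rotations in the spatial plane of a (frames, height, width) stack:

```python
# Hypothetical geometric augmentation for 3D calcium-imaging patches.
# Random flips and quarter-turn rotations in the spatial plane enlarge the
# effective training set, a common way to reduce overfitting; this is an
# illustrative sketch, not the published DeepCAD-RT augmentation.
import numpy as np


def augment_stack(stack, rng=None):
    """Randomly flip/rotate a (frames, height, width) stack in the spatial plane."""
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:
        stack = stack[:, ::-1, :]              # vertical flip
    if rng.random() < 0.5:
        stack = stack[:, :, ::-1]              # horizontal flip
    k = rng.integers(0, 4)
    stack = np.rot90(stack, k=k, axes=(1, 2))  # 0-3 quarter turns in (H, W)
    return np.ascontiguousarray(stack)


if __name__ == "__main__":
    patch = np.random.rand(100, 64, 64).astype(np.float32)
    print(augment_stack(patch).shape)  # (100, 64, 64)
```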
“…To fully exploit spatiotemporal correlations in fluorescence imaging data, all operations inside the network were implemented in 3D, including convolutions, max-pooling, and interpolations (Supplementary Figure 14). Compared to our previous architecture 32, the number of feature maps in each convolutional layer was reduced 4-fold and the total number of trainable parameters was reduced 16-fold (1,020,337 compared with 16,315,585), which massively improved training and inference speed and reduced memory consumption. For pre-processing, the average of the whole stack was subtracted from each input stack to handle intensity variation across different samples and imaging platforms.…”
Section: Network Architecture, Training and Inference (mentioning)
confidence: 96%
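The architectural points quoted above (all-3D convolutions, max-pooling and interpolation, a much smaller channel budget, and mean-subtraction pre-processing) can be illustrated with a minimal PyTorch sketch. The channel widths, depth, and layer arrangement below are assumptions chosen for brevity, not the published DeepCAD-RT network:

```python
# A minimal sketch of a 3D encoder-decoder denoiser, illustrating the ideas in
# the quoted passage: all operations are 3D (convolutions, max-pooling,
# trilinear interpolation), the channel counts are kept small, and the mean of
# the whole input stack is subtracted as pre-processing. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv3d_block(in_ch, out_ch):
    """Two 3D convolutions with ReLU, operating on (frames, height, width) volumes."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class TinyDenoiser3D(nn.Module):
    def __init__(self, base_ch=16):
        super().__init__()
        self.enc1 = conv3d_block(1, base_ch)
        self.enc2 = conv3d_block(base_ch, base_ch * 2)
        self.bottleneck = conv3d_block(base_ch * 2, base_ch * 4)
        self.dec2 = conv3d_block(base_ch * 4 + base_ch * 2, base_ch * 2)
        self.dec1 = conv3d_block(base_ch * 2 + base_ch, base_ch)
        self.out = nn.Conv3d(base_ch, 1, kernel_size=1)

    def forward(self, x):
        # Pre-processing: subtract the mean of the whole stack so intensity
        # offsets across samples and imaging platforms are removed.
        mean = x.mean(dim=(2, 3, 4), keepdim=True)
        x = x - mean

        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool3d(e1, 2))
        b = self.bottleneck(F.max_pool3d(e2, 2))

        d2 = F.interpolate(b, size=e2.shape[2:], mode="trilinear", align_corners=False)
        d2 = self.dec2(torch.cat([d2, e2], dim=1))
        d1 = F.interpolate(d2, size=e1.shape[2:], mode="trilinear", align_corners=False)
        d1 = self.dec1(torch.cat([d1, e1], dim=1))

        # Add the mean back so the output stays in the original intensity range.
        return self.out(d1) + mean


if __name__ == "__main__":
    stack = torch.randn(1, 1, 16, 64, 64)  # (batch, channel, frames, height, width)
    print(TinyDenoiser3D()(stack).shape)   # torch.Size([1, 1, 16, 64, 64])
```

Shrinking `base_ch` is the knob that corresponds to the quoted feature-map reduction: fewer channels per layer cut the parameter count and memory footprint roughly quadratically, at some cost in capacity.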
“…In the computer vision literature on natural images, the usefulness of pre-training has been widely explored for several tasks: colorizing a grayscale image [50,51,52], restoring a distorted or deteriorated image [53,54,55,56], predicting the transformation applied to an image [57], and re-ordering pieces or frames of images [58,59] and videos [60]. However, there is hardly any work applying this methodology to microscopy images.…”
Section: Related Work (mentioning)
confidence: 99%
“…Beyond simply denoising 2D images, many of the implementations described here can work on 3D datasets or even denoise multiple channels concomitantly (see Table 1 for details). DeepCAD, a recent denoising implementation based on the 3D U-Net (Çiçek et al., 2016), efficiently improves the SNR of time-course calcium imaging (Li et al., 2021). Using additional information from the pixel context in any relevant dimension (3D, time, or other channels, for instance) often greatly improves denoising performance, but at the expense of longer training times and the need for larger training datasets.…”
Section: Denoising Tools Using Deep Learning (mentioning)
confidence: 99%
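To make the "context from extra dimensions" point concrete, the hypothetical helper below feeds a multi-channel time-lapse recording to a 3D denoiser channel by channel, with time serving as the depth axis of each 3D volume. `denoise_recording` and the stand-in model are illustrative names introduced here, not part of any cited tool:

```python
# Hypothetical example of arranging a (frames, channels, height, width) recording
# so that each channel is denoised as a 3D volume whose depth axis is time,
# giving the network temporal context around every pixel.
import torch


def denoise_recording(model, movie):
    """movie: float tensor of shape (frames, channels, height, width)."""
    denoised = torch.empty_like(movie)
    with torch.no_grad():
        for c in range(movie.shape[1]):
            volume = movie[:, c].unsqueeze(0).unsqueeze(0)  # (1, 1, T, H, W)
            denoised[:, c] = model(volume)[0, 0]
    return denoised


if __name__ == "__main__":
    stand_in = torch.nn.Conv3d(1, 1, kernel_size=3, padding=1)  # placeholder denoiser
    movie = torch.randn(100, 2, 64, 64)                         # 100 frames, 2 channels
    print(denoise_recording(stand_in, movie).shape)             # torch.Size([100, 2, 64, 64])
```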