2020
DOI: 10.1038/s41598-020-62484-z

Multi-resolution convolutional neural networks for inverse problems

Abstract: Inverse problems in image processing, phase imaging, and computer vision often share the same structure of mapping input image(s) to output image(s) but are usually solved by different application-specific algorithms. Deep convolutional neural networks have shown great potential for highly variable tasks across many image-based domains, but sometimes can be challenging to train due to their internal non-linearity. We propose a novel, fast-converging neural network architecture capable of solving generic image(s)…
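The abstract (truncated above) describes a single convolutional architecture that maps input image(s) to output image(s) across different inverse problems. As a rough, hypothetical illustration only, and not the paper's actual MCNN design, a minimal multi-resolution image-to-image network in PyTorch could be sketched as follows; every layer size, channel count, and class name below is an assumption.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMultiResNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # one small convolutional branch per resolution (full, 1/2, 1/4)
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, 1, 3, padding=1),
            )
            for _ in range(3)
        ])

    def forward(self, x):
        outputs = []
        for level, branch in enumerate(self.branches):
            scale = 2 ** level
            xs = F.avg_pool2d(x, scale) if scale > 1 else x   # downsample input for this branch
            ys = branch(xs)                                    # process at this resolution
            # bring every branch back to full resolution and sum the contributions
            outputs.append(F.interpolate(ys, size=x.shape[-2:],
                                         mode="bilinear", align_corners=False))
        return torch.stack(outputs).sum(dim=0)

# usage: map a noisy 1-channel 128x128 image to a restored image of the same shape
restored = TinyMultiResNet()(torch.randn(1, 1, 128, 128))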


Cited by 32 publications (20 citation statements)
References: 47 publications

Citation statements (ordered by relevance):
“…We also tried PPN2V (Prakash et al., 2020) but did not get a satisfying result, as it is challenging to get a good enough parametric noise model estimation. In this benchmark, we used the semi-supervised Multiscale Convolutional Neural Network (MCNN) method (Wang et al., 2020) as the baseline. When testing with datasets acquired from 150 fps with 128 × 128 pixels to 5 fps with 1024 × 1024 pixels, as is shown in Fig. 2, Noise2Atom gives visually clear (Gaussian-like) and consistent (high CSS score) results.…”
Section: Results
Confidence: 99%
“…Therefore it is not feasible to denoise these images directly with supervised models. Furthermore, because of its inner complexity in data degradation, i.e., a simple additive white noise model does not comply (Wang et al., 2020), noise model-based approaches are difficult.…”
Section: Introduction
Confidence: 99%
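For context on the assumption the citing authors say breaks down: the "simple additive white noise model" is the usual observed = clean + noise, with the noise drawn i.i.d. from a zero-mean Gaussian. A minimal numpy sketch of that assumption follows; the image and noise level are purely hypothetical.

import numpy as np

rng = np.random.default_rng(0)
clean = rng.random((128, 128))               # stand-in for a clean image
sigma = 0.1                                   # hypothetical noise level
# the additive white Gaussian model that, per the citing paper, does NOT hold for their data
observed = clean + rng.normal(0.0, sigma, size=clean.shape)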
“…For example, it is used as an important phenomenon to pursue fundamentally different learning trajectories of meta-learning [22] and provides an understanding of why increasing the depth of a neural network may accelerate the training [33]. The F-Principle also provides important theoretical insights to design DNN-based algorithms [2,3,5,16,17,20,27,28]. For example, Blind et al [3] designs a loss function with explicit higher priority for high frequencies to significantly accelerate the simulation of fluid dynamics through DNN approach; MscaleDNN [16,17,28] is developed to accelerate the fitting of high frequency functions by shifting or rescaling high frequencies to lower ones.…”
Section: Introduction
Confidence: 99%
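To make the quoted MscaleDNN idea concrete (each subnetwork receives a rescaled copy of the input, so high-frequency content appears as lower frequency to that subnetwork), here is a simplified PyTorch sketch under that reading; it is not the MscaleDNN reference implementation, and all scales, widths, and names are assumptions.

import torch
import torch.nn as nn

class MultiScaleMLP(nn.Module):
    def __init__(self, scales=(1.0, 2.0, 4.0, 8.0), width=64):
        super().__init__()
        self.scales = scales
        # one small subnetwork per input scale a_i
        self.subnets = nn.ModuleList([
            nn.Sequential(nn.Linear(1, width), nn.Tanh(), nn.Linear(width, 1))
            for _ in scales
        ])

    def forward(self, x):
        # each subnetwork sees a_i * x, so a high-frequency target looks
        # lower-frequency to the subnetworks with large a_i; outputs are summed
        return sum(net(a * x) for a, net in zip(self.scales, self.subnets))

# usage: evaluate on random 1-D inputs (a hypothetical high-frequency fitting task)
x = torch.rand(256, 1)
pred = MultiScaleMLP()(x)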