2020
DOI: 10.1109/tci.2019.2948732
Neumann Networks for Linear Inverse Problems in Imaging

Abstract: Many challenging image processing tasks can be described by an ill-posed linear inverse problem: deblurring, deconvolution, inpainting, compressed sensing, and superresolution all lie in this framework. Traditional inverse problem solvers minimize a cost function consisting of a data-fit term, which measures how well an image matches the observations, and a regularizer, which reflects prior knowledge and promotes images with desirable properties like smoothness. Recent advances in machine learning and image pr…
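The abstract is truncated before it describes the method, but the title's namesake is the classical Neumann-series identity for a linear system: if ||I − ηB|| < 1, then B⁻¹ = η Σ_{k≥0} (I − ηB)^k. A minimal numerical sketch of that identity applied to a regularized least-squares recovery follows; the operator, sizes, and step size here are illustrative stand-ins, not the paper's learned network.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10

# Hypothetical forward operator A with a controlled spectrum so the
# Neumann series converges quickly (singular values in [0.5, 1]).
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.linspace(0.5, 1.0, n)) @ V.T

x_true = rng.standard_normal(n)
y = A @ x_true                      # noiseless observations

B = A.T @ A                         # Gram matrix, eigenvalues in [0.25, 1]
eta = 1.0                           # gives ||I - eta*B|| <= 0.75 < 1

# Truncated Neumann series: x ≈ eta * sum_{k=0}^{K} (I - eta*B)^k A^T y
x_hat = np.zeros(n)
term = eta * (A.T @ y)
for _ in range(100):
    x_hat += term
    term = term - eta * (B @ term)  # multiply running term by (I - eta*B)

print(np.linalg.norm(x_hat - x_true))  # truncation residual, very small
```

Because the series is just repeated application of a fixed linear map, each partial sum is one more "block" of computation; the paper's architecture (per the title) builds a trainable network around this expansion.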

Cited by 156 publications (92 citation statements)
References 63 publications (113 reference statements)
“…To recover multi-modal data, a reconstruction framework is proposed in [42] that uses side information in unrolled optimization. Unrolled optimization approaches using deep learning were proposed in [43,44]. Deep-learning architectures were used to train hyper-parameters, such as a gradient regularizer and a step size.…”
Section: Discussion
Confidence: 99%
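The statement above describes unrolled optimization in which hyper-parameters such as a step size and a gradient regularizer are learned. A minimal sketch of the forward pass of such a scheme follows; the function name is illustrative, the regularizer gradient is simplified to a Tikhonov term `lam * x`, and in a real learned scheme `eta` and the regularizer would be trained end to end rather than fixed.

```python
import numpy as np

def unrolled_recon(y, A, eta, lam, K):
    """K unrolled gradient steps on 0.5*||Ax - y||^2 + 0.5*lam*||x||^2.

    In a learned unrolled scheme, eta (step size) and the regularizer
    gradient (here simply lam * x) would be trainable parameters or a
    small network, trained end to end on reconstruction examples.
    """
    x = A.T @ y                              # simple initialization
    for _ in range(K):
        grad = A.T @ (A @ x - y) + lam * x   # data-fit + regularizer gradient
        x = x - eta * grad
    return x
```

With enough unrolled steps and a convergent step size, the output matches the closed-form ridge solution `(A^T A + lam I)^{-1} A^T y`; truncating at small K with learned parameters is what turns the iteration into a network.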
See 1 more Smart Citation
“…To recover multi-modal data, a reconstruction framework is proposed in [42] that uses side information in unrolled optimization. Unrolled optimization approaches using deep learning were proposed in [43,44]. Deep-learning architectures were used to train hyper-parameters, such as a gradient regularizer and a step size.…”
Section: Discussionmentioning
confidence: 99%
“…Compared with CNN, ResCNN shows significant improvement in reconstruction performance and converges faster than CNN. In future work, we will explore compression approaches [40] and unrolled optimization approaches [43,44] for generating a sparsifying basis Φ from the training dataset to fully represent spectra without loss of spectral features.…”
Section: Discussion
Confidence: 99%
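The statement above proposes generating a sparsifying basis Φ from a training dataset of spectra. One classical way to do this is a truncated PCA/SVD of the training matrix; the sketch below is illustrative (the function name and interface are not from the cited papers).

```python
import numpy as np

def sparsifying_basis(X, r):
    """Build an r-dimensional basis Phi from training data X
    (rows = training spectra) via truncated PCA/SVD, a classical
    data-driven choice of sparsifying basis.

    Returns Phi with shape (n_features, r); its columns are the
    leading right singular vectors of the centered data.
    """
    Xc = X - X.mean(axis=0)                        # center the spectra
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:r].T                                # orthonormal columns
```

If the training spectra truly lie near an r-dimensional subspace, projecting onto Φ represents each spectrum with r coefficients while preserving its features, which is the property the quoted discussion is after.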
“…exponentially with the size of the input images (Gilton et al., 2020). In this paper the training dataset is relatively small since it is time-consuming to hand-label MODIS-VIIRS images and for optically thick aerosols there are not enough events in the observations.…”
Section: Pre-trained and Fine-tuned CNNs
Confidence: 99%
“…For example, neural networks have been employed to (1) approximate computationally demanding radiative transfer models to decrease computation time (Boukabara et al., 2019; Blackwell, 2005; Takenaka et al., 2011), (2) infer tropical cyclone intensity from microwave imagery (Wimmers et al., 2019), (3) infer cloud vertical structures and cirrus or high-altitude cloud optical depths from MODIS imagery (Leinonen et al., 2019; Minnis et al., 2016), and (4) predict the formation of large hailstones from land-based radar imagery (Gagne et al., 2019). Specific to cloud and volcanic ash detection from radiometer images, Bayesian inference has been employed where the posterior distribution functions were empirically generated using hand-labeled (Pavolonis et al., 2015) or coincident Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) observations (Heidinger et al., 2016, 2012) or from a scientific product (Merchant et al., 2005).…”
Section: Introduction
Confidence: 99%
“…a million labeled images) to estimate the parameters of the CNN (i.e. the convolutional filters and bias terms), since the number of parameters required to accurately identify different image types increases exponentially with the size of the input images (Gilton et al., 2020). In this paper the training dataset is relatively small since it is time-consuming to hand-label MODIS/VIIRS images and for optically thick aerosol there are not enough events in the observations.…”
Section: Pre-trained and Fine-tuned CNNs
Confidence: 99%