2021
DOI: 10.1016/j.laa.2021.09.004

Convolutional proximal neural networks and Plug-and-Play algorithms

Cited by 40 publications (31 citation statements)
References 48 publications
“…Then, we set the reconstruction to be x = x_100. It was shown in [3,15,19,33,35,51] that Plug-and-Play methods can achieve state-of-the-art performance for several applications.…”
Section: Comparison With Established Methods
confidence: 99%
“…Plug-and-Play methods were used for several applications with excellent performance, see e.g. [3,15,19,33,35,51]. Closely related to plug-and-play methods are regularizing by denoising (RED) [37], variational networks [13] and total deep variation [26].…”
Section: Introduction
confidence: 99%
“…They are also more versatile as the CNN denoisers can be used for different kinds of inverse problems without the need for retraining. The difficulty in employing these learning-based iterative schemes is that the Lipschitz constant of the CNNs must be controlled in order to ensure their convergence [46], [47], which is not straightforward and remains an active area of research [47]- [49].…”
Section: Deep Learning Based Methods
confidence: 99%
“…radio images of high resolution and dynamic range) requiring a large number of iterations to reach convergence. Several methods have been proposed to ensure the firm nonexpansiveness of the denoiser (Ryu et al 2019;Terris et al 2020;Hertrich et al 2021), yet often coming at the cost of either strong architectural constraints, or inaccurate control of the nonexpansiveness. In our recent work Pesquet et al (2021), we proposed to augment the denoiser's training loss with a firm nonexpansiveness term, in order to meet the PnP convergence conditions with no restrictions on the DNN architecture.…”
Section: Proposed Hybrid Plug-and-play Framework
confidence: 99%
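The idea in the excerpt above (augmenting the denoiser's training loss with a firm-nonexpansiveness term) can be illustrated with a minimal sketch. This is an assumption-laden toy, not the actual loss of Pesquet et al. (2021): the denoiser's Jacobian is stood in for by an explicit matrix J, its spectral norm is estimated by power iteration, and a hinge penalty activates when the residual operator 2D − I exceeds norm 1 (firm nonexpansiveness of D is equivalent to nonexpansiveness of 2D − I).

```python
import numpy as np

def spectral_norm_power_iteration(Q, n_iter=50, seed=0):
    """Estimate the spectral norm (Lipschitz constant) of a linear map Q
    by power iteration on Q^T Q. Here Q stands in for the Jacobian of the
    residual operator 2*D - I of a denoiser D."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(Q.shape[1])
    for _ in range(n_iter):
        v = Q.T @ (Q @ v)
        v /= np.linalg.norm(v)
    return np.linalg.norm(Q @ v)

def nonexpansiveness_penalty(J):
    """Hinge penalty one could add to a training loss: zero when the
    estimated norm of 2*D - I is <= 1, positive otherwise. J is an
    illustrative stand-in for the denoiser's Jacobian; the exact term
    used in Pesquet et al. (2021) differs."""
    Q = 2.0 * J - np.eye(J.shape[0])   # Jacobian of the residual 2*D - I
    sigma = spectral_norm_power_iteration(Q)
    return max(0.0, sigma - 1.0) ** 2
```

For a contractive stand-in such as J = 0.7·I the penalty vanishes, while an expansive J = 1.5·I is penalized; in practice the Jacobian is never formed explicitly and the power iteration is run through automatic differentiation.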
“…Several works have recently focused on restoring the convergence of PnP algorithms using DNN denoisers. The vast majority of these works ensure a nonexpansiveness constraint on the denoiser (Romano et al 2017;Ryu et al 2019;Cohen et al 2021;Hertrich et al 2021), but this constraint often comes with either restrictive assumptions on the algorithm, on the DNN architecture or on the operator ∇ 𝑓 .…”
Section: PnP-FB
confidence: 99%
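For context on the PnP forward-backward scheme the excerpts discuss, a minimal sketch of the iteration is shown below: a gradient step on the data-fidelity term f followed by a learned denoiser in place of a proximal operator. The measurement operator A, step size gamma, and initialization are illustrative assumptions; the denoiser is a generic callable.

```python
import numpy as np

def pnp_forward_backward(y, A, denoiser, gamma=1.0, n_iter=100):
    """Plug-and-Play forward-backward iteration for the data term
    f(x) = 0.5 * ||A x - y||^2: a gradient step on f, then a denoiser D
    replacing the proximal step. Returns the iterate after n_iter steps
    (the first excerpt above sets the reconstruction to x = x_100)."""
    x = A.T @ y  # simple back-projection initialization (an assumption)
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)        # gradient of the data-fidelity term
        x = denoiser(x - gamma * grad)  # denoiser stands in for the prox
    return x
```

Convergence of this scheme is exactly what the cited works are concerned with: it hinges on (firm) nonexpansiveness of the denoiser and on the step size relative to the Lipschitz constant of the gradient of f.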