2018
DOI: 10.1109/tsp.2018.2791945
Tradeoffs Between Convergence Speed and Reconstruction Accuracy in Inverse Problems

Abstract: Solving inverse problems with iterative algorithms is popular, especially for large data. Due to time constraints, the number of possible iterations is usually limited, potentially affecting the achievable accuracy. Given an error one is willing to tolerate, an important question is whether it is possible to modify the original iterations to obtain faster convergence to a minimizer achieving the allowed error without increasing the computational cost of each iteration considerably. Relying on recent recovery t…
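The setting the abstract describes, an iteration budget fixed by time constraints, can be made concrete with a standard ISTA loop for sparse recovery. The sketch below is purely illustrative and assumes a generic problem min_x 0.5||Ax − y||^2 + λ||x||_1; the matrix A, dimensions, sparsity level, and λ are placeholder values, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Element-wise soft-thresholding (the proximal map of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iters):
    """Run a fixed budget of ISTA iterations for min_x 0.5||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

# Illustrative comparison: accuracy reached with a small vs. a large iteration budget.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0
y = A @ x_true
for budget in (10, 1000):
    x_hat = ista(A, y, lam=0.05, n_iters=budget)
    print(budget, np.linalg.norm(x_hat - x_true))
```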

Cited by 66 publications (64 citation statements)
References 61 publications
“…As suggested in [35], this convergence can be sped up by proposing a learned version of ISTA (LISTA). Furthermore, the authors of [38] demonstrated that the unfolded architecture facilitates a trade-off between fast convergence and reconstruction accuracy of the sparse recovery problem.…”
Section: Discussion (mentioning)
confidence: 99%
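To make the quoted point about unrolling concrete, here is a minimal, framework-free sketch of a LISTA-style layer: each unrolled "iteration" applies learned matrices W_e and S and a learned threshold θ in place of ISTA's fixed gradient step. The class and function names are illustrative assumptions, and the training of these parameters (which is where the speed/accuracy trade-off discussed in [38] is tuned) is not shown.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

class LISTALayer:
    """One unrolled layer: z <- soft_threshold(W_e @ y + S @ z, theta).

    In plain ISTA, W_e = A.T / L, S = I - A.T @ A / L and theta = lam / L are fixed;
    in a learned (LISTA-style) network they become trainable per-layer parameters.
    """
    def __init__(self, W_e, S, theta):
        self.W_e, self.S, self.theta = W_e, S, theta

    def forward(self, y, z):
        return soft_threshold(self.W_e @ y + self.S @ z, self.theta)

def lista_forward(layers, y):
    """A K-layer unrolled network plays the role of K (accelerated) iterations."""
    z = np.zeros(layers[0].S.shape[0])
    for layer in layers:
        z = layer.forward(y, z)
    return z
```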
“…Yet, for sophisticated priors iterative optimization schemes are inevitable, and the regularization parameter has an effect which is similar to the step size in these schemes. In such cases, an extremely low value of β inherently results in a massive slowdown in the convergence for convex priors [40], [41] and/or bad local minima for nonconvex priors. Taking a numerical optimization point of view, in the sequel we empirically show that x_BP is superior to x_LS even for ℓ2 priors with β → 0, if a few iterations of conjugate gradients are used instead of the closed-form expressions (16) and (17).…”
Section: B. Performance Analysis (mentioning)
confidence: 99%
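The numerical point in this quote, that a few conjugate-gradient iterations can stand in for the closed-form ℓ2-regularized solution, can be sketched as follows. This is an assumption-laden illustration: it solves the generic normal equations (AᵀA + βI)x = Aᵀy and does not reproduce the cited paper's expressions (16) and (17).

```python
import numpy as np

def cg(apply_M, b, n_iters):
    """A few conjugate-gradient iterations for M x = b, with M symmetric positive definite."""
    x = np.zeros_like(b)
    r = b - apply_M(x)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iters):
        Mp = apply_M(p)
        alpha = rs / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def l2_regularized_solve(A, y, beta, n_iters=5):
    """Approximate x = argmin ||Ax - y||^2 + beta*||x||^2 with a few CG steps
    on the normal equations (A^T A + beta*I) x = A^T y, instead of a direct inverse."""
    apply_M = lambda v: A.T @ (A @ v) + beta * v
    return cg(apply_M, A.T @ y, n_iters)
```

With β → 0 the normal-equation matrix becomes increasingly ill-conditioned, which is exactly why truncating CG after a handful of iterations (rather than computing the closed-form solution) matters in the regime the quote discusses.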
“…Images labeled with 'difference' indicate the difference between the output image generated from the corresponding SR algorithm and the original image. Similar performance was observed for other MNIST or OMNIGLOT images. We used the GFLSTM network for DNN f_θ(·) in TSN and trained the network using Algorithm 1, whose input tuple (k_1, k_2, s_d, s_b, n_e, v_SNRdB) was set to (1, 250, 6·10^5, 250, 400, 20). Note that DNN f…”
mentioning
confidence: 99%