2021
DOI: 10.1109/tmi.2021.3054167
FISTA-Net: Learning a Fast Iterative Shrinkage Thresholding Network for Inverse Problems in Imaging

Abstract: Inverse problems are essential to imaging applications. In this paper, we propose a model-based deep learning network, named FISTA-Net, that combines the interpretability and generality of the model-based Fast Iterative Shrinkage/Thresholding Algorithm (FISTA) with the strong regularization and tuning-free advantages of a data-driven neural network. By unfolding FISTA into a deep network, the architecture of FISTA-Net consists of multiple gradient descent, proximal mapping, and momentum modules in ca…
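The three building blocks the abstract names (gradient descent, proximal mapping, momentum) are the steps of classical FISTA. As a point of reference, here is a minimal NumPy sketch of plain FISTA for the sparse recovery problem min_x ½‖Ax − y‖² + λ‖x‖₁, with a fixed threshold and step size (in FISTA-Net these are learned per layer):

```python
import numpy as np

def soft_threshold(v, lam):
    # Proximal mapping of the l1 norm (shrinkage/thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def fista(A, y, lam=0.01, n_iter=500):
    # Classical FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        # Gradient descent step followed by the proximal mapping.
        x_new = soft_threshold(z - A.T @ (A @ z - y) / L, lam / L)
        # Momentum (Nesterov) update.
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

Each loop iteration corresponds to one stage of the unfolded network described in the abstract.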

Cited by 139 publications (63 citation statements)
References 52 publications
“…Although the data can be segmented into patches to address this issue, the segmented patches introduce discontinuities in the estimation results. Recently, unfolded architectures of iterative procedures have been developed to form deep learning networks [27]–[30]. In [27], FISTA is unfolded into a deep network whose parameters are learned through end-to-end training.…”
Section: A Recent Work
confidence: 99%
“…Recently, unfolded architectures of iterative procedures have been developed to form deep learning networks [27]–[30]. In [27], FISTA is unfolded into a deep network whose parameters are learned through end-to-end training. Subsequently, a new deep-learning-based network architecture named the alternating direction method of multipliers (ADMM)-Net was proposed, derived from the iterative procedure of the ADMM algorithm [28]–[30].…”
Section: A Recent Work
confidence: 99%
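The unfolding idea described in these citation statements replaces each FISTA iteration with a network layer whose step size and threshold are learnable. A minimal sketch, with per-layer parameters passed in as plain arrays standing in for trained weights (the names `step_sizes` and `thresholds` are illustrative, not the paper's):

```python
import numpy as np

def soft_threshold(v, lam):
    # Proximal mapping of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def unrolled_fista(A, y, step_sizes, thresholds):
    # One "layer" per entry of step_sizes/thresholds; in FISTA-Net these
    # parameters would be learned end-to-end rather than fixed in advance.
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for tau, lam in zip(step_sizes, thresholds):
        # Gradient descent module followed by the proximal mapping module.
        x_new = soft_threshold(z - tau * A.T @ (A @ z - y), lam)
        # Momentum module.
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

With classical values (1/L step size, λ/L threshold at every layer) this reduces exactly to plain FISTA; training replaces those hand-tuned schedules.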
“…That is to say, the gradient descent module v_t computes the numerical solution with an adaptive step-size parameter τ_t. As proved in [46], a good rule of thumb is to constrain τ_t to be positive; moreover, τ_t should decay smoothly with iterations.…”
Section: Gradient Descent Module (v_t)
confidence: 99%
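One way to enforce both constraints on τ_t (positivity and smooth decay) is a softplus of a linear function of the layer index with a negative slope; this is a sketch of that parameterization, and the values w1 and c1 below are illustrative, not trained weights:

```python
import numpy as np

def softplus(x):
    # Smooth, strictly positive surrogate for max(0, x).
    return np.log1p(np.exp(x))

def step_size(t, w1=-0.2, c1=1.0):
    # Softplus keeps tau_t > 0; the negative slope w1 makes tau_t
    # decay smoothly as the iteration index t grows.
    return softplus(w1 * t + c1)
```

In training, w1 and c1 would be learned end-to-end; the functional form alone guarantees the constraints regardless of their values, provided the slope stays negative.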
“…As stated in [46], because the noise variance is progressively suppressed, the threshold value λ_t should decrease with iterations accordingly. Thus, by defining a negative learnable weight w_2, λ_t can be constrained by (26) as well.…”
Section: Gradient Descent Module (v_t)
confidence: 99%
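The same parameterization can constrain the threshold: with a negative weight w_2, a softplus of w_2·t stays positive yet shrinks with the iteration index, so early layers denoise aggressively while later layers preserve detail. A sketch under that assumption (w2 and c2 are illustrative values, not the trained ones):

```python
import numpy as np

def lam_t(t, w2=-0.3, c2=0.5):
    # Softplus of a linearly decreasing argument: a positive but
    # shrinking threshold, matching the suppressed noise variance.
    return np.log1p(np.exp(w2 * t + c2))

# Effect on the shrinkage step: the early (large) threshold zeroes more
# coefficients than the late (small) one.
v = np.array([0.1, -0.4, 0.9])
early = np.sign(v) * np.maximum(np.abs(v) - lam_t(0), 0.0)
late  = np.sign(v) * np.maximum(np.abs(v) - lam_t(9), 0.0)
```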