2019 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2019.8803281
Designing Recurrent Neural Networks by Unfolding an L1-L1 Minimization Algorithm

Abstract: We propose a new deep recurrent neural network (RNN) architecture for sequential signal reconstruction. Our network is designed by unfolding the iterations of the proximal gradient method that solves the ℓ1-ℓ1 minimization problem. As such, our network leverages by design that signals have a sparse representation and that the difference between consecutive signal representations is also sparse. We evaluate the proposed model in the task of reconstructing video frames from compressive measurements and show that …
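The abstract describes unfolding a proximal gradient method into a recurrent architecture: each time step maps the current measurement and the previous sparse code through learned linear operators followed by a shrinkage-style activation. The sketch below illustrates that structure under stated assumptions; the plain soft-thresholding used here is a placeholder for the paper's exact ℓ1-ℓ1 proximal operator, and the names `W`, `U`, and `theta` are illustrative, not the paper's notation.

```python
import numpy as np

def soft_threshold(x, theta):
    # Elementwise shrinkage (proximal operator of the l1 norm);
    # stands in for the exact l1-l1 proximal activation of the paper.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unfolded_rnn_step(y_t, h_prev, W, U, theta):
    """One time step of an unfolding-style RNN (illustrative sketch).

    W maps the current compressive measurement y_t into the
    representation domain, U propagates the previous sparse code
    h_prev, and the shrinkage activation encodes the sparsity priors
    (sparse frames, sparse frame-to-frame differences).
    """
    return soft_threshold(W @ y_t + U @ h_prev, theta)
```

In a trained network, `W`, `U`, and `theta` would be learned per layer from data rather than fixed by the measurement model.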

Cited by 12 publications (12 citation statements)
References 20 publications
“…By contrast, [22] and [23] demonstrate that the sensing-matrix training strategy and the deblocking strategy both play a key role in improving the reconstruction quality. We also note that there are few studies on video reconstruction using unrolling methods, such as those described in [24] and [25].…”
Section: Related Work
confidence: 93%
“…Deep unfolding methods have been achieving state-of-the-art performance in solving decomposition problems in terms of both accuracy and computational complexity [13], [14], [15], [16], [6], [7], [8]. The study in [13] proposed to unroll the iterations of the Iterative Shrinkage-Thresholding Algorithm (ISTA) into a feed-forward neural network, coined Learned ISTA (LISTA), which is trained on data.…”
Section: Introduction
confidence: 99%
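The LISTA idea quoted above can be sketched concretely: each network layer has the algebraic form of one ISTA iteration, but its matrices and thresholds become trainable parameters. This is a minimal untrained sketch, assuming the standard LISTA parameterization with matrices `We`, `S`, and threshold `theta` (names are illustrative).

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm: shrink each entry toward zero
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def lista_forward(y, We, S, theta, num_layers=3):
    """Forward pass of an (untrained) LISTA network.

    Each layer mirrors one ISTA iteration,
        x <- soft_threshold(We @ y + S @ x, theta),
    except that We, S, and theta are free parameters learned from
    data instead of being derived from the sensing matrix.
    """
    x = soft_threshold(We @ y, theta)  # first layer: x starts at zero
    for _ in range(num_layers - 1):
        x = soft_threshold(We @ y + S @ x, theta)
    return x
```

With a fixed, small number of layers, one forward pass replaces the many iterations ISTA would need, which is where the computational advantage cited above comes from.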
“…Deep unfolding networks are interpretable (as opposed to traditional DNNs) and have proven to be superior to traditional optimization-based methods and DNN models (because they integrate prior knowledge about the signal structure). Examples of such networks include LISTA [14], ℓ1-ℓ1-RNN [15], and LeSITA [16]. In a recent study we presented a DeepFPC network [17] designed by unfolding the iterations of the FPC-ℓ1 algorithm.…”
Section: Introduction
confidence: 99%