2019
DOI: 10.1109/tip.2019.2913536
Deep Proximal Unrolling: Algorithmic Framework, Convergence Analysis and Applications

Cited by 56 publications (21 citation statements) · References 40 publications
“…Algorithm unfolding [11], [24]–[26] refers to the general notion of building a problem-specific neural architecture with layers inspired by an iterative solution to the same problem. In this paper, we develop and evaluate a way to unfold the classical WMMSE [5] method using graph neural networks for power allocation in wireless networks.…”
Section: Discussion
confidence: 99%
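The unfolding idea the statement above describes can be sketched in a few lines: each iteration of a classical solver becomes one "layer", and the per-layer parameters that a fixed algorithm would tie together are instead left free to be learned. The following is a minimal NumPy sketch of unrolled proximal gradient descent for a sparse least-squares problem; the problem setup, function names, and ISTA-style initialization are illustrative assumptions, not the specific architecture of any cited paper.

```python
import numpy as np

def grad_f(x, A, y):
    # gradient of the data-fidelity term f(x) = 0.5 * ||A x - y||^2
    return A.T @ (A @ x - y)

def prox_l1(x, t):
    # soft thresholding: proximal operator of t * ||x||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_pgd(y, A, step_sizes, thresholds):
    # Each "layer" is one proximal-gradient iteration. In a learned
    # unrolled network, the per-layer step sizes and thresholds (or
    # even the full linear operators) become trainable parameters.
    x = np.zeros(A.shape[1])
    for alpha, t in zip(step_sizes, thresholds):
        x = prox_l1(x - alpha * grad_f(x, A, y), t)
    return x

# Untrained sanity check: every layer uses classical ISTA settings
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40)) / np.sqrt(20)
x_true = np.zeros(40)
x_true[[2, 7, 11]] = [1.0, -0.5, 2.0]
y = A @ x_true
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of grad_f
K = 10                          # number of unrolled layers
x_hat = unrolled_pgd(y, A, [1.0 / L] * K, [0.01 / L] * K)
```

With all layers tied to the ISTA values, the network reproduces K iterations of the classical solver; training would untie them per layer.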
“…This is a key advantage of unrolled networks, as their origins are well-accepted and well-defined optimisation models with guaranteed convergence. However, theoretical convergence may no longer be valid due to the dynamic nature of network parameters [53]. Implementation of unrolled networks also typically necessitates the evaluation of signal operators such as the Fourier transform, both in the training and execution phases. This can significantly increase processing time and memory requirements, particularly when fast GPU implementations are not available.…”
Section: Compressed Sensing
confidence: 99%
“…They proposed learned ISTA (LISTA), based on unrolling the update steps of ISTA [14] into the layers of a feedforward neural network. Subsequent studies have demonstrated the efficacy of this class of model-based deep learning architectures [33] in compressed sensing and sparse-recovery applications [32], [34]–[43]. Deep unfolding combines the advantages of both data-driven and iterative techniques.…”
Section: A Prior Art
confidence: 99%
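The LISTA construction mentioned above replaces ISTA's fixed update x ← soft(W y + S x) with per-layer matrices and thresholds trained end-to-end. Below is a minimal NumPy sketch of one LISTA-style forward pass; the dimensions and the ISTA-based initialization are illustrative assumptions (an actual LISTA would learn each layer's `(W, S, theta)` from data rather than tie them all to the same values).

```python
import numpy as np

def soft_threshold(x, theta):
    # elementwise shrinkage toward zero, the l1 proximal operator
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def lista_forward(y, layers):
    # layers: list of (W, S, theta) triples, one per unrolled iteration.
    # ISTA corresponds to tying every layer to W = A^T / L,
    # S = I - A^T A / L, theta = lambda / L; LISTA trains them freely.
    x = np.zeros(layers[0][0].shape[0])
    for W, S, theta in layers:
        x = soft_threshold(W @ y + S @ x, theta)
    return x

rng = np.random.default_rng(1)
m, n, K = 25, 50, 16            # measurements, sparse dim, layers
A = rng.standard_normal((m, n)) / np.sqrt(m)
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
lam = 0.05
# Untrained network: all K layers share the ISTA initialization
layers = [(A.T / L, np.eye(n) - (A.T @ A) / L, lam / L)] * K
x_true = np.zeros(n)
x_true[[0, 9, 20]] = [1.5, -1.0, 0.7]
y = A @ x_true
x_hat = lista_forward(y, layers)
```

The point of learning the triples is that a trained network of K layers can match the accuracy of many more ISTA iterations, which is the speedup the cited studies exploit.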