ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp39728.2021.9415073
Graph Signal Denoising Via Unrolling Networks

Abstract: We propose an interpretable graph neural network framework to denoise single or multiple noisy graph signals. The proposed graph unrolling networks expand algorithm unrolling to the graph domain and provide an interpretation of the architecture design from a signal processing perspective. We unroll an iterative denoising algorithm by mapping each iteration into a single network layer where the feed-forward process is equivalent to iteratively denoising graph signals. We train the graph unrolling networks throu…
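As a rough illustration of the idea in the abstract, the sketch below maps each iteration of a gradient-descent denoiser onto one network layer with learnable parameters. The energy E(x) = ||x - y||^2 + alpha * x^T L x, the learnable step size, and the layer structure are my assumptions for illustration; the abstract is truncated and the paper's actual architecture may differ.

import torch
import torch.nn as nn

class UnrollingLayer(nn.Module):
    """One unrolled gradient-descent iteration on the assumed denoising energy."""
    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))   # learnable step size
        self.alpha = nn.Parameter(torch.tensor(1.0))  # learnable smoothness weight

    def forward(self, x, y, L):
        # Gradient of E(x) = ||x - y||^2 + alpha * x^T L x
        grad = 2.0 * (x - y) + 2.0 * self.alpha * (L @ x)
        return x - self.step * grad

class GraphUnrollingNet(nn.Module):
    """K unrolled iterations; the feed-forward pass mimics iterative denoising."""
    def __init__(self, num_layers=10):
        super().__init__()
        self.layers = nn.ModuleList([UnrollingLayer() for _ in range(num_layers)])

    def forward(self, y, L):
        x = y  # initialize with the noisy observation
        for layer in self.layers:
            x = layer(x, y, L)
        return x

Because every layer is one step of a known algorithm, the trained network stays interpretable: each layer's parameters can be read as a step size and a smoothness weight.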

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
1
1

Citation Types

0
2
0

Year Published

2021
2021
2023
2023

Publication Types

Select...
2
1

Relationship

0
3

Authors

Journals

Cited by 3 publications (2 citation statements)
References 17 publications (24 reference statements)
“…A variety of recent work has demonstrated that robust GNN architectures can be formed via graph propagation layers that mirror the unfolded descent iterations of a graph-regularized energy function (Chen & Eldar, 2021; Liu et al., 2021; Ma et al., 2020; Pan et al., 2021; Yang et al., 2021; Zhang et al., 2020; Zhu et al., 2021; Ahn et al., 2022). In doing so, the node embeddings at each layer can be viewed as increasingly refined approximations of an interpretable energy minimizer, that may be designed, for example, to mitigate GNN oversmoothing or perhaps inject robustness to spurious edges.…”
Section: Graph Neural Network From Unfolded Optimization
confidence: 99%
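To make the quoted claim concrete, here is a minimal sketch of propagation layers as unfolded descent on the energy E(Z) = ||Z - X||_F^2 + lam * tr(Z^T L Z), whose exact minimizer is Z* = (I + lam*L)^{-1} X. This is a generic reconstruction of the setup the statement describes, not the formulation of any single cited paper.

import torch

def energy_minimizer(X, L, lam):
    # Closed-form minimizer of E(Z) = ||Z - X||_F^2 + lam * tr(Z^T L Z)
    n = L.shape[0]
    return torch.linalg.solve(torch.eye(n) + lam * L, X)

def unfolded_layers(X, L, lam=1.0, num_layers=50):
    # Step size chosen small enough to guarantee convergent descent
    eta = 1.0 / (1.0 + lam * torch.linalg.eigvalsh(L)[-1])
    Z = X.clone()
    for _ in range(num_layers):
        Z = Z - eta * ((Z - X) + lam * (L @ Z))  # one propagation layer
    return Z

# Deeper unrolling yields increasingly refined approximations of the minimizer:
n = 8
A = (torch.rand(n, n) > 0.6).float()
A = torch.triu(A, 1); A = A + A.T        # symmetric adjacency, zero diagonal
L = torch.diag(A.sum(1)) - A             # combinatorial graph Laplacian
X = torch.randn(n, 3)
print(torch.norm(unfolded_layers(X, L) - energy_minimizer(X, L, 1.0)))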
“…Optimization Induced Graph Neural Networks A variety of graph neural network architectures have also been developed from a similar optimization perspective [7,29,31,34,48,49,52,54]. For example, graph attention mechanisms were derived using the iterative reweighted least squares (IRLS) algorithm in [49]; this result is related to self-attention, which can be viewed as graph attention on fully connected graphs [5]; however, it fails to produce the Transformer softmax term or the combined self-attention/feedforward Transformer stack.…”
Section: Related Work and Limitations
confidence: 99%
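The IRLS connection mentioned in this statement can be sketched as follows. I assume a smoothed-L1 penalty on feature differences and omit softmax normalization, so this is not the exact derivation in [49]: each iteration reweights edges by inverse feature distance, and those weights act like attention coefficients.

import torch

def irls_attention_step(Z, X, adj, lam=1.0, eps=1e-3):
    # Reweight edges by (smoothed) inverse feature distance: the IRLS
    # weights play the role of attention coefficients on existing edges.
    diff = Z.unsqueeze(1) - Z.unsqueeze(0)        # (n, n, d) pairwise diffs
    dist = diff.pow(2).sum(-1).add(eps).sqrt()    # smoothed L2 distances
    W = adj / dist                                # adj masks non-edges
    L_w = torch.diag(W.sum(1)) - W                # reweighted graph Laplacian
    # With W held fixed, minimize ||Z - X||_F^2 + lam * tr(Z^T L_w Z) exactly
    n = Z.shape[0]
    return torch.linalg.solve(torch.eye(n) + lam * L_w, X)

Iterating this step alternates between updating the attention-like weights W and re-smoothing the features under those weights, which is the IRLS pattern the citation refers to.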