2020
DOI: 10.1109/tsp.2020.3033962
Gated Graph Recurrent Neural Networks

Abstract: Graph processes exhibit a temporal structure determined by the sequence index and a spatial structure determined by the graph support. To learn from graph processes, an information processing architecture must then be able to exploit both underlying structures. We introduce Graph Recurrent Neural Networks (GRNNs), which achieve this goal by leveraging the hidden Markov model (HMM) together with graph signal processing (GSP). In the GRNN, the number of learnable parameters is independent of the length of th…

Cited by 91 publications (43 citation statements)
References 44 publications (51 reference statements)
“…A GNN is an information processing architecture built upon the notion of graph filters and the use of nonlinearities [11,27]. In particular, GCNNs [15] and GRNNs [16] exploit the operation of graph convolution [17] and use pointwise nonlinearities, resulting in local architectures that only involve communication with nearby agents, thus respecting the partial information structure. Additionally, they have a naturally distributed implementation, and are also stable and equivariant, helping in transfer learning and scalability [16,19].…”
Section: Graph Neural Network (mentioning)
Confidence: 99%
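The excerpt above leans on graph convolutions built from powers of a graph shift operator. As a rough, illustrative sketch (the shift operator, filter order, and tap values below are assumptions, not taken from the cited works), a polynomial graph filter only mixes information over K-hop neighborhoods, which is what makes these architectures local and naturally distributed:

```python
import numpy as np

def graph_filter(S, x, taps):
    """Polynomial graph filter: y = sum_k h_k S^k x.

    Each application of S only exchanges values between neighboring
    nodes, so a filter with K+1 taps is a local, K-hop operation that
    can be evaluated with nearest-neighbor communication only.
    """
    y = np.zeros_like(x, dtype=float)
    z = x.astype(float)          # z holds S^k x, starting at k = 0
    for h_k in taps:
        y += h_k * z
        z = S @ z                # one more hop of neighbor exchanges
    return y

# Toy example: a 4-node cycle graph (adjacency used as shift operator)
S = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0, 0.0])            # graph signal: one "hot" node
y = graph_filter(S, x, taps=[0.5, 0.3, 0.2])  # hypothetical tap values
print(y)
```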
“…To further increase the descriptive power of the parametrized mapping between the agent states X(t) and the actions taken U(t), we consider GRNNs [16]. GRNNs learn a hidden state Z(t) from the sequence {X(t)} as follows…”
Section: Graph Recurrent Neural Network (GRNNs) (mentioning)
Confidence: 99%
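The recursion the excerpt is quoting combines a graph-filtered input with a graph-filtered previous hidden state through a pointwise nonlinearity, something along the lines of Z(t) = σ(A(S) X(t) + B(S) Z(t−1)) in the paper's notation. The sketch below is a minimal, hypothetical implementation of that idea (filter orders, tap values, and the tanh nonlinearity are assumptions for illustration; in practice the filter taps would be learned):

```python
import numpy as np

def graph_filter(S, x, taps):
    """Polynomial graph filter y = sum_k h_k S^k x (same helper as in the earlier sketch)."""
    y, z = np.zeros_like(x, dtype=float), x.astype(float)
    for h_k in taps:
        y += h_k * z
        z = S @ z
    return y

def grnn_hidden_states(S, X_seq, a_taps, b_taps):
    """Run the recursion Z(t) = tanh(A(S) X(t) + B(S) Z(t-1)).

    The same filter taps are reused at every time step, so the number
    of parameters does not grow with the sequence length or graph size.
    """
    Z = np.zeros(S.shape[0])     # initial hidden state, one value per node
    hidden = []
    for x_t in X_seq:
        Z = np.tanh(graph_filter(S, x_t, a_taps) +
                    graph_filter(S, Z, b_taps))
        hidden.append(Z)
    return hidden

# Toy run on the 4-node cycle graph with a random graph process
rng = np.random.default_rng(0)
S = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X_seq = rng.normal(size=(5, 4))               # 5 time steps, 4 nodes
Z_seq = grnn_hidden_states(S, X_seq, a_taps=[0.4, 0.2], b_taps=[0.3, 0.1])
print(Z_seq[-1])
```

Because the same handful of filter taps is reused at every time step and at every node, the parameter count stays fixed as the sequence grows or the graph gets larger, which is the scalability property highlighted in the abstract.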