2022
DOI: 10.1109/tpami.2021.3073504
Deep Constraint-Based Propagation in Graph Neural Networks

Abstract: The popularity of deep learning techniques renewed the interest in neural architectures able to process complex structures that can be represented using graphs, inspired by Graph Neural Networks (GNNs). We focus our attention on the originally proposed GNN model of Scarselli et al. 2009, which encodes the state of the nodes of the graph by means of an iterative diffusion procedure that, during the learning stage, must be computed at every epoch, until the fixed point of a learnable state transition function is…
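The iterative diffusion the abstract refers to can be made concrete with a small sketch. The following NumPy snippet is an illustrative assumption rather than the authors' code: it repeatedly applies a simple state transition (a tanh of the node label plus the aggregated neighbour states, with small weights so the map is a contraction) until the node states stop changing, i.e., until the fixed point that the original GNN of Scarselli et al. 2009 must recompute at every training epoch.

```python
import numpy as np

def gnn_diffusion(adj, labels, W_lab, W_agg, b, max_iters=200, tol=1e-6):
    """Iterate x_v <- tanh(W_lab l_v + W_agg sum_{u in ne(v)} x_u + b)
    until the node states reach a fixed point (assumes a contractive map)."""
    x = np.zeros((adj.shape[0], b.shape[0]))       # initial node states
    for _ in range(max_iters):
        agg = adj @ x                              # sum of neighbour states
        x_new = np.tanh(labels @ W_lab.T + agg @ W_agg.T + b)
        if np.linalg.norm(x_new - x) < tol:        # fixed point reached
            return x_new
        x = x_new
    return x

# Toy usage: a 4-node cycle, 3-dimensional labels, 2-dimensional states.
rng = np.random.default_rng(0)
adj = np.array([[0., 1., 0., 1.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [1., 0., 1., 0.]])
labels = rng.normal(size=(4, 3))
W_lab = 0.1 * rng.normal(size=(2, 3))
W_agg = 0.1 * rng.normal(size=(2, 2))              # small norm keeps the map contractive
states = gnn_diffusion(adj, labels, W_lab, W_agg, b=np.zeros(2))
```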

Cited by 11 publications (5 citation statements)
References 21 publications
“…The delay is zero for h = D since, in this case, the last stage computes the forward, backward (and update) operations related to a given input at the same time. However, it is not only a matter of changing the stage input, since a similar consideration holds for the values of the weights, which get updated (and thus change) at each computational step, as we already anticipated in (7). Even if this plays a role in the evaluation of the gradients, a small learning rate can mitigate abrupt changes in the values of the weights, making the resulting approximation more appropriate.…”
Section: Scalable and Parallel Local Computations Over Time
confidence: 97%
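The role of the learning rate in the quoted passage can be illustrated with a generic stale-gradient sketch (an illustrative assumption, not that paper's pipeline): the update at step t uses the gradient evaluated at the weights from a few steps earlier, and the approximation is good exactly when the weights drift little over that delay, which a small learning rate enforces.

```python
import numpy as np

# Generic stale-gradient sketch: each update uses the gradient evaluated at
# the weights from `delay` steps earlier, as happens when the backward pass
# of a pipelined stage lags behind its forward pass.
rng = np.random.default_rng(0)
A, y = rng.normal(size=(20, 5)), rng.normal(size=20)

def grad(w, i):
    """Gradient of the per-sample loss 0.5 * (A[i] @ w - y[i]) ** 2."""
    return (A[i] @ w - y[i]) * A[i]

def max_weight_drift(lr, delay, steps=500):
    w, history, drift = np.zeros(5), [np.zeros(5)], 0.0
    for t in range(steps):
        w_stale = history[max(0, len(history) - 1 - delay)]  # weights `delay` steps ago
        w = w - lr * grad(w_stale, t % len(y))               # update with the stale gradient
        history.append(w.copy())
        drift = max(drift, np.linalg.norm(w - w_stale))      # distance between stale and current weights
    return drift

print(max_weight_drift(lr=0.01, delay=3))  # small lr: stale and current weights stay close
print(max_weight_drift(lr=0.1, delay=3))   # larger lr: larger drift, hence a coarser gradient approximation
```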
“…Sampling-based techniques have been developed for fast and scalable GNN training, such as GraphSAGE (Hamilton, Ying, and Leskovec 2017) and FastGCN (Chen, Ma, and Xiao 2018). Simplifying methods (Tiezzi et al. 2021; Wu et al. 2019) have also been proposed to make GNN models more easily implementable and more efficient. MixHop (Abu-El-Haija et al. 2019) and Graph Diffusion Convolution (GDC) (Klicpera, Weißenberger, and Günnemann 2019) explored combining feature information from multi-hop neighborhoods, while PPNP (Klicpera, Bojchevski, and Günnemann 2018) and PPRGo (Bojchevski et al. 2020) derived an improved propagation scheme for high-order information based on personalized PageRank.…”
Section: Related Work
confidence: 99%
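For reference, the personalized-PageRank propagation mentioned in the quote can be written in a few lines; the sketch below follows the spirit of PPNP's dense closed form, and the function name and the choice of an exact linear solve instead of power iteration are illustrative assumptions.

```python
import numpy as np

def ppr_propagate(adj, H, alpha=0.1):
    """PPNP-style propagation: Z = alpha * (I - (1 - alpha) * A_hat)^(-1) @ H,
    with A_hat the symmetrically normalized adjacency including self-loops
    and H the per-node predictions to be diffused."""
    n = adj.shape[0]
    a_tilde = adj + np.eye(n)                                # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_tilde.sum(axis=1))
    a_hat = a_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return alpha * np.linalg.solve(np.eye(n) - (1 - alpha) * a_hat, H)
```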
“…Ciano et al. [34] proposed a mixed inductive-transductive GNN model, studied its properties, and introduced an experimental strategy that helps to understand and distinguish the roles of inductive and transductive learning. Tiezzi et al. [35] proposed an approach to learning in GNNs based on constrained optimization in the Lagrangian framework. Learning both the transition function and the node states is the outcome of a joint process, in which the state convergence procedure is implicitly expressed by a constraint satisfaction mechanism, avoiding iterative epoch-wise procedures and network unfolding.…”
Section: Graph Neural Network
confidence: 99%
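In schematic form (a simplified rendering, not the paper's exact notation), the constrained formulation described above couples the task loss with one state constraint per node and searches for a saddle point of the resulting Lagrangian:

```latex
\min_{\theta,\;\{x_v\}} \;\; \max_{\{\lambda_v\}} \quad
\sum_{v \in \mathcal{S}} L\big(g_\theta(x_v),\, y_v\big)
\;+\; \sum_{v \in \mathcal{V}} \lambda_v\, G\big(x_v - f_\theta(x_{\mathrm{ne}(v)},\, l_v,\, l_{\mathrm{ne}(v)})\big)
```

Here f_theta is the state transition function, g_theta the output function, S the supervised nodes, and G a function vanishing only at zero. Gradient descent on the states and weights, combined with gradient ascent on the multipliers, drives each x_v toward a fixed point of f_theta while the supervised loss is minimized, which is how the epoch-wise convergence procedure and the network unfolding are avoided.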