2021
DOI: 10.48550/arxiv.2111.15521
Preprint
Node-Level Differentially Private Graph Neural Networks

Abstract: Graph Neural Networks (GNNs) are a popular technique for modelling graph-structured data that compute node-level representations via aggregation of information from the local neighborhood of each node. However, this aggregation implies an increased risk of revealing sensitive information, as a node can participate in the inference for multiple nodes. This implies that standard privacy-preserving machine learning techniques, such as differentially private stochastic gradient descent (DP-SGD), which are designed for…
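The DP-SGD mechanism the abstract refers to can be sketched in a few lines: clip each per-example gradient to a fixed norm, aggregate, and add Gaussian noise calibrated to that clipping bound. This is a minimal illustrative sketch (function and parameter names are my own, not from the paper), and it omits the graph-specific accounting that makes node-level DP hard:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD aggregation step (sketch): clip each per-example
    gradient to at most `clip_norm` in L2 norm, sum the clipped
    gradients, add Gaussian noise scaled by the clipping bound,
    and return the noisy average."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down (never up) so each example's contribution is bounded.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

# Example: the first gradient (norm 5.0) is clipped to norm 1.0;
# the second (norm ~0.22) passes through unchanged.
grads = [np.array([3.0, 4.0]), np.array([0.1, 0.2])]
noisy_avg = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=0.5)
```

The paper's point is that this recipe bounds each *example's* influence, but in a GNN one node's features flow into many neighbors' computations, so the per-example bound alone does not give node-level privacy.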

Cited by 5 publications (13 citation statements)
References 12 publications (24 reference statements)
“…Remarkably, we can still obtain competitive performance with SGC Retraining when we require ε to be as small as 1. In contrast, one needs at least ε ≥ 5 to unlearn even one node or edge by leveraging state-of-the-art DP-GNNs [23,20] for reasonable performance, albeit our tested datasets are different. This shows the benefit of our certified graph unlearning method as opposed to both retraining from scratch and DP-GNNs.…”
Section: Methods
confidence: 99%
“…Machine unlearning can therefore be viewed as a means to trade off performance against computational cost, with complete retraining and DP at two different ends of the spectrum [2]. Several recent works proposed DP-GNNs [20,21,22,23]; however, even for unlearning one single node or edge, these methods require a high "privacy budget" to learn with sufficient accuracy.…”
Section: Related Work
confidence: 99%
“…We use GAP's official implementation on GitHub 1 and follow the same experimental setup as reported in the original paper. We do not include other available differentially private GNN approaches as they either: (i) are outperformed by GAP (e.g., [8,44]), or (ii) have different problem settings (e.g., [34,38]) that make them not directly comparable to our method.…”
Section: Baselines
confidence: 99%
“…Nevertheless, their approach relies on public graph data and may not be applicable in all situations. Daigavane et al [8] extend the standard DP-SGD algorithm and privacy amplification by subsampling to bounded-degree graph data to achieve node-level DP, but their method fails to provide inference privacy. Finally, Sajadmanesh et al [39] propose GAP, a private GNN learning framework that provides both edge-level and node-level privacy guarantees using the aggregation perturbation approach.…”
Section: Related Work
confidence: 99%
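The bounded-degree preprocessing mentioned in the excerpt above can be illustrated with a small sketch: cap every node's neighbor list at a fixed maximum, so any single node can influence only a bounded number of aggregations, which is what makes the DP-SGD sensitivity analysis tractable. This is an illustrative simplification (names are my own), not the exact procedure from [8]:

```python
import random

def bound_degree(adj, max_degree, seed=0):
    """Subsample each node's neighbor list down to at most `max_degree`
    neighbors. Bounding the degree bounds how many neighborhood
    aggregations any one node participates in, which caps the
    node-level sensitivity of a GNN training step (sketch only)."""
    rng = random.Random(seed)
    return {
        u: (nbrs if len(nbrs) <= max_degree else rng.sample(nbrs, max_degree))
        for u, nbrs in adj.items()
    }

# Node 0 has degree 4; after capping at 2, it keeps only 2 neighbors.
graph = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
capped = bound_degree(graph, max_degree=2)
```

Note this only bounds the *training-time* influence of a node; as the excerpt points out, it does not by itself protect information revealed at inference time.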