2022
DOI: 10.48550/arxiv.2203.00949
Preprint

GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation

Abstract: Graph Neural Networks (GNNs) are powerful models designed for graph data that learn node representation by recursively aggregating information from each node's local neighborhood. However, despite their state-of-the-art performance in predictive graph-based applications, recent studies have shown that GNNs can raise significant privacy concerns when graph data contain sensitive information. As a result, in this paper, we study the problem of learning GNNs with Differential Privacy (DP). We propose GAP, a novel…
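As context for the "aggregation perturbation" in the title: a GNN layer sums each node's neighbor features, and a differentially private variant can clip each node's contribution and add Gaussian noise to the aggregated sums. The sketch below is a minimal illustration of that idea under the classical (epsilon, delta) Gaussian calibration; the function name, the single-hop setting, and the edge-level sensitivity argument are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def dp_sum_aggregate(features, adjacency, epsilon, delta, clip=1.0):
    """Noisy sum aggregation (illustrative sketch, not GAP's API):
    clip each node's feature vector to L2 norm `clip`, sum over
    neighbors (A @ X), then add Gaussian noise calibrated to the
    classical (epsilon, delta) bound."""
    # Clip per-node feature norms so each node's contribution is bounded.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    clipped = features * np.minimum(1.0, clip / np.maximum(norms, 1e-12))

    # Sum aggregation over each node's neighbors.
    aggregated = adjacency @ clipped

    # Adding or removing one edge changes one row's sum by at most `clip`
    # in L2 norm, so the sensitivity of the aggregation is `clip`.
    sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return aggregated + np.random.normal(0.0, sigma, size=aggregated.shape)
```

Clipping bounds each node's influence on the neighborhood sum, which is what keeps the Gaussian noise scale independent of the graph size.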

Cited by 4 publications (7 citation statements)
References 44 publications (67 reference statements)

“…Remarkably, we can still obtain competitive performance with SGC Retraining when we require ε to be as small as 1. In contrast, one needs at least ε ≥ 5 to unlearn even one node or edge by leveraging state-of-the-art DP-GNNs [23,20] for reasonable performance, albeit our tested datasets are different. This shows the benefit of our certified graph unlearning method as opposed to both retraining from scratch and DP-GNNs.…”
Section: Methods
confidence: 99%
“…Machine unlearning can therefore be viewed as a means to trade off performance against computational cost, with complete retraining and DP at the two ends of the spectrum [2]. Several recent works proposed DP-GNNs [20,21,22,23]; however, even for unlearning one single node or edge, these methods require a high "privacy budget" to learn with sufficient accuracy.…”
Section: Related Work
confidence: 99%
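To make the "privacy budget" point above concrete: under the classical Gaussian-mechanism calibration, the noise scale is σ = Δ·sqrt(2 ln(1.25/δ))/ε, so the required noise shrinks linearly as the budget ε grows. The quick computation below is a hypothetical illustration with δ = 1e-5 and sensitivity 1; the classical bound is only valid for ε ≤ 1 and is used as a rough indication at ε = 5, and these numbers are not taken from the cited papers.

```python
import math

def gaussian_sigma(epsilon, delta=1e-5, sensitivity=1.0):
    # Classical Gaussian-mechanism calibration (valid for epsilon <= 1;
    # treated here as a rough indication for larger epsilon).
    return sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon

print(gaussian_sigma(1.0))  # ~4.85: heavy noise at a strict budget
print(gaussian_sigma(5.0))  # ~0.97: roughly 5x less noise at epsilon = 5
```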
“…Compared to general deep learning models, GraphML models are more vulnerable to privacy risks, as they incorporate not only the node features/labels but also the graph structure [20]. Privacy-preserving techniques for graph models are mainly based on differential privacy [20,29,30] and adversarial training frameworks [11,17,16].…”
Section: Privacy in GraphML
confidence: 99%
“…Moreover, due to the correlated nature of the graph data, privacy-preserving mechanisms on graph models need to focus on several aspects such as node privacy, edge privacy, and attribute privacy [30]. This leads to more complex privacy-preserving mechanisms, which results in a further loss of transparency.…”
Section: Transparency of Private Models
confidence: 99%