2022
DOI: 10.48550/arxiv.2206.09140
Preprint

Certified Graph Unlearning

Abstract: Graph-structured data is ubiquitous in practice and often processed using graph neural networks (GNNs). With the adoption of recent laws ensuring the "right to be forgotten", the problem of graph data removal has become of significant importance. To address the problem, we introduce the first known framework for certified graph unlearning of GNNs. In contrast to standard machine unlearning, new analytical and heuristic unlearning challenges arise when dealing with complex graph data. First, three different typ…

Cited by 3 publications (7 citation statements) · References 22 publications
“…Here, "approximate" refers to the fact that unlearning is not exact (as it would be for completely retrained models) but more akin to the parametrized notion of differential privacy Dwork (2011); Guo et al (2020); Sekhari et al (2021) (see Section 3 for more details). With the adoption of GSTs, we show that our nonlinear framework enables provable data removal (similar results are currently only available for linear models (Guo et al, 2020;Chien et al, 2022)) and provides theoretical unlearning complexity guarantees. These two problems are hard to tackle for deep neural network models like GNNs Xu et al (2019).…”
Section: Introduction · Classification: supporting (confidence: 54%)
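The "parametrized notion" of approximate removal referenced in the passage above is the ε-certified removal criterion of Guo et al. (2020), which mirrors differential privacy. As a sketch of that standard formulation: a removal mechanism M, applied to the output of a learning algorithm A, must produce a model distribution close to that of retraining from scratch on the reduced dataset.

```latex
% \epsilon-certified removal (Guo et al., 2020), stated for a learning
% algorithm A, a removal mechanism M, a dataset D, a removed point z \in D,
% and any measurable set \mathcal{T} of models:
\[
  e^{-\epsilon}
  \;\le\;
  \frac{\Pr\bigl[\,M\!\bigl(A(D),\, D,\, z\bigr) \in \mathcal{T}\,\bigr]}
       {\Pr\bigl[\,A\!\bigl(D \setminus \{z\}\bigr) \in \mathcal{T}\,\bigr]}
  \;\le\;
  e^{\epsilon}
\]
```

Exact unlearning (full retraining) corresponds to the limit ε = 0; the certified graph unlearning framework discussed here extends guarantees of this type from linear models to graph-structured settings.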
“…Only a handful of works have taken the initial steps towards machine unlearning of graphs. One prior work proposes a sharding-based method for exact graph unlearning, while Chien et al (2022) introduces approximate graph unlearning methods that come with theoretical (certified) guarantees. However, these works only focus on node classification tasks and are in general not directly applicable to graph classification.…”
Section: Related Work · Classification: mentioning (confidence: 99%)