2022
DOI: 10.1109/tkde.2021.3087515

NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data

Abstract: Recent advances in protecting node privacy on graph data and attacking graph neural networks (GNNs) have gained much attention. However, these two essential tasks have not yet been brought together. Imagine that an adversary can utilize powerful GNNs to infer users' private labels in a social network. How can we adversarially defend against such privacy attacks while maintaining the utility of the perturbed graphs? In this work, we propose a novel research task, adversarial defenses against GNN-based privacy attacks, and present …


Cited by 11 publications (5 citation statements)
References 25 publications
“…3) Insusceptible Training: Some other studies [62], [211], [212] have attempted to defend against privacy attacks and reduce the leakage of sensitive information by modifying the training process of GNNs. For example, it is possible to add privacy-preserving regularization terms to the loss function during GNN training, or to introduce privacy-preserving modules in GNN architectures to reduce privacy leakage [62]. Specifically, consider a defender aiming to defend against a private attribute inference attack F_A(v_i).…”
Section: Methods Category Task
Mentioning confidence: 99%
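The excerpt above describes "insusceptible training": adding a privacy-preserving regularization term to the GNN training loss so that a private attribute inference attack F_A(v_i) becomes less successful. The following is a minimal sketch of that idea in plain PyTorch on a toy graph; it is not the NetFense method or the implementation of the cited works, and the names (SimpleGCN, priv_head, lambda_priv) and all values are illustrative assumptions.

```python
# Minimal sketch (assumed, not from the cited works) of insusceptible training:
# a task loss plus a privacy regularizer that fools a surrogate attribute-inference attacker.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCN(nn.Module):
    """One-layer GCN-style encoder: H = ReLU(A_hat @ X @ W)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, a_hat):
        return F.relu(a_hat @ self.lin(x))

torch.manual_seed(0)
# Toy graph: 6 nodes, 4 features, row-normalized adjacency with self-loops.
x = torch.randn(6, 4)
a = (torch.rand(6, 6) > 0.5).float()
a = torch.maximum(a, a.t()) + torch.eye(6)
a_hat = a / a.sum(dim=1, keepdim=True)
y_task = torch.randint(0, 3, (6,))   # public labels the defender wants to keep predictable
y_priv = torch.randint(0, 2, (6,))   # private attribute the defender wants to hide

encoder = SimpleGCN(4, 16)
task_head = nn.Linear(16, 3)          # utility classifier
priv_head = nn.Linear(16, 2)          # surrogate attribute-inference attacker F_A
opt_def = torch.optim.Adam(list(encoder.parameters()) + list(task_head.parameters()), lr=1e-2)
opt_att = torch.optim.Adam(priv_head.parameters(), lr=1e-2)

lambda_priv = 0.5                     # weight of the privacy regularization term (assumed)
for epoch in range(200):
    # (1) Train the surrogate attacker to infer the private attribute from frozen embeddings.
    h = encoder(x, a_hat).detach()
    att_loss = F.cross_entropy(priv_head(h), y_priv)
    opt_att.zero_grad()
    att_loss.backward()
    opt_att.step()

    # (2) Train encoder + task head: preserve utility while adding a regularizer
    #     that *reduces* the attacker's success (negative attacker loss).
    h = encoder(x, a_hat)
    task_loss = F.cross_entropy(task_head(h), y_task)
    priv_reg = -F.cross_entropy(priv_head(h), y_priv)
    loss = task_loss + lambda_priv * priv_reg
    opt_def.zero_grad()
    loss.backward()
    opt_def.step()
```

Here the regularizer is simply the negative of a surrogate attacker's cross-entropy, so the defender trades utility (task_loss) against privacy via lambda_priv; the cited works use their own, more elaborate formulations.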
“…Generally, a private GNN requires that no leakage of private data (e.g., nodes, edges, the graphs themselves, GNN model parameters, and hyper-parameters for GNN training) occurs in its systems [61]. Privacy can also be measured based on the ability of GNNs to defend against privacy attacks and reduce their attack success rates [62]. Research Differences.…”
Section: Trustworthy GNNs
Mentioning confidence: 99%
“…Security concerns have become a focal point of attention with the rapid development and widespread application of neural networks. Designing efficient and broadly applicable adversarial attack and defense [30, 31] strategies has emerged as a current research hotspot. Existing attacks can be classified into several categories based on different criteria.…”
Section: Related Work
Mentioning confidence: 99%
“…Compared to general deep learning models, GraphML is more vulnerable to privacy risks as it incorporates not only the node features/labels but also the graph structure [20]. Privacy-preserving techniques for graph models are mainly based on differential privacy [20,29,30] and adversarial training frameworks [11,17,16].…”
Section: Privacy in GraphML
Mentioning confidence: 99%
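The last excerpt names two main families of privacy-preserving techniques for graph models: differential privacy and adversarial training frameworks (a sketch of the latter appears earlier). Below is a minimal, illustrative sketch of the perturbation-based, differential-privacy-style family under simple assumptions: Laplace noise on node features and randomized response on edges. The function names, epsilon values, and toy data are hypothetical and not taken from the cited works [20, 29, 30].

```python
# Illustrative sketch (assumed, not from the cited works) of DP-style graph perturbation:
# Laplace noise on node features and randomized-response flipping of edges.
import numpy as np

rng = np.random.default_rng(0)

def laplace_perturb_features(x, epsilon, sensitivity=1.0):
    """Add Laplace noise with scale sensitivity/epsilon to every feature value."""
    return x + rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=x.shape)

def randomized_response_edges(adj, epsilon):
    """Flip each potential undirected edge with probability 1 / (1 + e^epsilon)."""
    p_flip = 1.0 / (1.0 + np.exp(epsilon))
    flips = np.triu(rng.random(adj.shape) < p_flip, 1)
    flips = flips | flips.T                     # keep the graph symmetric
    return np.where(flips, 1 - adj, adj)

# Toy graph: 6 nodes, 4 features, symmetric 0/1 adjacency.
x = rng.normal(size=(6, 4))
adj = np.triu((rng.random((6, 6)) > 0.7).astype(int), 1)
adj = adj + adj.T

x_private = laplace_perturb_features(x, epsilon=1.0)       # noisy features to release
adj_private = randomized_response_edges(adj, epsilon=2.0)   # perturbed structure to release
```

Smaller epsilon means stronger perturbation and therefore stronger protection but lower utility of the released graph, which is the same privacy-utility trade-off the NetFense paper targets through graph perturbation.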