2019
DOI: 10.48550/arxiv.1903.05994
Preprint

Can Adversarial Network Attack be Defended?

Jinyin Chen,
Yangyang Wu,
Xiang Lin
et al.

Abstract: Machine learning has been successfully applied to complex network analysis in various areas, and methods based on graph neural networks (GNNs) outperform others. Recently, adversarial attacks on networks have attracted special attention, since carefully crafted adversarial networks with slight perturbations of a clean network can invalidate many network applications, such as node classification, link prediction, and community detection. Such attacks are easily constructed and pose a serious security threat to various an…


Cited by 7 publications (14 citation statements)
References 26 publications (36 reference statements)
“…Jin and Zhang [8] introduce latent adversarial training for GCN, which trains the GCN on the adversarially perturbed output of its first layer. In addition, several studies have explored adversarial training based on adversarially perturbed edges for graph data [31,1,28]. Among these works, some focus on achieving model robustness while ignoring the effect on generalization [31,8,1,28,2], while others only apply perturbations to node attributes and do not explore the effect of perturbing edges [3,22,5].…”
Section: Related Work
Mentioning confidence: 99%
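The latent adversarial training idea cited above can be made concrete with a short sketch. The following is a minimal PyTorch illustration, not the cited authors' exact procedure: the two-layer GCN, the single FGSM-style perturbation step, and all names (TwoLayerGCN, latent_adv_loss, eps) are assumptions made for exposition.

```python
import torch
import torch.nn.functional as F
from torch import nn

class TwoLayerGCN(nn.Module):
    # Minimal GCN: A_hat is the normalized adjacency, X the node features.
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def hidden(self, A_hat, X):
        return F.relu(A_hat @ self.w1(X))   # first-layer output

    def head(self, A_hat, H):
        return A_hat @ self.w2(H)           # logits from the hidden state

def latent_adv_loss(model, A_hat, X, y, train_mask, eps=0.01):
    # Perturb the first layer's output in the loss-increasing direction
    # (FGSM-style), then compute the training loss on the perturbed
    # latent representation.
    H = model.hidden(A_hat, X)
    H_detached = H.detach().requires_grad_(True)
    loss = F.cross_entropy(model.head(A_hat, H_detached)[train_mask],
                           y[train_mask])
    grad, = torch.autograd.grad(loss, H_detached)
    H_adv = H + eps * grad.sign()           # adversarially perturbed latents
    return F.cross_entropy(model.head(A_hat, H_adv)[train_mask],
                           y[train_mask])
```

Calling latent_adv_loss inside an ordinary training loop and backpropagating through it gives the "train on the perturbed first-layer output" behavior attributed to [8]; the edge-perturbation variants [31,1,28] would instead perturb A_hat.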
“…Besides adversarial training, Chen et al. [8] trained a distillation GCN model using the output confidence of the initial GCN as soft labels. As a graph purification defense, [15] performed a low-rank approximation of the graph to reduce the impact of NETTACK.…”
Section: Graph Defense
Mentioning confidence: 99%
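The low-rank purification defense mentioned above is straightforward to sketch. The NumPy snippet below is only an illustration under stated assumptions: the rank cut-off, the dense adjacency matrix, and the 0.5 re-binarization threshold are hypothetical knobs, not values taken from [15].

```python
import numpy as np

def low_rank_purify(adj, rank=10):
    # Truncated SVD of the (dense) adjacency matrix: keep only the top
    # `rank` singular components. Targeted perturbations such as
    # NETTACK's tend to concentrate in the discarded high-rank tail.
    U, s, Vt = np.linalg.svd(adj, full_matrices=False)
    adj_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    # Re-binarize before handing the purified graph to a GNN.
    return (adj_lr > 0.5).astype(adj.dtype)
```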
“…In various online data, de-anonymization attacks [22], [23] expose users' private information, which leads to privacy leakage. Therefore, research on attacks [4], [6], [9], [12], [57], [60], [63] against GNNs and on possible defenses [8], [13], [15], [16], [29], [41], [45], [62] has become a hot spot.…”
Section: Introduction
Mentioning confidence: 99%
“…Other adversarial-based models are fed adversarial samples during training, which helps the model learn to adjust to such samples and thus reduces the negative impact of potential attack samples. [73], [54], [23] are scenarios of the former, while [10], [67], [21] and [26] are scenarios of the latter.…”
Section: Taxonomies Of Defenses
Mentioning confidence: 99%
“…In particular, we first introduce some common metrics shared by defense and attack, and then introduce metrics proposed in individual works in three categories: effectiveness, efficiency, and imperceptibility. For instance, the Attack Success Rate (ASR) [9] and the Average Defense Rate (ADR) [10] are proposed to measure the effectiveness of attacks and defenses, respectively.…”
Section: Introduction
Mentioning confidence: 99%
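To make the two effectiveness metrics concrete, here is a small sketch. The ASR computation follows the usual targeted-attack convention; the ADR formula shown is only one plausible reading (relative reduction in ASR) and is an assumption here, since the exact definition is given in [10].

```python
import numpy as np

def attack_success_rate(pred_clean, pred_attacked, labels, targets):
    # ASR: fraction of targeted nodes classified correctly on the clean
    # graph but misclassified after the attack.
    ok_before = pred_clean[targets] == labels[targets]
    wrong_after = pred_attacked[targets] != labels[targets]
    return float(np.mean(ok_before & wrong_after))

def average_defense_rate(asr_undefended, asr_defended):
    # Hypothetical ADR formulation (see [10] for the authors' exact
    # definition): relative reduction in ASR achieved by the defense.
    return 1.0 - asr_defended / max(asr_undefended, 1e-12)
```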