2022 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata55660.2022.10021051
Towards Robust Graph Neural Networks via Adversarial Contrastive Learning

Cited by 13 publications (37 citation statements: 0 supporting, 37 mentioning, 0 contrasting)
References 22 publications
“…One of the most popular approaches is adversarial training (AT) [55], which trains the neural network on worst-case adversarial examples. Although the existing contrastive learning literature has shown boosted performance on standard generalization, its connection with adversarial robustness was not studied until recently [45,49]. For a detailed review, please refer to [69].…”
Section: Related Work
Citation type: mentioning
confidence: 99%
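For context, the “worst-case adversarial examples” in the quoted statement come from the standard min-max objective of adversarial training; the formulation below is the usual textbook one, not notation taken from the cited works:

```latex
\min_{\theta} \; \mathbb{E}_{(x, y) \sim \mathcal{D}}
  \left[ \max_{\|\delta\| \le \epsilon}
    \mathcal{L}\bigl(f_\theta(x + \delta),\, y\bigr) \right]
```

The inner maximization searches for the perturbation $\delta$ inside an $\epsilon$-ball that most increases the loss; the outer minimization fits the parameters $\theta$ to those perturbed inputs.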
“…One of the most popular approaches to mitigating the effect of adversarial perturbations is adversarial training (AT) [55], which trains the neural network on worst-case adversarial examples. Very recently, a connection between self-supervised contrastive learning and adversarial training has been established to develop label-efficient and robust models [45,49].…”
Section: Adversarial Robustness of HCL
Citation type: mentioning
confidence: 99%
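The label-free connection the statement alludes to can be sketched as follows: the adversarial example is generated by ascending the contrastive loss itself, so no class labels are needed. This is an illustrative sketch only; `encoder`, `contrastive_loss`, and `eps` are assumed names, not the API of the cited papers.

```python
import torch

def adversarial_view(encoder, x, x_aug, contrastive_loss, eps=8 / 255):
    """One-step (FGSM-style) attack that ascends the contrastive loss.

    Because the attack maximizes the disagreement between two views of
    the same instance, it needs no labels -- this is the label-free link
    between contrastive learning and adversarial training.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    loss = contrastive_loss(encoder(x + delta), encoder(x_aug))
    loss.backward()
    # Move in the gradient-sign direction, staying inside the eps-ball.
    return (x + eps * delta.grad.sign()).detach()
```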
“…They use a pseudo-label generation technique to avoid using labels in adversarial training for downstream tasks. Similarly, Jiang et al. [12] considered a linear combination of two contrastive loss functions to study robustness under different pair-selection scenarios.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
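A minimal sketch of what a “linear combination of two contrastive loss functions” might look like; the weight `alpha` and the pairing of clean and adversarial views are illustrative assumptions, not the exact formulation of Jiang et al. [12].

```python
def combined_contrastive_loss(z_clean, z_aug, z_adv,
                              contrastive_loss, alpha=0.5):
    # Standard term: agreement between two benign augmented views.
    l_std = contrastive_loss(z_clean, z_aug)
    # Adversarial term: agreement between a benign view and an
    # adversarially perturbed view of the same instance.
    l_adv = contrastive_loss(z_clean, z_adv)
    # Weighted (linear) combination of the two contrastive objectives;
    # varying alpha trades standard accuracy against robustness.
    return alpha * l_std + (1.0 - alpha) * l_adv
```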