Graph neural networks (GNNs) have achieved strong performance in a wide range of practical applications thanks to their powerful learning capabilities. Backdoor attacks implant hidden behavior in machine learning models: a GNN trained on a backdoored dataset produces an adversary-specified output on poisoned data while behaving normally on clean data, which can have grave implications for downstream applications. Backdoor attacks remain under-researched in the graph domain, and almost all existing graph backdoor attacks focus on the graph-level classification task. To close this gap, we propose a novel graph backdoor attack that uses node features as triggers and requires no knowledge of the GNN's parameters. In our experiments, we find that feature triggers can distort the feature space of the original dataset, leaving the GNN unable to distinguish poisoned data from clean data well. We therefore propose an adaptive method that improves the performance of the backdoor model by adjusting the graph structure. Extensive experiments on three benchmark datasets validate the effectiveness of our model.
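The feature-trigger idea described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the function name, the fixed trigger pattern, and the choice of trigger dimensions are all hypothetical stand-ins for whatever trigger the attack actually learns or selects.

```python
import numpy as np

def poison_node_features(X, y, trigger_dims, trigger_value, target_label, poison_idx):
    """Illustrative feature-trigger poisoning: overwrite a few feature
    dimensions of selected nodes with a fixed trigger pattern and relabel
    those nodes with the attacker-chosen target class."""
    Xp, yp = X.copy(), y.copy()
    Xp[np.ix_(poison_idx, trigger_dims)] = trigger_value  # implant trigger pattern
    yp[poison_idx] = target_label                         # adversary-specified output
    return Xp, yp

# Toy example: 6 nodes with 5-dim features; poison nodes 1 and 4.
X = np.zeros((6, 5))
y = np.array([0, 1, 0, 1, 0, 1])
Xp, yp = poison_node_features(X, y, trigger_dims=[0, 2], trigger_value=1.0,
                              target_label=0, poison_idx=[1, 4])
```

A GNN trained on `(Xp, yp)` would associate the trigger pattern with the target label; at test time, adding the same pattern to any node's features steers the prediction, while clean nodes are classified normally.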
In recent years, Graph Neural Networks (GNNs) have achieved excellent results in classification and prediction tasks. Recent studies have demonstrated, however, that GNNs are vulnerable to adversarial attacks. Graph Modification Attack (GMA) and Graph Injection Attack (GIA) are two common attack strategies. Most graph adversarial attack methods are based on GMA, which has a clear drawback: the attacker needs high privileges to modify the original graph, making the attack difficult to execute in practice. GIA can perform attacks without modifying the original graph. However, many GIA models neglect attack imperceptibility, i.e., fake nodes can easily be distinguished from the original nodes. To address this issue, we propose an imperceptible graph injection attack named IMGIA. Specifically, IMGIA uses normal distribution sampling and mask learning to generate fake node features and links, respectively, and then applies a homophily unnoticeability constraint to improve the camouflage of the attack. Extensive experiments on three benchmark datasets demonstrate that IMGIA outperforms existing state-of-the-art GIA methods, with an average improvement in attack effectiveness of 2%.
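The injection step described above can be sketched roughly as follows. This is an illustrative outline only, assuming a dense adjacency matrix: fake node features are sampled from a normal distribution fitted to the clean features, and the learned edge mask of IMGIA is replaced here by a hypothetical `mask_logits` tensor thresholded through a sigmoid.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_fake_nodes(X, A, n_fake, mask_logits=None):
    """Illustrative graph injection: sample fake node features from a
    normal distribution fitted to the clean feature matrix X, then attach
    the fake nodes to the graph via a binarized edge mask (random here;
    learned in the actual attack)."""
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-8
    X_fake = rng.normal(mu, sigma, size=(n_fake, X.shape[1]))
    if mask_logits is None:  # stand-in for the learned mask
        mask_logits = rng.normal(size=(n_fake, X.shape[0]))
    edges = (1.0 / (1.0 + np.exp(-mask_logits))) > 0.5  # sigmoid + threshold
    n = X.shape[0]
    A_new = np.zeros((n + n_fake, n + n_fake))
    A_new[:n, :n] = A            # original graph is left unmodified
    A_new[n:, :n] = edges        # fake-to-original links
    A_new[:n, n:] = edges.T      # keep adjacency symmetric
    return np.vstack([X, X_fake]), A_new

# Toy example: inject 2 fake nodes into a 4-node graph.
X = rng.normal(size=(4, 3))
A = np.zeros((4, 4))
X_new, A_new = inject_fake_nodes(X, A, n_fake=2)
```

Because the original block of `A_new` is copied verbatim, the attack perturbs only the injected rows and columns, which is precisely what distinguishes GIA from GMA; the homophily constraint would additionally be imposed on `X_fake` and `edges` during optimization.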