Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021
DOI: 10.18653/v1/2021.acl-long.147

Poisoning Knowledge Graph Embeddings via Relation Inference Patterns

Abstract: We study the problem of generating data poisoning attacks against Knowledge Graph Embedding (KGE) models for the task of link prediction in knowledge graphs. To poison KGE models, we propose to exploit their inductive abilities, which are captured through relationship patterns like symmetry, inversion and composition in the knowledge graph. Specifically, to degrade the model's prediction confidence on target facts, we propose to improve the model's prediction confidence on a set of decoy facts. Thus, we craft adversarial additions that improve the model's prediction confidence on the decoy facts through these inference patterns.
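To make the symmetry-pattern idea concrete, here is a minimal, hypothetical sketch (not the paper's implementation) of how an attacker might pick an adversarial addition for a DistMult-style scorer; the embeddings, dimensions and triples are invented for illustration.

```python
import numpy as np

# Toy DistMult-style setup; all sizes and triples are made up.
rng = np.random.default_rng(0)
dim, n_entities, n_relations = 16, 5, 2
E = rng.normal(size=(n_entities, dim))   # entity embeddings
R = rng.normal(size=(n_relations, dim))  # relation embeddings

def score(s, r, o):
    """DistMult score <e_s, w_r, e_o>; higher means more plausible."""
    return float(np.sum(E[s] * R[r] * E[o]))

# Target triple (s, r, o) whose rank the attacker wants to degrade,
# and a decoy triple (s, r, o') whose score the attack boosts instead.
s, r, o, o_decoy = 0, 0, 1, 2

# Symmetry pattern: DistMult is symmetric by construction
# (score(x, r, y) == score(y, r, x)), so injecting the adversarial
# addition (o', r, s) into the training graph nudges the retrained
# model to also score (s, r, o') highly; the decoy then competes with
# the target at link-prediction time.
adversarial_addition = (o_decoy, r, s)

print("score(target before attack):", score(s, r, o))
print("score(decoy before attack): ", score(s, r, o_decoy))
print("triple to inject:", adversarial_addition)
```

In the actual attack the injected triple is added to the training set and the model retrained; the decoy's score then rises and pushes the target fact down the ranking.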


Cited by 10 publications (3 citation statements)
References 22 publications
“…Additionally, as in Bhardwaj (2020) and Bhardwaj et al. (2021), we call for future proposals to defend against the security vulnerabilities of KGE models. Some promising directions might be to use adversarial training techniques, or to train ensembles of models over subsets of the training data so that the model's predictions cannot be influenced by only a few triples.…”
Section: Discussion
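The ensemble defense suggested in this excerpt can be sketched as follows. This is a hypothetical illustration rather than an implementation from any cited paper; `train_kge` is a placeholder for a real KGE training routine, and the toy scorer exists only to make the snippet self-contained.

```python
import random
from statistics import median

def train_kge(triples):
    """Placeholder for a real KGE trainer (e.g. DistMult); returns a scorer."""
    known = set(triples)
    # Toy scorer: a member only assigns a high score to triples it has seen.
    return lambda t: 1.0 if t in known else 0.0

def ensemble_score(train_triples, query, n_members=5, subset_frac=0.5, seed=0):
    """Score `query` with models trained on random subsets of the data.

    A handful of injected triples land in only some of the subsets, so
    aggregating member scores with the median dampens their influence.
    """
    rng = random.Random(seed)
    k = int(subset_frac * len(train_triples))
    members = [train_kge(rng.sample(train_triples, k)) for _ in range(n_members)]
    return median(m(query) for m in members)
```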
“…We also study data poisoning attacks against KGE models in Bhardwaj et al. (2021). Here, we exploit the inductive abilities of KGE models to select adversarial additions that improve the predictive performance of the model on a set of decoy triples, which in turn degrades the performance on target triples.…”
Section: Comparison of Datasets
“…Similar problems are seen in the VeReMi Extension dataset, which has been widely used to train ML classifiers that detect misbehaving nodes in VANETs. There are several approaches to this issue: changing the machine learning method, introducing a misclassification cost, and data sampling [20]. In the context of VANETs, the researchers in [21] avoid the problem of imbalanced datasets by oversampling the minority class with the Synthetic Minority Oversampling Technique (SMOTE) [22][23], a data augmentation method that synthesizes new instances of the minority class.…”
Section: B. Tackling Unbalanced Data
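As a quick illustration of the SMOTE step described in this excerpt, here is a minimal, self-contained sketch using the imbalanced-learn library on synthetic data; it is not the cited works' code, and the generated dataset merely stands in for the VeReMi Extension labels.

```python
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Synthetic binary dataset with a 9:1 class imbalance, standing in for
# "behaving" vs. "misbehaving" node labels (hypothetical, not VeReMi data).
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.9, 0.1], random_state=42)
print("before:", Counter(y))

# SMOTE synthesizes new minority-class samples by interpolating between
# existing minority samples and their nearest neighbours, rather than
# duplicating existing rows.
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
print("after: ", Counter(y_res))
```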