Fourteenth ACM Conference on Recommender Systems 2020
DOI: 10.1145/3383313.3412243

Revisiting Adversarially Learned Injection Attacks Against Recommender Systems

Abstract: Recommender systems play an important role in modern information and e-commerce applications. While increasing research is dedicated to improving the relevance and diversity of recommendations, the potential risks of state-of-the-art recommendation models remain under-explored: these models can be subject to attacks from malicious third parties who inject fake user interactions to achieve their own ends. This paper revisits the adversarially-learned injection attack problem, where the injec…

Cited by 51 publications (54 citation statements: 0 supporting, 54 mentioning, 0 contrasting). References 35 publications.
“…• Rev.Adv. [53] perturbation: This method is a state-of-the-art data poisoning attack that inserts a fake user whose interactions are crafted via a bi-level optimization problem. To adapt it to our deletion and replacement perturbation settings, we first find the user in the training data most similar to the fake user, then delete that user's earliest interaction or replace a random interaction, respectively.…”
Section: Baseline Methods (mentioning)
confidence: 99%
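The adaptation described in this excerpt is concrete enough to sketch: find the training user most similar to the adversarially crafted fake user, then delete that user's earliest interaction or replace a random one. Below is a minimal NumPy illustration; the data layout, the cosine-similarity choice, and all function names are assumptions made here, not the implementation from [53] or from the citing paper.

```python
# Minimal sketch of the deletion/replacement adaptation described above.
# All names and design choices are assumptions, not the authors' code.
import numpy as np

def most_similar_user(fake_vec: np.ndarray, train_mat: np.ndarray) -> int:
    """Index of the training user whose binary interaction vector has the
    highest cosine similarity to the fake user's interaction vector."""
    norms = np.linalg.norm(train_mat, axis=1) * np.linalg.norm(fake_vec)
    sims = (train_mat @ fake_vec) / np.maximum(norms, 1e-12)
    return int(np.argmax(sims))

def perturb(train_mat, timestamps, fake_vec, mode="delete", rng=None):
    """Apply the deletion or replacement perturbation to the training user
    most similar to the crafted fake user.

    train_mat  -- (n_users, n_items) binary interaction matrix (modified in place)
    timestamps -- (n_users, n_items) interaction times (0 = no interaction)
    mode       -- "delete": drop the user's earliest interaction;
                  "replace": swap a random interaction for a random unseen item
    """
    rng = rng or np.random.default_rng(0)
    u = most_similar_user(fake_vec, train_mat)
    items = np.flatnonzero(train_mat[u])          # items the user interacted with
    if mode == "delete":
        earliest = items[np.argmin(timestamps[u, items])]
        train_mat[u, earliest] = 0
    else:  # "replace"
        old = rng.choice(items)
        new = rng.choice(np.flatnonzero(train_mat[u] == 0))
        train_mat[u, old], train_mat[u, new] = 0, 1
    return u
```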
“…We follow [17,37] to set the allowed sequence lengths of ML-1M, Steam, and Beauty as {200, 50, 50} respectively, which are also applied as our generated sequence lengths. We follow [39]…” [Table 3. Configurations.]
Section: Implementation Details (mentioning)
confidence: 99%
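For reference, the per-dataset sequence-length caps quoted in this excerpt can be written as a small configuration mapping; the variable names and layout below are hypothetical, not the citing paper's actual configuration.

```python
# Per-dataset caps on interaction-sequence length, as quoted above;
# generated (fake) sequences use the same caps. Hypothetical layout.
MAX_SEQ_LEN = {"ML-1M": 200, "Steam": 50, "Beauty": 50}
GEN_SEQ_LEN = dict(MAX_SEQ_LEN)  # generated sequences match the allowed lengths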
“…Then, the black-box recommender is retrained once and tested on each of the target items; we present the average results in Figure 6. We follow [39] to generate profiles equivalent to 1% of the number of users in the original dataset, and adopt the same baseline methods as in profile pollution.…”
Section: RQ3: Can We Perform Profile Pollution Attacks Using the Extracted Model? (mentioning)
confidence: 99%
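The 1% injection budget quoted above is straightforward to make concrete. A minimal sketch, assuming a round-up rule so that even small datasets receive at least one fake profile (the helper name and rounding choice are ours, not from [39] or the citing paper):

```python
import math

def injection_budget(n_real_users: int, fraction: float = 0.01) -> int:
    """Number of fake profiles to inject: a fixed fraction (here 1%) of the
    users in the original dataset, rounded up so that even a small dataset
    receives at least one fake profile."""
    return max(1, math.ceil(fraction * n_real_users))

# Example: ML-1M has 6,040 users, so a 1% budget injects 61 fake profiles.
assert injection_budget(6040) == 61
```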
“…They also propose to generate fake user-item interactions based on influence functions [15]. Tang et al. [43] propose effective transfer-based poisoning attacks against recommender systems, but note that their approach is less effective on cold items. Our "item representation attack" is distinct from a "profile injection attack" or "poisoning attack", but both kinds of attack have a similar impact, namely pushing items that have been targeted for promotion.…”
Section: Related Work, 2.1 Robustness of Recommender Systems (mentioning)
confidence: 99%
“…Our work is part of the long tradition of research devoted to the security and robustness of recommender system algorithms [5,9,15,16,27,29,33,35,43]. Most work, however, focuses on vulnerabilities related to user profiles.…”
(mentioning)
confidence: 99%