Proceedings of the 13th ACM Conference on Recommender Systems 2019
DOI: 10.1145/3298689.3347031

Adversarial attacks on an oblivious recommender

Abstract: Can machine learning models be easily fooled? Despite the recent surge of interest in learned adversarial attacks in other domains, in the context of recommendation systems this question has mainly been answered using hand-engineered fake user profiles. This paper attempts to reduce this gap. We provide a formulation for learning to attack a recommender as a repeated general-sum game between two players, i.e., an adversary and a recommender oblivious to the adversary's existence. We consider the challenging cas…

Cited by 82 publications (77 citation statements) | References 20 publications
“…The first part (the partial derivative ∂L_adv/∂X) assumes X is independent of the other variables, while the second part indicates that θ* can itself be a function of X. Among all existing studies [11,13,14,26], we found that the second part of Eq. 4 has been completely ignored.…”
Section: Limitations in Existing Studies and Our Contributions
confidence: 86%
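Eq. 4 of the citing paper is not reproduced on this page, but the decomposition the statement describes is the standard total derivative of a bilevel poisoning objective in which the surrogate parameters θ*(X) are themselves trained on the fake data X. A sketch under that assumption (a reconstruction, not copied from the paper):

```latex
% Total derivative of the adversarial loss w.r.t. the fake data X,
% assuming the bilevel form  theta*(X) = argmin_theta L_train(X, theta).
\frac{\mathrm{d}\mathcal{L}_{\mathrm{adv}}}{\mathrm{d}X}
  = \underbrace{\frac{\partial \mathcal{L}_{\mathrm{adv}}}{\partial X}}_{\text{first part}}
  + \underbrace{\left(\frac{\partial \theta^{*}(X)}{\partial X}\right)^{\!\top}
      \frac{\partial \mathcal{L}_{\mathrm{adv}}}{\partial \theta^{*}}}_{\text{second part}}
```

Dropping the second term treats the trained surrogate as a constant with respect to X, which is exactly the limitation the statement attributes to prior work [11,13,14,26].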
“…As we can see from Algorithm 1, solving the inner objective for surrogate model training is simple and conventional, while the challenge comes from obtaining the adversarial gradient ∇_X L_adv to update the fake data. In the literature, existing works have either tried to estimate this gradient [11] or to compute it directly [13,14,26]. But under the problem formulation in Eqs.…”
Section: Limitations in Existing Studies and Our Contributions
confidence: 99%
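Algorithm 1 itself is not shown on this page. Below is a minimal, hypothetical sketch of the alternating scheme the statement describes: a conventional inner loop trains a surrogate matrix-factorization model on the poisoned data, and an outer step updates the fake profiles X from ∇_X L_adv, here obtained by differentiating through one unrolled inner update (one simple way to make the surrogate optimum depend on X; the cited works estimate or compute this gradient differently). All names and the surrogate choice are illustrative assumptions, not the method of any cited paper.

```python
import torch

# Hypothetical sketch: alternating surrogate training (inner) and
# fake-data update via the adversarial gradient grad_X L_adv (outer).
torch.manual_seed(0)
n_real, n_fake, n_items, k = 50, 5, 20, 8
R = (torch.rand(n_real, n_items) > 0.7).float()      # real user-item matrix
X = torch.rand(n_fake, n_items, requires_grad=True)  # fake profiles (attack variable)
target = 3                                           # item the adversary promotes
x_opt = torch.optim.Adam([X], lr=0.1)

for outer in range(20):
    # Inner objective: conventional surrogate training on [R; X], X detached.
    U = (0.1 * torch.randn(n_real + n_fake, k)).requires_grad_()
    V = (0.1 * torch.randn(n_items, k)).requires_grad_()
    inner_opt = torch.optim.Adam([U, V], lr=0.05)
    data = torch.cat([R, X.detach()], dim=0)
    for _ in range(200):
        inner_opt.zero_grad()
        ((U @ V.T - data) ** 2).mean().backward()
        inner_opt.step()

    # One unrolled inner step with X attached, so the surrogate optimum
    # carries a dependence on X (the "second part" discussed above).
    loss_train = ((U @ V.T - torch.cat([R, X], dim=0)) ** 2).mean()
    gU, gV = torch.autograd.grad(loss_train, (U, V), create_graph=True)
    U1, V1 = U - 0.05 * gU, V - 0.05 * gV

    # Outer objective: raise the target item's predicted score for real users.
    L_adv = -(U1[:n_real] @ V1.T)[:, target].mean()
    x_opt.zero_grad()
    L_adv.backward()                  # grad_X L_adv flows through gU, gV
    x_opt.step()
    with torch.no_grad():
        X.clamp_(0.0, 1.0)            # keep fake ratings in a valid range
```

The inner loop is indeed the "simple and conventional" piece; all of the difficulty sits in the last few lines, where the gradient must pass through the surrogate's training procedure.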
“…Inspired by the success of GANs, a few works have turned to GANs for the shilling attack task [15,16]. However, directly adopting existing GAN methods to generate adversarial examples, without special designs (like AUSH) that tailor them to RS, does not provide satisfactory results in shilling attacks, as shown in our experiments.…”
Section: Shilling Attacks Against RS
confidence: 93%
“…Consequently, special designs are required to balance and achieve multiple attack goals simultaneously while keeping the attack undetectable. Due to the aforementioned challenges, only a few recent works [15,16] consider directly adopting the idea of adversarial attacks for shilling attacks, and they do not show satisfactory attack effects on a wide range of RS, as illustrated later in our experiments. Apart from these methods, most existing shilling attack methods create injection profiles based on global statistics, e.g., the average rating value [4,9] and the rating variance [14] of each item.…”
Section: Introduction
confidence: 96%
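For concreteness, here is a hypothetical sketch of the kind of global-statistics injection profile the statement mentions, in the style of the classic average attack: the pushed target item receives the maximum rating, while randomly chosen filler items are rated near their observed per-item means. Function and variable names are illustrative and not taken from the cited papers.

```python
import numpy as np

def average_attack_profile(R, target_item, n_filler, r_max=5.0, rng=None):
    """One fake-user profile built from a global statistic: filler items
    are rated near each item's mean rating, and the target gets r_max."""
    rng = rng or np.random.default_rng()
    n_items = R.shape[1]
    observed = np.where(R > 0, R, np.nan)             # treat 0 as unrated
    item_mean = np.nan_to_num(np.nanmean(observed, axis=0),
                              nan=(1.0 + r_max) / 2)  # fallback for unrated items
    profile = np.zeros(n_items)
    candidates = np.setdiff1d(np.arange(n_items), [target_item])
    fillers = rng.choice(candidates, size=n_filler, replace=False)
    profile[fillers] = np.clip(rng.normal(item_mean[fillers], 1.0),
                               1.0, r_max).round()
    profile[target_item] = r_max                      # push the target item
    return profile

# Example: inject 10 fake users into a toy rating matrix.
R = np.random.default_rng(0).integers(0, 6, size=(100, 40)).astype(float)
fakes = np.stack([average_attack_profile(R, target_item=7, n_filler=8,
                                         rng=np.random.default_rng(s))
                  for s in range(10)])
R_poisoned = np.vstack([R, fakes])
```

Profiles built this way mimic average rating behavior, which is what makes them hard to detect; the paper under discussion instead learns the fake profiles rather than hand-engineering them from such statistics.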