Recommender Systems Handbook 2021
DOI: 10.1007/978-1-0716-2197-4_9
Adversarial Recommender Systems: Attack, Defense, and Advances

Cited by 8 publications (2 citation statements) | References 72 publications
“…These ML-based attacks can be divided into two classes, evasion attacks and poisoning attacks, based on attack timing [53,314]. In this part, we introduce the two types of ML-based attacks and the Adversarial Machine Learning (AML) methods used to defend against them.…”
Section: Machine-learned Adversarial Attacks
Confidence: 99%
“…In recommendation tasks, most attackers aim to adjust the representations of users or items by adding noise [314]. For example, He et al. [316] introduced a method that adds perturbations to the parameters of the embedding layers based on the Fast Gradient Sign Method (FGSM) [317], which has been widely applied to generate adversarial perturbations.…”
Section: Evasion Attack
Confidence: 99%
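
To make the FGSM step concrete, below is a minimal sketch of how an embedding-level perturbation of this kind could look in PyTorch. It is an illustration rather than the published method of He et al. [316]: the attribute names `user_emb` and `item_emb`, the helper name `fgsm_perturb_embeddings`, and the epsilon value are assumptions made for this example.

```python
import torch

def fgsm_perturb_embeddings(model, loss_fn, users, items, labels, eps=0.5):
    """Apply an FGSM-style perturbation to a recommender's embedding tables.

    Assumes `model.user_emb` and `model.item_emb` are torch.nn.Embedding
    layers (hypothetical names used only for this sketch).
    """
    # Compute the loss and backpropagate to obtain gradients
    # with respect to the embedding weights.
    model.zero_grad()
    scores = model(users, items)
    loss = loss_fn(scores, labels)
    loss.backward()

    # FGSM: shift each embedding parameter by eps in the gradient-sign
    # direction, i.e. the direction that locally increases the loss most.
    with torch.no_grad():
        for emb in (model.user_emb, model.item_emb):
            if emb.weight.grad is not None:
                emb.weight.add_(eps * emb.weight.grad.sign())
```

In an adversarial-training defense of the kind the quoted passage alludes to, a perturbation like this would be generated on the fly at each training step and the model optimized to keep its ranking loss low under it; `eps` trades off the strength of the perturbation against how far the embeddings drift from their learned values.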