Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence 2022
DOI: 10.24963/ijcai.2022/471

Understanding the Limits of Poisoning Attacks in Episodic Reinforcement Learning

Abstract: Deep Reinforcement Learning (DRL) has been a promising solution to many complex decision-making problems. Nevertheless, the notorious weakness in generalization across environments prevents widespread application of DRL agents in real-world scenarios. Although advances have been made recently, most prior works assume sufficient online interaction with training environments, which can be costly in practical cases. To this end, we focus on an offline-training-online-adaptation setting, in which the agent first lea…

Cited by 3 publications (4 citation statements)
References 0 publications
“…For example, Tessler, Efroni, and Mannor (2019) and Lee et al. (2021) considered action attacks. Reward poisoning attacks are the focus of the work by Zhang et al. (2020b) and Rangi et al. (2022). In fact, a combined action and reward attack is devised by Rangi et al. (2022).…”
Section: Related Work
“…Reward poisoning attacks are the focus of the work by Zhang et al. (2020b) and Rangi et al. (2022). In fact, a combined action and reward attack is devised by Rangi et al. (2022). Most of these works consider the policy teaching setting, where the attacker's goal is for the victim to follow a fixed policy π†.…”
Section: Related Work
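The policy teaching setting mentioned above can be illustrated with a minimal sketch: an attacker perturbs each observed reward so that a Q-learning victim converges to a fixed target policy π†. The toy two-state MDP, the perturbation magnitude `delta`, and all function names here are illustrative assumptions, not the construction from the paper.

```python
import numpy as np

# Illustrative reward-poisoning "policy teaching" attack (not the
# paper's construction): the attacker shifts each reward so that the
# target action always looks strictly better to the victim.
np.random.seed(0)
n_states, n_actions = 2, 2
pi_dagger = np.array([1, 0])  # attacker's target action in each state

def true_reward(s, a):
    # Honest environment prefers action 0 in every state.
    return 1.0 if a == 0 else 0.0

def poison(s, a, r, delta=2.0):
    # Boost the target action's reward, suppress all others.
    return r + delta if a == pi_dagger[s] else r - delta

# Victim: standard tabular eps-greedy Q-learning on poisoned rewards.
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1
s = 0
for t in range(5000):
    if np.random.rand() < eps:
        a = np.random.randint(n_actions)
    else:
        a = int(Q[s].argmax())
    r = poison(s, a, true_reward(s, a))
    s_next = np.random.randint(n_states)  # toy uniform transitions
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

learned = Q.argmax(axis=1)
print(learned.tolist())  # victim's greedy policy matches pi_dagger
```

Because the poisoned reward gap between the target action and every other action exceeds the honest reward differences, the victim's greedy policy ends up following π† even though the unpoisoned environment rewards the opposite actions.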
“…Offline Reward Poisoning: Ma et al. (2019); Rakhsha et al. (2020, 2021a); Rangi et al. (2022b); Zhang and Parkes (2008); Zhang, Parkes, and Chen (2009) focus on adversarial attacks on offline single-agent reinforcement learners. Gleave et al. (2019) and Guo et al. (2021) study poisoning attacks on multi-agent reinforcement learners, assuming that the attacker controls one of the learners.…”
Section: Related Work
“…Data poisoning attacks have been well studied in supervised learning (intentionally forcing the learner to train a wrong classifier) and reinforcement learning (a wrong policy) (Banihashem et al. 2022; Huang and Zhu 2019; Liu and Lai 2021; Rakhsha et al. 2020, 2021a,b; Sun, Huo, and Huang 2020; Zhang et al. 2020; Ma et al. 2019; Rangi et al. 2022; Zhang and Parkes 2008; Zhang, Parkes, and Chen 2009). Can data poisoning attacks be a threat to Markov Games, too?…”
Section: Introduction