“…While there is much existing work addressing adversarial attacks on supervised learning models [Szegedy et al., 2014, Goodfellow et al., 2015, Kurakin et al., 2017, Moosavi-Dezfooli et al., 2017, Wang et al., 2018, Cohen et al., 2019, Dohmatob, 2019, Wang et al., 2019, Carmon et al., 2019, Pinot et al., 2019, Alayrac et al., 2019, Dasgupta et al., 2019, Cicalese et al., 2020, Li et al., 2021], the understanding of adversarial attacks on RL models is less complete. The limited existing works on adversarial attacks against RL formally or experimentally consider different types of poisoning attacks [Huang and Zhu, 2019, Sun et al., 2021, Rakhsha et al., 2020, 2021b]. [Sun et al., 2021] discusses the differences between these poisoning attacks.…”