In general, data poisoning attacks perturb training data to intentionally cause malfunctions in the target model [Biggio and Roli, 2018, Goldblum et al., 2020, Schwarzschild et al., 2021]. A common class of poisoning attacks aims to cause test-time error on some given samples [Koh and Liang, 2017, Muñoz-González et al., 2017, Chen et al., 2017, Koh et al., 2018, Shafahi et al., 2018] or on all unseen samples [Biggio et al., 2012, Feng et al., 2019, Liu and Shroff, 2019, Shen et al., 2019, Huang et al., 2021, Yuan and Wu, 2021, Fowl et al., 2021a]. The latter attacks are also known as indiscriminate poisoning attacks because they do not target specific examples [Barreno et al., 2010].
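To make the distinction concrete, the following is a minimal sketch of an indiscriminate poisoning attack via random label flipping; the dataset, model, and flip rate are illustrative assumptions, not the attacks from the cited papers. Because the flipped labels are chosen uniformly at random, accuracy degrades on all unseen samples rather than on a chosen target.

```python
# A minimal sketch of indiscriminate poisoning via label flipping,
# assuming a scikit-learn-style workflow (illustrative, not from the cited work).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean binary classification data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

def poison_labels(y, flip_rate, rng):
    """Flip a fraction of training labels uniformly at random
    (indiscriminate: no specific target samples)."""
    y_poisoned = y.copy()
    n_flip = int(flip_rate * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 0 <-> 1
    return y_poisoned

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train, poison_labels(y_train, flip_rate=0.4, rng=rng)
)

# The poisoned model loses accuracy on *all* unseen samples.
print(f"clean test accuracy:    {clean_model.score(X_test, y_test):.3f}")
print(f"poisoned test accuracy: {poisoned_model.score(X_test, y_test):.3f}")
```

A targeted attack, by contrast, would craft or relabel points so that the trained model errs only on a few pre-selected test samples while overall accuracy stays largely intact.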