“…Studies of online optimization algorithms robust to adversarial corruptions have been extended to a variety of models, including those for the multi-armed bandit [Lykouris et al., 2018, Gupta et al., 2019, Zimmert and Seldin, 2021, Hajiesmaili et al., 2020], Gaussian process bandits [Bogunovic et al., 2020], Markov decision processes [Lykouris et al., 2019], prediction with expert advice [Amir et al., 2020], online linear optimization [Li et al., 2019], and linear bandits [Bogunovic et al., 2021, Lee et al., 2021]. There is also a line of work on effective attacks against bandit algorithms [Jun et al., 2018, Liu and Shroff, 2019].…”