“…The robustness of neural networks to distribution shifts is a broad research area (Hendrycks et al., 2021a). Its sub-fields include domain adaptation (Shimodaira, 2000; Wilson & Cook, 2020), out-of-distribution (OOD) detection (Hendrycks & Gimpel, 2017; Schwinn et al., 2021a), corruption and perturbation robustness (Hendrycks & Dietterich, 2019; Yin et al., 2019; Geirhos et al., 2019; Zhang et al., 2021), robustness to spatial transformations (Engstrom et al., 2019), and adversarial robustness (Szegedy et al., 2014; Goodfellow et al., 2015; Madry et al., 2018; Bungert et al., 2021), among others. Here, we focus on robustness against common corruptions and perturbations, spatial transformations, natural adversarial examples, and optimized adversarial examples.…”