“…From the defenders' side, recently proposed methods for improving the safety of deep learning systems include [2–4,9,12,18,19,21,26,27,31,32,34,37,40,43,45,47,50,53,56,57]. Most of these methods fall broadly into four classes: (1) adversarial training, where adversarial samples are used to retrain the deep learning system [3,22,45,48,50]; (2) gradient masking, where the deep learning system is designed to have an extremely flat loss landscape with respect to perturbations of the input samples [4,40]; (3) feature discretization, where the features of samples (both benign and adversarial) are discretized before being fed to the deep learning system [37,57]; (4) generative-model-based approaches, where a sample from the distribution of benign samples is found that approximates a given input, and this approximation is then used as the input to the deep learning system [18,21,26,31,32,43,47].…”
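As a concrete illustration of class (1), the following is a minimal PyTorch sketch of one adversarial-training step, using the single-step FGSM attack to craft the adversarial samples. The perturbation budget `eps`, the 50/50 mixing of clean and adversarial losses, and the function names `fgsm_adversarial_batch` and `adversarial_training_step` are illustrative assumptions, not the exact recipes of the cited methods [3,22,45,48,50].

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial_batch(model, x, y, eps=0.03):
    """Craft FGSM perturbations of a clean batch (inputs assumed in [0, 1])."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()                                # gradient w.r.t. the input
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()    # one gradient-sign step
        x_adv = x_adv.clamp(0.0, 1.0)              # stay in the valid range
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    """One retraining step on a 50/50 mix of clean and adversarial samples."""
    x_adv = fgsm_adversarial_batch(model, x, y, eps)
    optimizer.zero_grad()                          # clear grads from crafting
    loss = 0.5 * (F.cross_entropy(model(x), y)
                  + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```

The cited methods differ in which attack is used to craft the samples and in how clean and adversarial losses are weighted, but they share this retrain-on-perturbed-data structure.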
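Class (3) can be as simple as reducing the bit depth of the input features. Below is a minimal NumPy sketch assuming inputs normalized to [0, 1]; the `bits` parameter and the function name are illustrative choices rather than details taken from [37,57].

```python
import numpy as np

def discretize_features(x: np.ndarray, bits: int = 4) -> np.ndarray:
    """Quantize features in [0, 1] to 2**bits evenly spaced levels.

    Applied identically to benign and adversarial samples before they
    are fed to the classifier, which removes small perturbations that
    fall below the quantization step.
    """
    levels = 2 ** bits - 1                       # number of quantization steps
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels
```

Because the same discretization is applied to every input at test time, the choice of bit depth trades benign accuracy against robustness to small perturbations.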