“…Szegedy et al. [53] first showed that an input image, perturbed with changes imperceptible to the human eye, can cause convolutional neural networks (CNNs) to produce wrong labels with high confidence. Since then, numerous methods for generating adversarial examples [4,7,15,24,25,30,32,33,35,36,37,51] and for defending against adversarial attacks [6,15,16,38,55,60] have been proposed. Adversarial training [15,21,25,30], an important defence method, requires generating adversarial examples during training.…”
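To make the last point concrete, below is a minimal sketch of one adversarial training step, using a one-step fast gradient sign method (FGSM) perturbation, x_adv = x + ε·sign(∇ₓL(θ, x, y)), as the example generator. The excerpt does not specify an implementation; the PyTorch code, function names, and the choice of FGSM and ε are illustrative assumptions, not the paper's own method.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps):
    """Generate an FGSM adversarial example: x_adv = x + eps * sign(grad_x L)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid pixel range.
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, eps=8 / 255):
    """One training step: generate adversarial examples on the fly, then fit them."""
    x_adv = fgsm_example(model, x, y, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, stronger multi-step generators (e.g. PGD) are often substituted for FGSM inside the same loop; the key point the excerpt makes is that examples are generated during training, which is what makes adversarial training comparatively expensive.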