Various methods have been proposed over the last few years [6], [7], [8], [9] to evaluate the adversarial robustness of DNNs. One of the most practical methods [4], [10], [11] is to approximate the adversarial risk of the subject model $f$ as a robustness indicator, i.e.,
$$R(f, \mathcal{D}, \mathcal{B}, \epsilon) = \mathbb{E}_{(x,y)\sim \mathcal{D}}\left[\max_{x' \in \mathcal{B}(x, \epsilon)} \ell_{0\text{-}1}\big(f(x'), y\big)\right], \tag{1}$$

P. Xia, Z. Li, and B. Li are with the Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China (e-mail: {xpengfei, iceli}@mail.ustc.edu.cn; binli@ustc.edu.cn).
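As a minimal sketch of how the adversarial risk in Eq. (1) can be estimated empirically: the outer expectation over $(x,y)\sim\mathcal{D}$ becomes an average over a finite dataset, and the inner maximization over the perturbation ball $\mathcal{B}(x,\epsilon)$ is approximated by an attack. The example below is a hypothetical illustration, not the method of any cited work: it uses a toy linear classifier in place of a DNN and crude random sampling inside an $L_\infty$ ball in place of a real attack, so the resulting estimate is a lower bound on the true inner maximum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier standing in for the DNN f (hypothetical).
W = rng.normal(size=(2, 4))  # 2 classes, 4 input features


def f(x):
    """Predicted class label for input x."""
    return int(np.argmax(W @ x))


def zero_one_loss(pred, y):
    """The 0-1 loss from Eq. (1): 1 on misclassification, else 0."""
    return int(pred != y)


def adversarial_risk(X, Y, eps, n_samples=200):
    """Monte-Carlo estimate of Eq. (1).

    For each (x, y), the inner max over the L-inf ball B(x, eps) is
    approximated by random sampling (a weak stand-in for a proper
    attack such as PGD); the worst-case 0-1 losses are then averaged
    over the dataset, approximating the outer expectation.
    """
    total = 0
    for x, y in zip(X, Y):
        worst = zero_one_loss(f(x), y)  # include the unperturbed point
        for _ in range(n_samples):
            delta = rng.uniform(-eps, eps, size=x.shape)
            worst = max(worst, zero_one_loss(f(x + delta), y))
            if worst == 1:  # 0-1 loss is binary; no need to keep searching
                break
        total += worst
    return total / len(X)


X = rng.normal(size=(50, 4))
Y = np.array([f(x) for x in X])  # label with the model itself: clean risk is 0

print(adversarial_risk(X, Y, eps=0.0))  # 0.0: empty perturbation budget
print(adversarial_risk(X, Y, eps=0.5))  # grows as the ball crosses boundaries
```

Because random sampling rarely finds the exact worst-case perturbation, practical estimators replace it with gradient-based attacks; a stronger inner maximizer can only increase the estimated risk.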