“…The diffusion and widespread use of deep learning methods in artificial intelligence systems thus pose significant security and privacy issues. From the security point of view, Adversarial Attacks (AA) have shown that deep learning models can be easily fooled [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25], while, from a privacy point of view, it has been shown that information can be easily extracted from datasets and learned models [26, 27, 28]. It has also been shown that attack methods based on adversarial samples can be used for privacy-preserving purposes [29, 30, 31, 32, 33]: in this case, data are intentionally modified to prevent unauthorized information extraction by fooling the unauthorized software.…”
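To make the idea of an adversarial sample concrete, the sketch below applies the Fast Gradient Sign Method (FGSM), a standard adversarial-attack technique, to a toy logistic-regression model. The model, weights, input, and perturbation budget `eps` are all illustrative assumptions, not taken from the cited works; the point is only that a small, sign-based perturbation of the input raises the model's loss, i.e. degrades what the model can infer from the data.

```python
import numpy as np

# Illustrative FGSM sketch on a toy logistic-regression "model".
# All concrete values (w, b, x, y, eps) are assumptions for the example.

rng = np.random.default_rng(0)
w = rng.normal(size=4)   # fixed model weights
b = 0.1                  # fixed model bias
x = rng.normal(size=4)   # a clean input sample
y = 1.0                  # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(v):
    # binary cross-entropy of the model's prediction vs. the true label
    p = sigmoid(w @ v + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Gradient of the loss w.r.t. the input (analytic for logistic regression)
grad_x = (sigmoid(w @ x + b) - y) * w

eps = 0.25
x_adv = x + eps * np.sign(grad_x)   # FGSM step: move each feature to maximize the loss

print(loss(x), loss(x_adv))  # the adversarial sample incurs a higher loss
```

In the privacy-preserving use described above, the same mechanism is turned around: the data owner perturbs their own data so that an unauthorized model misclassifies it or fails to extract the protected attribute.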