Modern deep neural networks are highly vulnerable to adversarial examples, which has drawn growing research attention to crafting powerful adversarial examples. Most generation algorithms create global perturbations that degrade the visual quality of adversarial examples. To mitigate this drawback, some attacks attempt to generate local perturbations. However, existing local adversarial attacks are time-consuming, and the generated adversarial examples remain distinguishable from clean images. In this paper, we propose a novel efficient local adversarial attack (ELAA) that uses model interpreters to generate strong local perturbations and improve the imperceptibility of the generated adversarial examples. Specifically, we take advantage of model interpretation methods to locate the discriminative regions of clean images. Then, we generate local adversarial examples by applying masked perturbations to the original clean images. We also propose a new optimization method to reduce the redundancy of local perturbations. Through extensive experiments, we show that ELAA maintains high attack ability while preserving the visual quality of clean images. Experimental results also demonstrate that our local attack outperforms state-of-the-art local attack methods under various system settings.
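The masking step described in this abstract can be illustrated with a minimal sketch: given an interpretation-derived saliency map, the perturbation is kept only on the most discriminative pixels and zeroed elsewhere. The `local_perturbation` helper and the `keep_ratio` threshold below are hypothetical illustrations of a generic local-attack masking step, not ELAA's actual algorithm.

```python
import numpy as np

def local_perturbation(image, perturbation, saliency, keep_ratio=0.2):
    """Apply a perturbation only inside the most salient region.

    image, perturbation: (H, W, C) arrays; saliency: (H, W) array from
    some model interpretation method (e.g. a gradient-based saliency map).
    Only the top `keep_ratio` fraction of pixels (by saliency) is perturbed.
    """
    # Threshold that separates the top `keep_ratio` fraction of saliency values.
    thresh = np.quantile(saliency, 1.0 - keep_ratio)
    # Binary mask over pixels; broadcast across channels when applied.
    mask = (saliency >= thresh).astype(image.dtype)
    return image + perturbation * mask[..., None]
```

Pixels outside the masked region are left untouched, which is what keeps the perturbation local and the adversarial example visually close to the clean image.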
Nowadays, image-based smart services are widely used in daily life, generating large volumes of digital images. Since smart devices outsource digital images to the cloud, researchers often need to select desired targets from the massive image collections in the cloud for analysis and for improving smart services. Therefore, privacy-preserving image retrieval on the cloud has attracted significant attention, and ensuring the availability of images in the cloud is also a crucial link. Guaranteeing image security and availability in the cloud environment while precisely preserving retrieval accuracy poses a utility-security dilemma that few existing works have explicitly addressed. Therefore, this paper proposes privacy-preserving image retrieval in a distributed environment based on a combination of image encryption for similarity search and secret image sharing. Building on these techniques, we define a two-stage encryption scheme. The first-stage encryption algorithm modifies Wolfram's reversible cellular-automata-based image encryption to create a set of processed images that ensure image security and retrieval accuracy. The second-stage encryption algorithm is built on secret image sharing to improve image security and availability. The color histogram can be extracted from the encrypted images for similarity retrieval, and the shadows can be
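The retrieval property this abstract relies on can be sketched briefly: a permutation-style encryption rearranges pixel positions but leaves per-channel color histograms unchanged, so histograms extracted from encrypted images can still drive similarity search. The helper names below are assumptions for illustration, not the paper's cellular-automata construction.

```python
import numpy as np

def color_histogram(img, bins=16):
    """Concatenated per-channel histogram of an (H, W, 3) uint8 image,
    L1-normalized so images of different sizes are comparable."""
    h = np.concatenate([
        np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return h / h.sum()

def histogram_distance(h1, h2):
    """L1 distance between normalized histograms; smaller means more similar."""
    return np.abs(h1 - h2).sum()
```

Because a pixel permutation is a bijection on positions, the histogram of an encrypted image equals that of the original exactly, which is what preserves retrieval accuracy under this style of encryption.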
While gradient aggregation plays a vital role in federated and collaborative learning, recent studies have revealed that it may suffer from attacks such as gradient inversion, where private training data can be recovered from the shared gradients. However, the performance of existing attack methods is limited because they usually require prior knowledge of Batch Normalization statistics and can only reconstruct a single image or a small batch of images. To make the attack less restrictive and more applicable, we propose an effective and practical gradient inversion method in this paper. Specifically, we use cosine similarity to measure the difference between the gradients of the synthesized and ground-truth images, and then construct an input regularization for the fully connected layer to ensure the fidelity of the image. Moreover, we apply a total variation denoising strategy to the convolutional feature map to further improve the smoothness of the reconstructed image. Experimental results demonstrate that our method can reconstruct high-fidelity training data at large batch sizes on complex datasets such as ImageNet.
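The two loss terms mentioned in this abstract can be sketched in a few lines: a cosine-similarity gradient-matching loss and a total-variation smoothness penalty. Both functions are simplified NumPy illustrations with hypothetical names, not the authors' code (which applies the TV term to convolutional feature maps rather than the raw image).

```python
import numpy as np

def cosine_gradient_loss(synth_grads, true_grads):
    """Sum of (1 - cosine similarity) over corresponding gradient tensors.

    synth_grads / true_grads: lists of arrays, one per model parameter,
    for the synthesized and ground-truth inputs respectively.
    Minimizing this drives the synthesized image's gradients toward the
    shared (ground-truth) gradients in direction, not just magnitude.
    """
    loss = 0.0
    for gs, gt in zip(synth_grads, true_grads):
        gs, gt = gs.ravel(), gt.ravel()
        denom = np.linalg.norm(gs) * np.linalg.norm(gt) + 1e-12
        loss += 1.0 - gs.dot(gt) / denom
    return loss

def total_variation(x):
    """Anisotropic total-variation penalty on an (H, W, C) array;
    penalizes differences between neighboring entries to encourage smoothness."""
    dh = np.abs(np.diff(x, axis=0)).mean()
    dw = np.abs(np.diff(x, axis=1)).mean()
    return dh + dw
```

In a full attack these terms would be combined with the fully-connected-layer input regularization and minimized with respect to the synthesized image by a gradient-based optimizer.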