Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security
DOI: 10.1145/3319535.3354222
AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning

Abstract: Perceptual ad-blocking is a novel approach that detects online advertisements based on their visual content. Compared to traditional filter lists, the use of perceptual signals is believed to be less prone to an arms race with web publishers and ad networks. We demonstrate that this may not be the case. We describe attacks on multiple perceptual ad-blocking techniques, and unveil a new arms race that likely disfavors ad-blockers. Unexpectedly, perceptual ad-blocking can also introduce new vulnerabilities that …

Cited by 88 publications (94 citation statements). References 51 publications.
“…Research is ongoing in improving the stability and convergence of GANs (e.g., Denton, Chintala, Szlam, & Fergus; Salimans et al.). Another related topic is to make deep neural networks robust against minor perturbations, often through transforming the training images (e.g., Guo, Rana, Cisse, & van der Maaten; Papernot & McDaniel; Tramèr, Kurakin, Papernot, Boneh, & McDaniel). Detailed discussion on the two related topics is beyond the scope of this survey paper.…”
Section: Discussion
confidence: 99%
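To make the input-transformation idea in this excerpt concrete, here is a minimal sketch of one such defense: the image is randomly resized and zero-padded before being fed to the classifier, so a pixel-level perturbation no longer lines up with what the model sees. The specific transform (random resize-and-pad) and all names are illustrative assumptions, not the exact method of any work cited above.

```python
# Sketch of an input-transformation defense (illustrative; PyTorch assumed).
import torch
import torch.nn.functional as F

def random_resize_pad(x: torch.Tensor, out_size: int = 40) -> torch.Tensor:
    """Randomly shrink a batch of images (N, C, H, W) in [0, 1], then
    zero-pad back to out_size x out_size at a random offset."""
    n, c, h, w = x.shape
    new_size = int(torch.randint(h // 2, out_size, (1,)))
    x = F.interpolate(x, size=(new_size, new_size),
                      mode="bilinear", align_corners=False)
    pad = out_size - new_size
    left = int(torch.randint(0, pad + 1, (1,)))
    top = int(torch.randint(0, pad + 1, (1,)))
    # F.pad takes (left, right, top, bottom) for the last two dimensions.
    return F.pad(x, (left, pad - left, top, pad - top))

def defended_logits(model, x, out_size: int = 40):
    # The randomness is applied at inference time, so an attacker cannot
    # tailor a perturbation to one fixed preprocessing pipeline.
    return model(random_resize_pad(x, out_size))
```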
“…Recently, Xiao et al. (2018) and Tramèr & Boneh (2017) observed independently that it is possible to use various spatial transformations to construct adversarial examples for naturally and ℓ∞-adversarially trained models. The main difference from our work is that we show even very simple transformations (translations and rotations) are sufficient to break a variety of classifiers, while the transformations employed in (Xiao et al., 2018) and (Tramèr & Boneh, 2017) are more involved. The transformation in (Xiao et al., 2018) is based on performing a displacement of individual pixels in the original image, constrained to be globally smooth and then optimized for misclassification probability.…”
Section: Related Work
confidence: 99%
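The "very simple transformations" claim in this excerpt lends itself to a short sketch: a brute-force grid search over small rotations and translations, keeping the first transform that flips the prediction. This is an illustrative reconstruction of the general idea, not the cited authors' code; all names and parameter ranges are assumptions.

```python
# Sketch of a rotation/translation grid-search attack (illustrative).
import itertools
import torch
import torchvision.transforms.functional as TF

def spatial_attack(model, x, label, max_rot=30, max_shift=3):
    """Search rotations (degrees) and pixel shifts of one image
    x of shape (1, C, H, W) for a transform that is misclassified."""
    model.eval()
    for rot, dx, dy in itertools.product(
            range(-max_rot, max_rot + 1, 2),
            range(-max_shift, max_shift + 1),
            range(-max_shift, max_shift + 1)):
        x_t = TF.affine(x, angle=float(rot), translate=[dx, dy],
                        scale=1.0, shear=0.0)
        with torch.no_grad():
            if model(x_t).argmax(dim=1).item() != label:
                return x_t, (rot, dx, dy)  # adversarial transform found
    return None, None  # no misclassifying transform on this grid
```

Notably, this search needs only the model's predictions, not its gradients; contrast this with the gradient-based methods discussed in the next excerpt.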
“…The transformation in (Xiao et al., 2018) is based on performing a displacement of individual pixels in the original image, constrained to be globally smooth and then optimized for misclassification probability. Tramèr & Boneh (2017) consider an ℓ∞-bounded pixel-wise perturbation of a version of the original image that has been slightly rotated and in which a few random pixels have been flipped. Both of these methods require direct access to the attacked model (or a surrogate) to compute (or at least estimate) the gradient of the loss function with respect to the model's input.…”
Section: Related Work
confidence: 99%
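The last sentence of this excerpt is worth making concrete. A single fast-gradient-sign step, sketched below under the assumption of a differentiable PyTorch classifier, shows exactly where the white-box requirement enters: the attack must backpropagate through the model to obtain the loss gradient with respect to the input. Names and the epsilon value are illustrative.

```python
# Sketch of a one-step l-infinity attack (FGSM-style; illustrative names).
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()  # this backward pass is why model access is needed
    # Step each pixel by +/- eps in the loss-increasing direction,
    # yielding an l-infinity-bounded perturbation of size eps.
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```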
“…There has been a consensus that a general method to improve the robustness of neural networks is adversarial training [5,39,83,84]. It refers to the process of re-training a model with adversarial examples to improve its classification accuracy on such modified inputs, hardening models that would otherwise be vulnerable to white-box attacks.…”
Section: Defense Methods
confidence: 99%
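A minimal sketch of the adversarial-training loop this excerpt describes, assuming a PyTorch classifier and a PGD inner step; the hyperparameters and function names are illustrative choices, not a recipe from any of the cited works.

```python
# Sketch of adversarial training (illustrative; PyTorch assumed).
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """Inner maximization: find an l-infinity perturbation of radius eps
    that increases the loss, via projected gradient ascent."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x, then into valid pixels.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: re-train the model on the adversarial batch."""
    x_adv = pgd(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```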