2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00129
Achieving Robustness in the Wild via Adversarial Mixing With Disentangled Representations

Cited by 41 publications (33 citation statements). References 15 publications.
“…Instead of using heuristic data augmentations, which cannot capture more complex transformations, the desired augmentation can be learnt. The training data can be augmented by transforming samples using a generative model so that they come from another part of the domain conditioned on an attribute (Goel et al, 2020), come from another domain (Zhou et al, 2020), are domain agnostic (Carlucci et al, 2019), or have a different style (Gowal et al, 2020; Geirhos et al, 2019). These methods often build on work in image generation, such as CYCLEGAN and STYLEGAN (Karras et al, 2019).…”
Section: Literature Review
confidence: 99%
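The learned-augmentation idea described above can be illustrated with a minimal sketch. Here `conditional_generator` is a toy stand-in (a linear blend, purely hypothetical) for a pretrained conditional generative model; in practice it would be a trained network such as a StyleGAN conditioned on an attribute:

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional_generator(x, attribute):
    """Toy stand-in for a pretrained conditional generator: shift a
    sample toward an attribute prototype. A real system would use a
    trained network, not a linear blend."""
    alpha = 0.3  # mixing strength (hypothetical hyperparameter)
    return (1 - alpha) * x + alpha * attribute

def augment_batch(batch, attributes):
    """Pair each sample with a randomly chosen target attribute and
    append the generated variants, doubling the effective batch."""
    chosen = attributes[rng.integers(0, len(attributes), size=len(batch))]
    generated = conditional_generator(batch, chosen)
    return np.concatenate([batch, generated], axis=0)

batch = rng.normal(size=(4, 8))        # 4 samples, 8 features
attributes = rng.normal(size=(3, 8))   # 3 attribute prototypes
augmented = augment_batch(batch, attributes)
print(augmented.shape)  # (8, 8): originals plus generated variants
```

The design point is that the transformation itself is learned from data rather than hand-specified, so augmentations can cover variations (style, domain, attributes) that heuristic pixel-level transforms cannot express.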
“…Bhattad et al [2] leverage pre-trained colourisation and texture-transfer models to adversarially change the colours and textures of an image. A number of publications exploit generative models with disentangled latent spaces in some way, be it by using Fader Networks [24], using a dataset with labelled attributes to train a conditional generator [36], or using a StyleGAN and partitioning the latent space according to whether or not it should influence the label [15]. Selecting the features to perturb like this allows for precise control over those features but, like hand-crafted perturbations, results in narrow kinds of changes to images.…”
Section: Related Work
confidence: 99%
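The latent-partitioning scheme mentioned above can be sketched as follows. The dimension split and perturbation budget are illustrative assumptions, not taken from any of the cited papers: only the partition presumed label-irrelevant is perturbed, so the decoded image changes in style while (by assumption) keeping its class:

```python
import numpy as np

rng = np.random.default_rng(1)

LATENT_DIM = 16
LABEL_DIMS = slice(0, 4)      # dims assumed to carry label information
NUISANCE_DIMS = slice(4, 16)  # dims assumed label-irrelevant (style, texture)

def perturb_nuisance(z, epsilon=0.5):
    """Perturb only the label-irrelevant partition of the latent code,
    leaving the label-relevant dimensions untouched."""
    z_adv = z.copy()
    noise = rng.uniform(-epsilon, epsilon, size=z[..., NUISANCE_DIMS].shape)
    z_adv[..., NUISANCE_DIMS] += noise
    return z_adv

z = rng.normal(size=(2, LATENT_DIM))
z_adv = perturb_nuisance(z)
# label-relevant dims unchanged; nuisance dims moved by at most epsilon
print(np.allclose(z[..., LABEL_DIMS], z_adv[..., LABEL_DIMS]))  # True
```

In an adversarial-mixing setting, the random noise here would instead be chosen to maximise the classifier's loss, but the partition constraint is the same: the search is confined to directions that should not change the label.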
“…However, in this paper we make no particular assumption on the specific form of ∆. In particular, our results apply to arbitrary perturbation sets, such as those used in [62][63][64][65][66].…”
Section: Problem Formulation
confidence: 99%
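For context, the robust training objective over an arbitrary perturbation set ∆ is conventionally written as the following min–max problem (a standard formulation, not quoted from the citing paper):

```latex
\min_{\theta} \;
\mathbb{E}_{(x, y) \sim \mathcal{D}}
\left[ \max_{\delta \in \Delta} \;
\ell\big( f_\theta(x + \delta),\, y \big) \right]
```

Here $f_\theta$ is the classifier, $\ell$ the loss, and $\mathcal{D}$ the data distribution; the point of the quoted statement is that $\Delta$ need not be a norm ball but can be any perturbation set, including ones defined through a generative model's latent space.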