2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw56347.2022.00478

Improving Robustness to Texture Bias via Shape-focused Augmentation

Cited by 8 publications (3 citation statements)
References 7 publications
“…In contrast, changes in architecture (e.g., using an attention layer or the biologically inspired CORnet model) did not have a clear effect. Other methods to improve the shape bias include mixing in edge maps as training stimuli and steering the stylization of training images (Mummadi et al, 2021), applying separate textures to the foreground object and the background (Lee et al, 2022), penalizing reliance on texture with adversarial learning (Nam et al, 2021), training on a mix of sharp and blurry images (Yoshihara et al, 2021), adding a custom drop-out layer that removes activations in homogeneous areas (Shi et al, 2020), or adding new network branches that receive preprocessed input like edge maps (Mohla et al, 2022; Ye et al, 2022).…”
Section: Classification of Cue Conflict Stimuli
Mentioning, confidence: 99%
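As a concrete illustration of the foreground/background texture split mentioned above (Lee et al, 2022), here is a minimal sketch assuming a binary segmentation mask of the foreground object is available; the function name apply_background_texture and the blending weight alpha are hypothetical and not taken from the cited paper.

```python
import numpy as np

def apply_background_texture(image, mask, texture, alpha=1.0):
    """Keep the foreground object intact and blend an unrelated texture into
    the background, so background texture stops predicting the class label.

    image, texture: float arrays of shape (H, W, 3) in [0, 1]
    mask: binary array of shape (H, W), 1 = foreground object
    alpha: how strongly the background is replaced by the texture
    """
    mask3 = mask[..., None].astype(np.float32)            # broadcast mask over channels
    background = (1.0 - alpha) * image + alpha * texture  # textured background
    return mask3 * image + (1.0 - mask3) * background     # paste the foreground back

# Toy usage: random "image", circular foreground mask, noise texture.
h, w = 64, 64
img = np.random.rand(h, w, 3).astype(np.float32)
yy, xx = np.mgrid[0:h, 0:w]
mask = ((yy - h // 2) ** 2 + (xx - w // 2) ** 2 < (h // 4) ** 2).astype(np.float32)
tex = np.random.rand(h, w, 3).astype(np.float32)
print(apply_background_texture(img, mask, tex).shape)     # (64, 64, 3)
```

Because only the background is altered, the object's shape and foreground texture stay consistent across augmented views, which is what makes background texture an unreliable cue for the classifier.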
“…By making texture-features less reliable across repetitions of the same image as well as across instances of the same object class, augmentations can be used to directly reduce the bias towards texture. Augmenting images to express object shape more vigorously has also been found to increase shape-bias (Lee et al, 2022; Gowda et al, 2022). Each of these approaches either manipulates the model architecture, implying that texture-bias is inherent to certain DNN architectures (like the commonly used ResNet50; He et al (2016)), or manipulates the input data by using augmentations that specifically target certain visual features.…”
Section: Related Work
Mentioning, confidence: 99%
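A simple way to make shape the dominant cue, in the spirit of the edge-map and shape-focused augmentations cited above, is to occasionally replace an image with its gradient-magnitude edge map. This is a hedged sketch rather than the procedure of any specific cited paper, and edge_map_augment is a hypothetical name.

```python
import numpy as np
from scipy import ndimage

def edge_map_augment(image, p=0.5, rng=None):
    """With probability p, replace an RGB image with a 3-channel copy of its
    Sobel gradient magnitude, producing stimuli where shape (edges) is the
    only informative cue.

    image: float array of shape (H, W, 3) in [0, 1]
    """
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() > p:
        return image                                   # leave the image unchanged
    gray = image.mean(axis=-1)                         # crude luminance
    gx = ndimage.sobel(gray, axis=0, mode="reflect")
    gy = ndimage.sobel(gray, axis=1, mode="reflect")
    edges = np.hypot(gx, gy)
    edges = edges / (edges.max() + 1e-8)               # normalize to [0, 1]
    return np.repeat(edges[..., None], 3, axis=-1)     # keep the 3-channel layout

# Toy usage: force the augmentation to fire.
img = np.random.rand(32, 32, 3).astype(np.float32)
print(edge_map_augment(img, p=1.0).shape)              # (32, 32, 3)
```

Mixing such edge-only stimuli into training batches is one way to push a network toward relying on contours rather than surface texture.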
“…Contrastive learning (CL) has shown great promise in self-supervised regimes [6,9,13], while recently it has also been applied to the supervised learning domain and achieved promising results [19,22,23]. CL has been used in a self-supervised manner to help debias models [21,26,32,37]. In the fully supervised learning domain, previous works have shown that utilizing contrastive loss as an auxiliary loss can encourage learning more robust features with higher generalization abilities through careful contrastive pair construction [22,23].…”
Section: Related Work
Mentioning, confidence: 99%
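To make the auxiliary-loss setup concrete, below is a minimal sketch of a supervised contrastive term added to cross-entropy, in the general style of [22,23]; the exact loss formulation, temperature, and weighting used in those works may differ, so treat this as an illustrative approximation.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Pull together L2-normalized embeddings that share a label and push
    apart all other pairs (an illustrative SupCon-style term).

    features: (N, D) embeddings, labels: (N,) integer class labels
    """
    z = F.normalize(features, dim=1)
    logits = z @ z.t() / temperature                          # (N, N) pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    logits = logits.masked_fill(self_mask, float("-inf"))     # never contrast a sample with itself
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_counts
    return loss.mean()

# Toy usage: total objective = cross-entropy + weighted contrastive auxiliary term.
feats = torch.randn(8, 16)
class_logits = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))
total = F.cross_entropy(class_logits, labels) + 0.5 * supervised_contrastive_loss(feats, labels)
print(total.item())
```

The auxiliary term only shapes the embedding space; classification is still driven by the cross-entropy head, which is what makes careful contrastive pair construction matter.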