2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01358

SuperMix: Supervising the Mixing Data Augmentation

Cited by 60 publications (26 citation statements)
References 16 publications
“…In other works, authors explore using nonlinear or optimizable interpolation mixup policies, such as PuzzleMix [16], Co-Mixup [15], AutoMix [27], and SAMix [23]. Moreover, mixup methods extend to more than two elements [15,6], and are utilized in contrastive learning to learn discriminative visual representation [14,20,35,23].…”
Section: Related Work
Mentioning confidence: 99%
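The interpolation these policies build on is simple to illustrate. Below is a minimal sketch of plain pairwise mixup and a Dirichlet-weighted extension to more than two samples; it is a generic illustration, not the specific formulation of any cited method, and the function names and `alpha` parameter are assumptions:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0):
    """Pairwise mixup: convexly blend two images and their one-hot labels."""
    lam = np.random.beta(alpha, alpha)  # mixing ratio from Beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def mixup_k(xs, ys, alpha=1.0):
    """Extension to more than two elements via Dirichlet-sampled weights."""
    lam = np.random.dirichlet([alpha] * len(xs))
    x = sum(l * xi for l, xi in zip(lam, xs))  # weighted sum of images
    y = sum(l * yi for l, yi in zip(lam, ys))  # matching soft label
    return x, y
```

Optimizable variants such as PuzzleMix or Co-Mixup replace the scalar ratio with learned spatial masks, but the label-mixing principle is the same.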
“…To address both problems, instead of storing whole images from the previous tasks {1, ..., t − 1}, we propose to store an informative portion that we will mix with the images of the current task t. Image mixing is popular for classification [93], [94], [95], [96], [97], [98], yet, to the best of our knowledge, it sees limited use for semantic segmentation [99], [100], [101], [102], [103], and has never been considered for designing memory-efficient rehearsal learning systems. Formally, given an image I and the corresponding ground-truth segmentation maps S_t, we define a binary mask Π_c such that ∀c ∈ C_t: with O_c the selected object for class c. By nature, this patch is extremely sparse and can be efficiently stored on disk by modern compression algorithms [104].…”
Section: Object Rehearsal
Mentioning confidence: 99%
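To make the stored-patch idea concrete, here is a minimal sketch of the rehearsal mix step under assumed shapes and names (`obj_patch`, `obj_mask`, and `cls_id` are hypothetical; the excerpt does not give the exact composition rule):

```python
import numpy as np

def rehearse_mix(img_t, seg_t, obj_patch, obj_mask, cls_id):
    """Paste a stored past-task object (masked by Pi_c) into a current-task
    sample and relabel the covered pixels with the rehearsed class c."""
    mixed_img = np.where(obj_mask[..., None] == 1, obj_patch, img_t)  # H x W x 3
    mixed_seg = np.where(obj_mask == 1, cls_id, seg_t)                # H x W
    return mixed_img, mixed_seg
```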
“…Other works have added random noise to labels to alleviate overfitting [46], while some techniques rotate all the images in a class and treat the newly rotated class as distinct from its parent class. Recent works [12,51,41,15,45] report better performance when augmentation strategies are injected into the meta-learning pipeline.…”
Section: Related Work
Mentioning confidence: 99%
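The rotation trick mentioned above is easy to sketch; the version below (an assumed implementation, restricted to 90-degree rotations) relabels each rotated copy so it forms its own class:

```python
import numpy as np

def rotation_classes(images, labels, num_classes):
    """Treat each 90-degree rotation of an image as a member of a new class:
    rotation r maps label y to y + r * num_classes, quadrupling the classes."""
    aug_x, aug_y = [], []
    for r in range(4):                         # 0, 90, 180, 270 degrees
        for x, y in zip(images, labels):
            aug_x.append(np.rot90(x, k=r))     # rotate in the image plane
            aug_y.append(y + r * num_classes)  # rotated copy = distinct class
    return np.stack(aug_x), np.array(aug_y)
```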
“…This section builds on the findings of section 5.1, where we established three core data augmentation cases: support, query and task data augmentation. Similar to [41,17,12], we applied the CutMix, SelfMix, MixUp, Random Crop and Horizontal Flip augmentation methods to the support, query and task datasets. We identified the augmentation combinations that best suit a few-shot learner and, with our findings, picked the best strategy to determine which mode of augmentation suits a DBT-regularized few-shot learner.…”
Section: Augmentation Modes
Mentioning confidence: 99%
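Of the methods listed, CutMix is the most involved to implement correctly, since the label weight must match the pasted area. The sketch below follows the common Beta-ratio box recipe; it is a generic illustration under assumed array shapes, not the cited pipeline:

```python
import numpy as np

def cutmix(img_a, lab_a, img_b, lab_b, alpha=1.0):
    """Paste a random rectangle of img_b into img_a; mix labels by area."""
    h, w = img_a.shape[:2]
    lam = np.random.beta(alpha, alpha)
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = np.random.randint(h), np.random.randint(w)  # box center
    y0, y1 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x0, x1 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    out = img_a.copy()
    out[y0:y1, x0:x1] = img_b[y0:y1, x0:x1]          # pasted region
    kept = 1 - (y1 - y0) * (x1 - x0) / (h * w)       # fraction of img_a kept
    return out, kept * lab_a + (1 - kept) * lab_b    # area-weighted label
```

The clipping step means the realized box can be smaller than the sampled one, which is why the label weight is recomputed from the actual pasted area rather than taken directly from `lam`.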