2020
DOI: 10.48550/arxiv.2003.14348
Preprint

UniformAugment: A Search-free Probabilistic Data Augmentation Approach

Tom Ching LingChen,
Ava Khonsari,
Amirreza Lashkari
et al.

Abstract: Augmenting training datasets has been shown to improve the learning effectiveness for several computer vision tasks. A good augmentation produces an augmented dataset that adds variability while retaining the statistical properties of the original dataset. Some techniques, such as AutoAugment and Fast AutoAugment, have introduced a search phase to find a set of suitable augmentation policies for a given model and dataset. This comes at the cost of great computational overhead, adding up to several thousand GPU…
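
The abstract describes a search-free scheme: for each image, a few transforms are drawn at random and both the application probability and the magnitude are sampled uniformly. The sketch below illustrates that idea only; the operation list, the magnitude mappings, and the helper names (uniform_augment, _shear_x, _color, _posterize) are assumptions for illustration, not the authors' released implementation.

```python
import random
from PIL import Image, ImageEnhance, ImageOps

# Illustrative subset of an AutoAugment-style operation space.
# Magnitudes are normalized to [0, 1] and mapped to each op's own range.
def _shear_x(img, m):
    level = (m - 0.5) * 0.6                                   # shear in [-0.3, 0.3]
    return img.transform(img.size, Image.AFFINE, (1, level, 0, 0, 1, 0))

def _color(img, m):
    return ImageEnhance.Color(img).enhance(0.1 + 1.8 * m)     # factor in [0.1, 1.9]

def _posterize(img, m):
    return ImageOps.posterize(img, 4 + int(4 * m))            # 4..8 bits kept

OPS = [_shear_x, _color, _posterize]

def uniform_augment(img, num_ops=2):
    """Search-free augmentation: for each sampled op, draw the application
    probability and the magnitude uniformly from [0, 1]."""
    for op in random.choices(OPS, k=num_ops):
        p = random.uniform(0.0, 1.0)      # application probability ~ U(0, 1)
        if random.random() < p:
            m = random.uniform(0.0, 1.0)  # magnitude ~ U(0, 1)
            img = op(img, m)
    return img
```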


Cited by 15 publications (48 citation statements). References 17 publications (38 reference statements).
“…To study the limitation of negative sample free self-supervised methods and look into the boundary of momentum updating for the encoder, we further strengthen the magnitudes of risky augmentations for BYOL. In particular, UA [17] and RA with more augmentation policies and higher distortion magnitudes are selected. The results presented in Tab.…”
Section: Main Results and Discussion
confidence: 99%
“…The number of grids for JigSaw(n) is n × n. RA(m, n) is RandAugment [6] with n augmentation transformations of magnitude m. UA denotes UniformAugment [17]. Right: validation accuracy of kNN classification during pre-training.…”
Section: Introduction
confidence: 99%
“…Meanwhile, there are two approaches to textual data augmentation: data warping and synthetic over-sampling [60]. Recently, various methods have been proposed to search for augmentation policies for different tasks using reinforcement learning [11], and various other improved algorithms [37,38].…”
Section: Data Augmentation
confidence: 99%
“…The optimal transformation is dependent on the dataset. UniformAugment [8] augments images by uniformly sampling from the continuous space of augmentation transformations, hence, avoiding the costly search process for finding augmentation policies. Laskin et al [25] use two new augmentation schemes of random translation as well as random amplitude scaling alongside standard augmentation techniques (crop, cutout, flip, rotate, etc.).…”
Section: Data Augmentation For Enhancing Performance
confidence: 99%
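
As the statement above notes, uniform sampling over the transformation space removes the policy search entirely, so the augmentation can slot straight into an ordinary training pipeline. A usage sketch, reusing the hypothetical uniform_augment helper from the earlier block with torchvision's standard transforms (the CIFAR-style crop size and the ordering are assumptions, not taken from the paper):

```python
import torchvision.transforms as T

class UniformAugmentTransform:
    """Thin wrapper so the search-free sampler fits where a searched
    AutoAugment / Fast AutoAugment policy would normally go."""
    def __init__(self, num_ops=2):
        self.num_ops = num_ops

    def __call__(self, img):
        return uniform_augment(img, num_ops=self.num_ops)

train_transform = T.Compose([
    T.RandomCrop(32, padding=4),         # typical CIFAR-style preprocessing
    T.RandomHorizontalFlip(),
    UniformAugmentTransform(num_ops=2),  # search-free augmentation step
    T.ToTensor(),
])
```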
“…Data augmentation is an effective technique, often used for image classification to increase dataset size, help the model learn invariance, and regularize the model [8]. TUTOR targets non-image datasets.…”
Section: Introduction
confidence: 99%