2020
DOI: 10.1609/aaai.v34i07.7000

Random Erasing Data Augmentation

Abstract: In this paper, we introduce Random Erasing, a new data augmentation method for training the convolutional neural network (CNN). In training, Random Erasing randomly selects a rectangle region in an image and erases its pixels with random values. In this process, training images with various levels of occlusion are generated, which reduces the risk of over-fitting and makes the model robust to occlusion. Random Erasing is parameter learning free, easy to implement, and can be integrated with most of the CNN-bas…
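As a concrete illustration of the procedure the abstract describes — selecting a random rectangle and overwriting its pixels with random values — here is a minimal NumPy sketch. The function name, hyperparameter names, and defaults (erase probability, area and aspect-ratio ranges) are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def random_erasing(img, p=0.5, area_range=(0.02, 0.4),
                   aspect_range=(0.3, 3.3), rng=None):
    """Erase a random rectangle in `img` (H, W, C) with uniform random values.

    Returns a copy; with probability 1 - p the image is returned unchanged.
    Hyperparameter defaults here are illustrative, not the paper's.
    """
    rng = rng or np.random.default_rng()
    if rng.random() > p:
        return img
    h, w = img.shape[:2]
    for _ in range(100):  # resample until the rectangle fits inside the image
        area = rng.uniform(*area_range) * h * w
        aspect = rng.uniform(*aspect_range)
        eh = int(round(np.sqrt(area * aspect)))
        ew = int(round(np.sqrt(area / aspect)))
        if 0 < eh < h and 0 < ew < w:
            y = rng.integers(0, h - eh)
            x = rng.integers(0, w - ew)
            out = img.copy()
            # fill the selected rectangle with random pixel values
            out[y:y + eh, x:x + ew] = rng.integers(
                0, 256, size=(eh, ew, img.shape[2]))
            return out
    return img  # no fitting rectangle found; return the image unchanged
```

Because the transform is applied on the fly during training, each epoch sees a differently occluded version of the same image, which is what gives the regularization effect described above.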

Cited by 2,079 publications (1,198 citation statements)
References 15 publications
“…Krizhevsky et al [9] adjusted RGB channels, transformed the image, and made horizontal reflections to reduce overfitting. Zhong et al [19] performed data augmentation using a method of erasing part of an image and filling the erased part with random values. Similarly, DeVries and Taylor [25] removed certain parts, but proposed a method to fill the erased parts with zeros instead of random values.…”
Section: Data Augmentation
confidence: 99%
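The contrast the statement above draws — erasing with random values (Zhong et al.) versus erasing with zeros (DeVries and Taylor) — can be sketched in a few lines. The helper name `erase_patch` and its parameters are illustrative, not from either paper.

```python
import numpy as np

def erase_patch(img, y, x, h, w, fill="random", rng=None):
    """Erase img[y:y+h, x:x+w] in a copy of `img` (H, W, C).

    fill="random": overwrite with uniform random values (Random Erasing-style).
    fill="zeros":  overwrite with zeros (Cutout-style).
    """
    rng = rng or np.random.default_rng()
    out = img.copy()
    if fill == "random":
        out[y:y + h, x:x + w] = rng.integers(0, 256, size=(h, w) + img.shape[2:])
    else:
        out[y:y + h, x:x + w] = 0
    return out
```

The two fill modes occlude the same region; they differ only in whether the network sees noise or a constant block in the erased area.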
“…In order to alleviate the aforementioned problem, many studies take advantage of augmenting the existing data [9,[19][20][21][22][23][24]. Data augmentation is used not only for small amounts of data but also for unbalanced data.…”
Section: Introduction
confidence: 99%
“…An over-fitting problem may also arise, in which a model fits the training data too closely and generalizes poorly to test data. Data augmentation and regularization methods that generate pseudo samples, increasing the amount and diversity of the training data, have been proposed to reduce the risk of over-fitting [23,26,30,31]. Niall et al [30] introduced data augmentation by changing the background to generate samples.…”
Section: Related Work
confidence: 99%
“…Three kinds of deep models are used in person ReID: Siamese networks [13,15,20], classification networks [21][22][23], and triplet networks [24,25]. Additionally, many works have focused on coping with specific issues, such as occlusion [26], misalignment [27][28][29] and over-fitting [19,23,30,31].…”
Section: Introduction
confidence: 99%
“…Generally speaking, we used the UNIT [11] (Unsupervised Image-to-image Translation) framework as the basis of our overall translation works and then treated a full image as a combination of instance and background domains. First, identity matrices were used to record location information of instance domains that needed to be translated individually, and we successfully segregated the instances from the given image and assigned the original location (instance areas) with the mean pixel value of the original image [13]. Next, we translated the segregated background and instance domains, respectively.…”
Section: Introduction
confidence: 99%