2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw50498.2020.00386

SmoothMix: a Simple Yet Effective Data Augmentation to Train Robust Classifiers

Abstract: Prompt-based learning reformulates downstream tasks as cloze problems by combining the original input with a template. This technique is particularly useful in few-shot learning, where a model is trained on a limited amount of data. However, the limited templates and text used in few-shot prompt-based learning still leave significant room for performance improvement. Additionally, existing methods (Schick and Schütze, 2021c) using model ensembles can constrain the model efficiency. To address these issues, we …

Cited by 34 publications (42 citation statements)
References 67 publications
“…Further details on the architecture are provided in the Supplementary. By default, for all datasets, we utilize p = 0.2, s = {2, 3, 4, 5}, α = 0.5, CIFAR-100 [16] as the intruder dataset, and SmoothMixS [18] as the patching technique. Moreover, β is set to 10 for Ped2 and 25 for the other datasets.…”
Section: Methods
Citation type: mentioning (confidence: 99%)
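The hyperparameters quoted in this statement can be collected into a small configuration sketch. This is purely illustrative: the dataclass and its field names are assumptions, not the citing paper's actual code; only the values come from the quote above.

```python
# Hypothetical configuration sketch; field names are illustrative,
# only the values are taken from the citation statement above.
from dataclasses import dataclass, field

@dataclass
class PseudoAnomalyConfig:
    p: float = 0.2                        # probability of applying the patching
    s: tuple = (2, 3, 4, 5)               # candidate patch sizes
    alpha: float = 0.5                    # mixing strength
    intruder_dataset: str = "CIFAR-100"   # source of intruder patches [16]
    patching: str = "SmoothMixS"          # patching technique [18]
    beta: dict = field(default_factory=lambda: {"Ped2": 10, "default": 25})
```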
“…Differently, our approach is not restricted to any predefined object classes, carries out the training in an end-to-end manner, and does not require any pretrained networks. Data Augmentation: Pseudo anomaly generation used in our method can also be viewed as a form of data augmentation, a technique widely used in image classification that manipulates training data to increase variety [2,17,18,53,56]. Typically, the class labels for the augmented data are derived from the already existing classes in the dataset.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
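Since this statement frames SmoothMix as a data augmentation technique, a minimal sketch of a SmoothMix-style blend may help. The square mask with a linear fade band mirrors the SmoothMixS (square-mask) variant named in the statement above, but the exact mask construction, function name, and parameters here are assumptions rather than the authors' implementation.

```python
# Minimal sketch of a SmoothMix-style blend, assuming a square soft mask
# with a linear fade band; this is an illustration, not the paper's code.
import numpy as np

def smoothmix_square(x1, x2, side=8, fade=8):
    """Blend x2 into x1 through a soft-edged square mask.

    x1, x2: HxWxC float arrays in [0, 1] with identical shapes.
    side:   half-width of the fully opaque square region.
    fade:   width (pixels) of the linear transition band around it.
    Returns the mixed image and lam, the mean mask value
    (the soft label weight for x2's class).
    """
    h, w = x1.shape[:2]
    cy, cx = np.random.randint(h), np.random.randint(w)
    yy, xx = np.mgrid[0:h, 0:w]
    # Chebyshev distance from the centre yields a square profile.
    d = np.maximum(np.abs(yy - cy), np.abs(xx - cx))
    # Mask is 1 inside the square, then ramps linearly to 0 over `fade` px.
    g = np.clip((side + fade - d) / fade, 0.0, 1.0)[..., None]
    mixed = g * x2 + (1.0 - g) * x1
    lam = float(g.mean())
    return mixed, lam
```

In a training loop the label would typically be mixed in proportion to the mask, e.g. y = lam * y2 + (1 - lam) * y1; the smooth ramp is what avoids the hard patch edges of CutMix-style augmentation.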
“…1, where γ is a random number between 0 and 1. The differently sized images distinguish this approach from gradual-integration methods such as SmoothMix [17], since the smaller image is fully surrounded by the larger image. Entire images are featured as a uniformly interpolated subset of a larger image in ISM to ensure constant visibility of all landmarks of the sub-image, without unfairly skewing results towards more "shown" landmarks, as may be the case in an implementation of SmoothMix.…”
Section: Model Additions
Citation type: mentioning (confidence: 99%)
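To make the contrast with SmoothMix concrete, here is a hedged sketch of the uniformly interpolated sub-image idea described above: the smaller image is embedded fully inside the larger one with a single weight γ, so every landmark of the sub-image stays equally visible. The function and argument names are illustrative; only the uniform γ between 0 and 1 and the full-containment constraint come from the quote.

```python
# Hedged sketch of a uniformly interpolated sub-image (ISM-style) blend;
# names are illustrative, not taken from the cited paper.
import numpy as np

def embed_interpolated(large, small, gamma=None):
    """Blend `small` into a random position inside `large` with uniform weight.

    Both inputs are HxWxC float arrays in [0, 1]; `small` must fit
    entirely within `large` so all of its landmarks remain visible.
    """
    if gamma is None:
        gamma = np.random.rand()  # gamma drawn uniformly from (0, 1)
    H, W = large.shape[:2]
    h, w = small.shape[:2]
    assert h <= H and w <= W, "sub-image must fit inside the larger image"
    top = np.random.randint(H - h + 1)
    left = np.random.randint(W - w + 1)
    out = large.copy()
    region = out[top:top + h, left:left + w]
    # Uniform interpolation: one weight everywhere, no spatial falloff,
    # in contrast to SmoothMix's smoothly decaying mask.
    out[top:top + h, left:left + w] = gamma * small + (1.0 - gamma) * region
    return out, gamma
```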
“…Baselines: We compare our method with some traditional classification models, including KNN [72], Logistic Regression [73], Random Forest [74], Decision Tree [75], SVM [76], Extra Trees [77], and Naïve Bayes [78]. We then compare against various deep-learning-based state-of-the-art baselines, including VGG16 [79], VGG19 [79], ResNet50 [53], ResNet101 [53], NASNetMobile [80], NASNetLarge [80], InceptionV3 [81], InceptionV4 [82], Xception [83], DenseNet121 [84], CondenseNet74-4 [85], CondenseNet74-8 [85], EfficientNet-B7 [86], ResNet50 + AUGMIX [1], ResNet101 + AUGMIX [1], Noisy Student [7], EfficientNet-B7 + AdvProp [8], ANT + SIN [87], ResNet50 + FeatMatch [88], ResNet101 + FeatMatch [88], ResNet50 + FMix [89], ResNet101 + FMix [89], ResNet50 + CutMix [90] + MoEx [91], ResNet101 + CutMix [90] + MoEx [91], Pretrained Transformers [92], ResNet50 + SmoothMix [93], and ResNet101 + SmoothMix [93]. Comparison with state-of-the-art methods: As shown in Table 1, our method outperforms the traditional classification models.…”
Section: Comparison Experiments
Citation type: mentioning (confidence: 99%)