2022
DOI: 10.48550/arxiv.2203.10761
Preprint

Decoupled Mixup for Data-efficient Learning

Abstract: Mixup is an efficient data augmentation approach that improves the generalization of neural networks by smoothing the decision boundary with mixed data. Recently, dynamic mixup methods have improved on previous static policies (e.g., linear interpolation) by maximizing discriminative regions or maintaining the salient objects in mixed samples. We notice that the mixed samples from dynamic policies are more separable than those from static ones while still preventing models from overfitting. Inspired by this finding, we first argue …
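For reference, a minimal sketch of the static linear-interpolation policy the abstract contrasts with dynamic mixup methods. The Beta(alpha, alpha) sampling and within-batch pairing follow the common mixup formulation; the function and variable names are illustrative, not the paper's exact implementation.

```python
import numpy as np

def mixup_batch(x, y, alpha=1.0, rng=None):
    """Static mixup: linearly interpolate a batch of inputs x, shape (N, ...),
    and one-hot labels y, shape (N, C), with a shared mixing coefficient."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # mixing coefficient sampled from Beta(alpha, alpha)
    perm = rng.permutation(len(x))      # random pairing of samples within the batch
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    y_mixed = lam * y + (1.0 - lam) * y[perm]
    return x_mixed, y_mixed
```

Dynamic policies replace this single scalar interpolation with learned or saliency-guided mixing masks, which is what makes their mixed samples more separable.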

Cited by 2 publications (2 citation statements) · References 28 publications (46 reference statements)
“…We compared some previous state-of-the-art network architectures with TbsNet (Table 7). These architectures employ training techniques such as AutoMix (Liu et al., 2022c), SAMix (Li et al., 2021), PuzzleMix+DM (Liu et al., 2022b), and DCL (Luo et al., 2021). Since no such training techniques are employed (vanilla training scheme), the TbsNet results on Tiny-ImageNet (Le & Yang, 2015) are not very strong.…”
Section: Benchmarking
Confidence: 99%
“…Data augmentation is designed based on domain knowledge. For example, on CV datasets, operations such as color jittering [42], random cropping [4], Gaussian blur [9], and Mixup [24, 21, 25] are proven useful. In biology and some simpler datasets, a linear combination τ_lc(·) over k-nearest-neighbor data points is a simple and effective approach.…”
Section: DLME Framework
Confidence: 99%
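The k-nearest-neighbor linear combination mentioned in the quoted passage could look roughly like the sketch below. The neighbor selection and the small uniform mixing weight are assumptions made for illustration, not the cited paper's exact τ_lc(·) definition.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_linear_combination(X, k=5, strength=0.3, rng=None):
    """Augment each row of X (N, D) by interpolating it toward one of its
    k nearest neighbors with a small random weight (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    # k + 1 neighbors because each point's nearest neighbor is itself.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                           # idx[:, 0] is the point itself
    pick = rng.integers(1, k + 1, size=len(X))          # choose one true neighbor per point
    chosen = idx[np.arange(len(X)), pick]
    w = rng.uniform(0.0, strength, size=(len(X), 1))    # small mixing weights, assumed
    return (1.0 - w) * X + w * X[chosen]
```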