2022
DOI: 10.48550/arxiv.2202.05737
Preprint

Improving Generalization via Uncertainty Driven Perturbations

Abstract: Recently, Shah et al. (2020) pointed out the pitfalls of the simplicity bias (the tendency of gradient-based algorithms to learn simple models), which include the model's high sensitivity to small input perturbations as well as sub-optimal margins. In particular, while Stochastic Gradient Descent yields a max-margin boundary on linear models, this guarantee does not extend to nonlinear models. To mitigate the simplicity bias, we consider uncertainty-driven perturbations (UDP) of the training data points, obtained ite…
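The truncated abstract describes UDP as iterative perturbations of training points driven by the model's uncertainty rather than by the loss, as in adversarial training. Below is a minimal sketch of one plausible reading, assuming softmax entropy as the uncertainty measure and an iterative signed-gradient ascent projected onto an epsilon-ball; the function name, step sizes, and the entropy choice are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def udp_perturb(model, x, step_size=0.01, num_steps=10, epsilon=0.1):
    """Iteratively perturb inputs x to increase predictive entropy
    (a hypothetical UDP variant; hyperparameters are illustrative)."""
    x = x.detach()
    x_pert = x.clone()
    for _ in range(num_steps):
        x_pert.requires_grad_(True)
        logits = model(x_pert)
        probs = F.softmax(logits, dim=-1)
        # Predictive entropy as the uncertainty measure (an assumption).
        entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1).mean()
        grad, = torch.autograd.grad(entropy, x_pert)
        with torch.no_grad():
            # Ascend the uncertainty gradient, then project back into an
            # epsilon-ball around the original point.
            x_pert = x_pert + step_size * grad.sign()
            x_pert = x + (x_pert - x).clamp(-epsilon, epsilon)
        x_pert = x_pert.detach()
    return x_pert

# Hypothetical usage: perturbed points would augment or replace the clean
# batch before the usual loss/backward step of training.
# model = torch.nn.Sequential(torch.nn.Linear(10, 2))
# x_udp = udp_perturb(model, torch.randn(32, 10))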

References: 30 publications
Cited by: 0 publications