2020
DOI: 10.48550/arxiv.2011.12562
Preprint

Delving Deep into Label Smoothing

Chang-Bin Zhang,
Peng-Tao Jiang,
Qibin Hou
et al.

Abstract: Label smoothing is an effective regularization tool for deep neural networks (DNNs), which generates soft labels by applying a weighted average between the uniform distribution and the hard label. It is often used to reduce the overfitting problem of training DNNs and further improve classification performance. In this paper, we aim to investigate how to generate more reliable soft labels. We present an Online Label Smoothing (OLS) strategy, which generates soft labels based on the statistics of the model pred…
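The weighted-average construction described in the abstract can be sketched as follows. This is a minimal illustration of standard label smoothing, not the paper's OLS method; the smoothing factor `eps` is the usual hyperparameter name and is an assumption here:

```python
import numpy as np

def smooth_labels(hard_label: int, num_classes: int, eps: float = 0.1) -> np.ndarray:
    """Standard label smoothing: a weighted average of the one-hot (hard)
    label and the uniform distribution, with smoothing factor eps."""
    one_hot = np.zeros(num_classes)
    one_hot[hard_label] = 1.0
    uniform = np.full(num_classes, 1.0 / num_classes)
    return (1.0 - eps) * one_hot + eps * uniform

# Example: target class 2 out of 5 classes, eps = 0.1
print(smooth_labels(2, 5))  # → [0.02 0.02 0.92 0.02 0.02]
```

OLS differs in that the soft target for each class is accumulated online from the model's own predictions on correctly classified samples, rather than taken from a fixed uniform distribution.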


Cited by 3 publications (3 citation statements)
References 51 publications (90 reference statements)
“…VII, we report the accuracy for three common computer vision datasets, including CIFAR10, CIFAR100, and Tiny-ImageNet. Compared to CE, LS [19], Online LS [53] and Disturb Label [62], our CBLS shows superior results.…”
Section: Ablation Study (mentioning)
confidence: 97%
“…The approach has been tested on a dataset made of sound recordings and a remarkable performance has been achieved [16]. Zhang et al have proposed a label smoothing approach to generate soft labels by applying weighted average between distribution and hard labels [17].…”
Section: Related Work (mentioning)
confidence: 99%
“…Other works that use label smoothing as a calibration benchmark [10,12,13] very briefly discuss validating the smoothing factor; however, the validation is generally minimal, with at most three values considered, and the chosen smoothing factor is then used across all experiments (model architectures, datasets, training setups). Zhang et al [15] propose an online label smoothing strategy for image classification that implicitly measures class similarity and generates soft labels based on the statistics of the model prediction for the target category. In contrast, Liu and JaJa [25] calculate class similarity explicitly prior to training and use these scores to obtain smoothing factors for each target class.…”
Section: Related Work (mentioning)
confidence: 99%