2021
DOI: 10.1109/tip.2021.3089942

Delving Deep Into Label Smoothing

Cited by 151 publications (78 citation statements)
References 56 publications

“…Label smoothing (Szegedy et al, 2016) is a widely adopted regularization factor (Zhang et al, 2021).…”
Section: Effect Of Label Smoothing
confidence: 99%
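
A minimal sketch of the uniform label smoothing this excerpt refers to (Szegedy et al., 2016), written in PyTorch. The function name and the epsilon default of 0.1 are illustrative choices, not taken from the cited papers:

```python
import torch
import torch.nn.functional as F

def label_smoothing_ce(logits: torch.Tensor, targets: torch.Tensor,
                       epsilon: float = 0.1) -> torch.Tensor:
    # Smoothed target: (1 - eps) * one-hot mixed with eps * uniform,
    # so the true class gets (1 - eps) + eps/K and every other class eps/K.
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    smooth = torch.full_like(log_probs, epsilon / num_classes)
    smooth.scatter_(-1, targets.unsqueeze(-1),
                    1.0 - epsilon + epsilon / num_classes)
    # Cross-entropy between the smoothed targets and the predictions.
    return -(smooth * log_probs).sum(dim=-1).mean()

# Example usage on random data (shapes are illustrative):
logits = torch.randn(8, 10)            # batch of 8, 10 classes
targets = torch.randint(0, 10, (8,))
loss = label_smoothing_ce(logits, targets)
```

With epsilon set to 0, this reduces to ordinary cross-entropy; the regularizing effect comes from the eps/K mass that keeps the model from driving non-target probabilities to zero.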
“…Originally the distribution is uniform across the labels, which is data independent. Recently, other variants of LS have also been proposed that incorporate interrelation information from the data into the distribution (Zhong et al, 2016; Zhang et al, 2021; Krothapalli and Abbott, 2020). In this work, the technique is applied to generate soft labels with a distribution derived from domain knowledge, since the classes in this task are clearly interrelated with each other.…”
Section: Related Work
confidence: 99%
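
As a sketch of the non-uniform, knowledge-derived smoothing distributions this excerpt describes, the toy example below replaces the uniform term with a hand-specified class-relation prior. The class count, the relation matrix, and the function name are purely hypothetical, not the cited papers' definitions:

```python
import numpy as np

# Hypothetical 4-class problem where classes are interrelated
# (e.g. ordinal severity levels); the matrix below is illustrative.
num_classes = 4

# Domain-knowledge relation matrix: row i says how the smoothing mass
# for class i is spread over the other classes. Nearby ordinal classes
# are treated as more related than distant ones.
relation = np.array([
    [0.0, 0.7, 0.2, 0.1],
    [0.4, 0.0, 0.4, 0.2],
    [0.2, 0.4, 0.0, 0.4],
    [0.1, 0.2, 0.7, 0.0],
])
relation /= relation.sum(axis=1, keepdims=True)  # normalize each row

def soft_label(true_class: int, epsilon: float = 0.1) -> np.ndarray:
    # Same mixture as uniform LS, but the eps mass follows the prior
    # instead of being spread uniformly.
    target = np.zeros(num_classes)
    target[true_class] = 1.0 - epsilon
    target += epsilon * relation[true_class]
    return target

print(soft_label(1))  # most mass on class 1, rest leaked to its neighbors
```

The resulting vector still sums to one; only the destination of the smoothing mass changes, which is how such variants encode class interrelations.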
“…We experimented with the following loss functions to provide a comprehensive evaluation of their impact on the multi-class classification task under study: (i) Categorical cross-entropy (CCE) loss; (ii) Categorical focal loss [8]; (iii) Kullback-Leibler (KL) divergence loss [26]; (iv) Categorical Hinge loss [27]; (v) Label-smoothed CCE loss [28]; (vi) Label-smoothed categorical focal loss [28], and (vii) Calibrated CCE loss [29]. We also propose several loss functions, as follows, that mitigate the issues with the existing loss functions when applied to the multi-class classification task under study: (i) CCE loss with entropy-based regularization; (ii) Calibrated negative entropy loss, (iii) Calibrated KL divergence loss; (iv) Calibrated categorical focal loss, and (v) Calibrated categorical Hinge loss.…”
Section: Classification Losses
confidence: 99%
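
One of the losses proposed in this excerpt, CCE with entropy-based regularization, can be sketched as below. The cited paper's exact formulation is not given here, so the confidence-penalty form (cf. Pereyra et al., 2017) and the beta weight are assumptions:

```python
import torch
import torch.nn.functional as F

def cce_with_entropy_reg(logits: torch.Tensor, targets: torch.Tensor,
                         beta: float = 0.1) -> torch.Tensor:
    # Standard categorical cross-entropy on log-probabilities.
    log_probs = F.log_softmax(logits, dim=-1)
    ce = F.nll_loss(log_probs, targets)
    # Entropy of the predicted distribution, H(p) = -sum p * log p.
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()
    # Subtracting the entropy penalizes over-confident (low-entropy)
    # predictions, a regularizing effect closely related to label smoothing.
    return ce - beta * entropy
```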