2022
DOI: 10.1007/978-3-031-19775-8_38
Teaching with Soft Label Smoothing for Mitigating Noisy Labels in Facial Expressions

Cited by 8 publications (3 citation statements)
References 42 publications
“…This conclusion is consistent with our own previous conclusions in [63,149] and has also been discussed by others [15,38,177]. In addition, label smoothing is an established technique for improving the training of neural networks [106,107,116]. Ambiguity in the data could be a possible reason why and when it works.…”
Section: Research Question 41: "Is One Annotation Enough To Capture T..." (supporting)
Confidence: 91%
“…For α = 1, the uniform distribution 1/K is preferred over any prior distribution information P(L_x = k). This approach is successfully applied in many DL approaches to improve overall performance [89,182], to mitigate annotation errors [107], or as a regularizer [106]. The literature is divided on when or why label smoothing helps [116,192].…”
Section: Soft Labels and Label Smoothing (mentioning)
Confidence: 99%
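The quote above describes standard label smoothing: a hard one-hot target is blended with the uniform distribution 1/K using a mixing weight α, so α = 0 keeps the hard label and α = 1 yields the fully uniform target. A minimal sketch of that interpolation (the function name and NumPy formulation are illustrative, not taken from the cited work):

```python
import numpy as np

def smooth_labels(one_hot: np.ndarray, alpha: float) -> np.ndarray:
    """Blend a one-hot target with the uniform distribution 1/K.

    alpha = 0 returns the original hard label unchanged;
    alpha = 1 replaces it entirely with the uniform distribution,
    matching the α = 1 case described in the quote.
    """
    k = one_hot.shape[-1]  # number of classes K
    return (1.0 - alpha) * one_hot + alpha / k

# Example with K = 4 classes, true class at index 1
hard = np.array([0.0, 1.0, 0.0, 0.0])
print(smooth_labels(hard, 0.1))  # [0.025 0.925 0.025 0.025]
print(smooth_labels(hard, 1.0))  # uniform: [0.25 0.25 0.25 0.25]
```

Note that the smoothed vector still sums to 1, so it remains a valid probability distribution and can be used directly as a soft target in a cross-entropy loss.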
“…Therefore, it is important to build a robust Facial Expression Recognition (FER) system. In recent years, many FER methods [4,29,30,32,38,43,50,54] have achieved state-of-the-art performance on several benchmark datasets (e.g. RAF-DB [31], SFEW [48], and AffectNet [34]).…”
Section: Introduction (mentioning)
Confidence: 99%