2021
DOI: 10.14445/22315381/ijett-v69i7p231
Consensual Collaborative Training And Knowledge Distillation Based Facial Expression Recognition Under Noisy Annotations

Cited by 6 publications (2 citation statements)
References: 51 publications
“…For the pooling layer input y, the pooling process is [9] $\mathrm{pool} = \mathrm{down}\big(\max(y_{i,j})\big), \quad i, j \in p, \quad (5)$ where y represents the pooling layer input element, and down is the downsampling process, which retains the maximum value in the pooled area.…”
Section: Training Deep Convolutional Neural Network
confidence: 99%
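
The quoted pooling step is straightforward to mirror in code. Below is a minimal NumPy sketch of Eq. (5), assuming a square, non-overlapping pooling window of size p with stride equal to the window size; the function name max_pool2d and the toy input are illustrative and not taken from [9].

```python
import numpy as np

def max_pool2d(y, p):
    """Max pooling as in Eq. (5): pool = down(max(y_{i,j})), i, j in p.
    `y` is a 2-D feature map and `p` the pooling window size; the
    stride-equals-window choice is an assumption of this sketch."""
    h, w = y.shape
    out_h, out_w = h // p, w // p
    out = np.empty((out_h, out_w), dtype=y.dtype)
    for i in range(out_h):
        for j in range(out_w):
            # down(...) retains only the maximum value in each p x p area
            out[i, j] = y[i * p:(i + 1) * p, j * p:(j + 1) * p].max()
    return out

# Example: a 4x4 input pooled with a 2x2 window yields a 2x2 output.
y = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool2d(y, 2))  # [[ 5.  7.]
                         #  [13. 15.]]
```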
“…Regarding expression recognition since the beginning of the 21st century: building on a simple knowledge distillation scheme, Gera D. uses a single network to perform label inference on large-scale facial expression datasets, and the proposed framework demonstrates its effectiveness and comprehensiveness on noisy FER datasets [5]. Li et al. fused energy feature vectors and facial feature vectors, used a support vector machine (SVM) to classify the fused feature vector, and showed that the proposed method achieves higher accuracy and stronger generalization ability [6].…”
confidence: 99%
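
Since [5] is described as building on a simple knowledge distillation scheme, a generic soft-label distillation loss may make the idea concrete. The PyTorch sketch below is the standard Hinton-style KD objective, not necessarily the exact loss used in [5]; the temperature T and the 7-class example (typical of basic facial expressions) are assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Generic soft-label knowledge distillation loss; a sketch of the
    standard objective, not the exact formulation of [5]. `T` is an
    assumed softening temperature."""
    # Soften both distributions with temperature T, then match the
    # student to the teacher via KL divergence; the T**2 factor keeps
    # gradient magnitudes comparable across temperatures.
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T ** 2)

# Example: random logits for a batch of 4 samples over 7 expression classes.
s = torch.randn(4, 7)
t = torch.randn(4, 7)
print(distillation_loss(s, t).item())
```

In the single-network setting described for [5], the "teacher" logits would come from an earlier snapshot or a consensus branch of the same model rather than a separate network; that wiring is specific to the paper and is not reproduced here.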