2019 IEEE International Conference on Systems, Man and Cybernetics (SMC)
DOI: 10.1109/smc.2019.8914172

Memory Integrity of CNNs for Cross-Dataset Facial Expression Recognition

Abstract: Facial expression recognition is a major problem in the domain of artificial intelligence. One of the best ways to solve this problem is the use of convolutional neural networks (CNNs). However, a large amount of data is required to properly train these networks, yet most of the datasets available for facial expression recognition are relatively small. A common way to circumvent the lack of data is to use CNNs trained on large datasets from different domains and to fine-tune the layers of such networks to the targ…
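As a rough illustration of the fine-tuning strategy the abstract describes, the sketch below adapts an ImageNet-pretrained CNN to a small facial expression dataset. The backbone (ResNet-18), the 7-class output head, and the choice of which layers to freeze are assumptions made for this example, not details taken from the paper.

import torch
import torch.nn as nn
from torchvision import models

# Start from a CNN pre-trained on a large dataset from a different domain
# (ImageNet here, purely as an illustrative source domain).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Replace the classifier head to match the target expression labels
# (7 basic expressions is a common but dataset-dependent choice).
num_expressions = 7
model.fc = nn.Linear(model.fc.in_features, num_expressions)

# Freeze the early convolutional blocks; fine-tune only the last block
# and the new head, a standard recipe when target data are scarce.
for name, param in model.named_parameters():
    if not (name.startswith("layer4") or name.startswith("fc")):
        param.requires_grad = False

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-3, momentum=0.9,
)
criterion = nn.CrossEntropyLoss()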

Cited by 6 publications (5 citation statements). References 25 publications.
“…Several studies have achieved high accuracy rates for facial expression recognition using CNNs. For instance, Shaees et al achieved 99.3% accuracy on thermal images using the Natural Visible and Infrared Expression (NVE) database and 98.3% accuracy on graphic illustrations using the Cohn-Kanade+ (CK+) database [28]. Similarly, Webb et al achieved a state-of-the-art classification rate of 99.52% on a combined corpus of datasets [29].…”
Section: Deep Learning Algorithm
confidence: 99%
“…Particularly, FER2013 [15], TFD [16] or, more recently, RAF-DB [65,66] datasets are good sources of additional data for FER tasks. Tannugi et al [67] and Li and Deng [68] pursued interesting work on cross-dataset generalization tasks by switching in turn source and target FER datasets and evaluating the performance of FER models. Li and Deng [68] showed that datasets are strongly biased, and they developed a novel architecture that can learn domain-invariant and discriminative features.…”
Section: Pre-training and Fine-tuning of 2D CNNs
confidence: 99%
“…We can particularly mention FER2013 (Goodfellow et al., 2013), TFD (Susskind et al., 2010) or, more recently, RAF-DB (Li et al., 2017a; Li & Deng, 2019) datasets as good sources of additional data for FER tasks. Besides, Tannugi et al. (2019) and Li & Deng (2020) pursued interesting work on cross-dataset generalization tasks by switching in turn source and target FER datasets and evaluating the performance of FER models. Li & Deng (2020) have shown that datasets are strongly biased, and they have accordingly developed a novel architecture that can learn domain-invariant and discriminative features.…”
Section: Pre-training and Fine-tuning of 2D-CNNs
confidence: 99%
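The cross-dataset protocol these statements refer to (train on one FER dataset, evaluate on another, then swap source and target) can be sketched as follows. The dataset names and the train_on / evaluate helpers are hypothetical placeholders, not code from the cited works.

import itertools
from typing import Callable, Dict, List, Tuple

def cross_dataset_matrix(
    datasets: List[str],
    train_on: Callable[[str], object],
    evaluate: Callable[[object, str], float],
) -> Dict[Tuple[str, str], float]:
    """Train on each source dataset and test on every other (target) dataset,
    following the source/target swapping protocol described above."""
    results = {}
    for source, target in itertools.permutations(datasets, 2):
        model = train_on(source)  # fit a FER model on the source set
        results[(source, target)] = evaluate(model, target)  # cross-dataset score
    return results

# Example usage with illustrative dataset names:
# scores = cross_dataset_matrix(["CK+", "JAFFE", "FER2013"], train_on, evaluate)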