2017 8th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON)
DOI: 10.1109/iemcon.2017.8117150
Real time automated facial expression recognition app development on smart phones

Abstract: Automated facial expression recognition (AFER) is a crucial technology for, and a challenging task in, human-computer interaction. Previous AFER methods have incorporated different features and classification methods and used basic testing approaches. In this paper, we identify the best feature descriptor for AFER by empirically evaluating two feature descriptors: the Facial Landmarks descriptor and the Center of Gravity descriptor. We examine each feature descriptor by considering one classificati…


Cited by 24 publications (12 citation statements)
References 39 publications
“…So, accuracy reached up to 97%, but in the case of spontaneous data (data similar to real-life situations), such as FER2013, researchers obtained a maximum of 75-76% accuracy. Table 2 summarizes the accuracy difference between posed [15,29,31,[40][41][42][43][44] and spontaneous datasets [17,19,30,[45][46][47]. Posed datasets always achieve greater accuracy than spontaneous ones but are less reliable in real-life applications.…”
Section: Discussion and Future Work
confidence: 99%
“…It includes a set of 4,900 images of facial expressions from 70 subjects (35 female and 35 male), displaying seven emotions (angry, fearful, disgusted, happy, sad, surprised, and neutral) [86], using faces of subjects aged between 20 and 30 years. During the session, intrusive elements causing any disruption, such as earrings, facial hair, moustaches, jewelry, makeup, beards, or glasses, were excluded from their faces [87]. Each expression was photographed from five different angles and was recorded two times.…”
Section: Karolinska Directed Emotional Faces (KDEF)
confidence: 99%
“…Jabid et al. [22] investigated the local directional pattern (LDP), an appearance-based feature extraction method. Alshami et al. [23] used an SVM to study two feature descriptors, the facial landmarks descriptor and the center of gravity descriptor. SVM and numerous other methods (e.g., KNN, LDA) were taken into account in the comparative analysis conducted by Liew and Yairi [11] for the classification of features extracted using various methods, including Gabor, Haar, and LBP.…”
Section: Machine Learning-based FER Approach
confidence: 99%
“…In addition, Porcu et al. [38] used different data augmentation techniques, including synthetic images, to train deep CNN architectures, and combined synthetic images with other methods to improve FER system performance. Existing deep learning-based methods have also focused on frontal images, and most studies have even excluded profile-view images from the data set in experiments to simplify the task [11,23,30,39].…”
Section: Deep Learning-based FER Approach
confidence: 99%