2021
DOI: 10.3389/fpsyg.2021.759485

Facial Expression Emotion Recognition Model Integrating Philosophy and Machine Learning Theory

Abstract: Facial expression emotion recognition is an intuitive reflection of a person’s mental state; it contains rich emotional information and is one of the most important forms of interpersonal communication. It can be used in various fields, including psychology. The wisdom of Zeng Guofan, a celebrated figure in ancient China, involves facial emotion recognition techniques. His book Bing Jian summarizes eight methods for identifying people, especially for choosing the right one, which means “look at the eyes and nose …


Cited by 32 publications (12 citation statements)
References 30 publications
“…In automatic emotion recognition, the authors used transfer learning to generate subject-specific models that extract emotional content from facial images along the valence/arousal dimensions (Rescigno et al., 2020). For dual-channel expression recognition, a feature fusion algorithm based on machine learning theory and philosophical thought was proposed (Song, 2021). In a facial emotion recognition system, the authors used feature extraction based on the scale-invariant feature transform to extract features from facial points (Sreedharan et al., 2018).…”
Section: Related Work
confidence: 99%
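The scale-invariant feature transform (SIFT) extraction mentioned in the statement above can be illustrated with a short sketch. This is not the Sreedharan et al. pipeline, only a minimal example of computing SIFT descriptors on a detected face region with OpenCV; the image path "face.jpg", the Haar-cascade face detector, and the mean-pooling step are illustrative assumptions, not details from the cited work.

```python
# Minimal sketch (not the cited authors' exact pipeline): SIFT descriptors
# computed on a detected face region, then pooled into one feature vector.
import cv2
import numpy as np

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image

# Restrict keypoint detection to the face region so descriptors describe
# facial structure rather than background clutter.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = face_cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
x, y, w, h = faces[0] if len(faces) else (0, 0, img.shape[1], img.shape[0])
face_roi = img[y:y + h, x:x + w]

# Scale-invariant feature transform: detect keypoints and compute a
# 128-dimensional descriptor for each of them.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(face_roi, None)

# Mean-pool the descriptors into a fixed-length vector that a downstream
# emotion classifier could consume (one simple choice among many).
feature_vector = descriptors.mean(axis=0) if descriptors is not None else np.zeros(128)
print(feature_vector.shape)  # (128,)
```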
“…So accuracy reached up to 97%, but in the case of spontaneous data (data closer to real-life situations), such as FER2013, researchers achieved a maximum of 75-76% accuracy. Table 2 summarizes the accuracy difference between posed [15, 29, 31, 40-44] and spontaneous [17, 19, 30, 45-47] datasets. Posed datasets always achieve higher accuracy than spontaneous ones but are less reliable in real-life applications.…”
Section: Discussion and Future Work
confidence: 99%
“…Kim et al. [29] proposed a hierarchical deep neural network-based FER system in which they fused appearance-based and geometric features. Song [30] then proposed a feature fusion model based on machine learning and philosophical concepts. Similarly, Park et al. [31] constructed a 3D CNN architecture for extracting spatial and temporal features simultaneously.…”
Section: Introduction
confidence: 99%
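The 3D CNN route attributed to Park et al. [31] in the statement above convolves over frames and pixels jointly, which is what lets it capture spatial and temporal cues at the same time. The sketch below is a minimal PyTorch illustration of that mechanism only, not the authors' actual architecture; the layer sizes, the 7-class label set, and the 16-frame, 64x64 clip shape are assumptions made for the example.

```python
# Minimal sketch of a 3D CNN for expression clips (illustrative, not Park et al.'s model).
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    def __init__(self, num_classes: int = 7):  # 7 basic emotions is a common label set
        super().__init__()
        self.features = nn.Sequential(
            # A 3x3x3 kernel spans 3 frames and a 3x3 spatial window at once,
            # so each filter responds to motion as well as appearance.
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),   # pool only spatially here
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),               # collapse time and space
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels=1, frames, height, width), e.g. grayscale face crops
        return self.classifier(self.features(clip).flatten(1))

logits = Tiny3DCNN()(torch.randn(2, 1, 16, 64, 64))  # 2 clips of 16 frames, 64x64
print(logits.shape)  # torch.Size([2, 7])
```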
“…Human FER plays a significant role in understanding people’s nonverbal ways of communicating with others [19]. It has attracted the interest of scientific communities in various fields due to its advantages over other forms of emotion recognition [22]. As it is not limited to human-computer or human-robot interaction, facial expression analysis has become a popular research topic in various health care areas, such as the diagnosis or assessment of cognitive impairment (e.g., autism spectrum disorders in children), depression monitoring, pain monitoring in Parkinson’s disease, and clinical communication in doctor-patient consultations [27].…”
Section: Methods
confidence: 99%
“…Many researchers have studied facial expressions using automatic facial emotion recognition (FER) to better understand the human emotions linked with empathy [20-24]. They have proposed various machine learning algorithms, such as support vector machines, Bayesian belief networks, and neural network models, for recognizing and describing emotions based on observed facial expressions recorded in images or videos [20-22]. Although a mounting body of literature has been introduced on machine learning and deep learning for automatically extracting emotions from the human face, developing a highly accurate FER system requires a lot of training data and a high-quality computational system [21].…”
Section: Introduction
confidence: 99%
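As one concrete instance of the support-vector-machine approach mentioned in the last statement, the sketch below trains an RBF-kernel SVM on pre-extracted facial features with scikit-learn. The random feature matrix, the 128-dimensional feature size, and the 7 emotion classes are placeholders chosen for the example, not values or data from the cited studies.

```python
# Minimal sketch of an SVM-based FER classifier on pre-extracted features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))    # placeholder feature vectors (e.g. pooled SIFT descriptors)
y = rng.integers(0, 7, size=200)   # placeholder labels for 7 emotion classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# An RBF-kernel SVM on standardized features is a common classical baseline for FER.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With real features in place of the random matrix, the same pipeline is a reasonable baseline against which the deep-learning models discussed above are usually compared.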