2022
DOI: 10.3390/healthcare10122363
Comparison of Subjective Facial Emotion Recognition and “Facial Emotion Recognition Based on Multi-Task Cascaded Convolutional Network Face Detection” between Patients with Schizophrenia and Healthy Participants

Abstract: Patients with schizophrenia may exhibit a flat affect and poor facial expressions. This study aimed to compare subjective facial emotion recognition (FER) and FER based on multi-task cascaded convolutional network (MTCNN) face detection in 31 patients with schizophrenia (patient group) and 40 healthy participants (healthy participant group). A Pepper Robot was used to converse with the 71 aforementioned participants; these conversations were recorded on video. Subjective FER (assigned by medical experts based …
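The MTCNN-based FER pipeline outlined in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the open-source `mtcnn` Python package for face detection, OpenCV for reading the recorded conversation video, and a hypothetical `classify_emotion` function standing in for whatever emotion-recognition model is applied to each cropped face.

```python
# Sketch only: detect faces with MTCNN, crop each detection, and hand the crop
# to an emotion classifier. Library choices and classify_emotion are assumptions.
import cv2
from mtcnn import MTCNN

detector = MTCNN()

def classify_emotion(face_crop):
    """Hypothetical placeholder for the FER model; returns an emotion label."""
    raise NotImplementedError

def fer_on_frame(frame_bgr):
    # MTCNN expects RGB input; OpenCV decodes frames as BGR.
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    results = []
    for det in detector.detect_faces(frame_rgb):
        x, y, w, h = det["box"]
        x, y = max(x, 0), max(y, 0)  # MTCNN can return slightly negative coordinates
        face = frame_rgb[y:y + h, x:x + w]
        results.append((det["confidence"], classify_emotion(face)))
    return results

# Usage: iterate over a recorded conversation video frame by frame.
cap = cv2.VideoCapture("conversation.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    labels = fer_on_frame(frame)
cap.release()
```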

Cited by 2 publications (1 citation statement)
References 36 publications (41 reference statements)
“…This requires robots to have a high degree of multimodal perception, as they need to understand the mental moods, goals, and character aspects of humans in order to provide appropriate feedback [7]. Along with facial recognition through convolutional neural networks, several studies have used bio-signals such as heart rate variability, electroencephalogram signals, and eye modality [8] [9]. Others have used visual-audio signals, lexicographic data, and questionnaire-based data [10].…”
Section: Introduction
confidence: 99%