2019
DOI: 10.3837/tiis.2019.11.015

Facial Action Unit Detection with Multilayer Fused Multi-Task and Multi-Label Deep Learning Network

Abstract: Facial action units (AUs) have recently drawn increased attention because they can be used to recognize facial expressions. A variety of methods have been designed for frontal-view AU detection, but few have been able to handle multi-view face images. In this paper we propose a method for multi-view facial AU detection using a fused multilayer, multi-task, and multi-label deep learning network. The network can complete two tasks: AU detection and facial view detection. AU detection is a multi-label problem and…
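The abstract distinguishes the two task heads by output type: AU detection is multi-label (each AU can be present independently), while facial view detection is multi-class (exactly one view applies). A minimal sketch of that output structure, in pure Python — the linear heads, weight layout, and function names here are illustrative assumptions, not the paper's actual architecture:

```python
import math

def sigmoid(x):
    # Multi-label head: each AU is an independent binary decision,
    # so each logit gets its own sigmoid.
    return 1.0 / (1.0 + math.exp(-x))

def softmax(logits):
    # Multi-class head: views are mutually exclusive,
    # so probabilities are normalized to sum to 1.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def multi_task_heads(shared_features, au_weights, view_weights):
    """Apply two task heads to one shared feature vector.

    au_weights:   one weight vector per AU   -> sigmoid per label
    view_weights: one weight vector per view -> softmax over all views
    """
    au_logits = [sum(w * f for w, f in zip(ws, shared_features))
                 for ws in au_weights]
    view_logits = [sum(w * f for w, f in zip(ws, shared_features))
                   for ws in view_weights]
    au_probs = [sigmoid(z) for z in au_logits]   # independent probabilities
    view_probs = softmax(view_logits)            # sums to 1
    return au_probs, view_probs
```

For example, with a 3-dimensional shared feature vector and toy weights, `multi_task_heads([0.5, -1.0, 2.0], [[1, 0, 0], [0, 1, 0]], [[0, 0, 1], [1, 1, 0]])` returns two independent AU probabilities and a view distribution that sums to one. In a real network the shared features would come from fused multilayer convolutional features and both heads would be trained jointly.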

Cited by 4 publications (2 citation statements) · References 30 publications
“…The deep learning model presented in [20] [21] fuses multi-modal information from heterogeneous signals to improve stress-detection performance. The collected video, respiration, and ECG data are preprocessed and fed into a deep neural network along with facial feature sequences.…”
Section: Deep Learning Based Emotion Classification Models (mentioning)
confidence: 99%
“…Emotion refers to a conscious mental reaction, subjectively experienced as a strong feeling and typically accompanied by physiological and behavioral changes in the body [3]. To recognize a user's emotional state, several studies have applied different forms of input, such as speech, facial expression, video, and text [11,13,15,25,39,42,47]. Among the methods using these inputs, facial emotion recognition (FER) has gained substantial attention over the past decades.…”
Section: Introduction (mentioning)
confidence: 99%