2013
DOI: 10.1109/t-affc.2013.4

DISFA: A Spontaneous Facial Action Intensity Database

Cited by 637 publications (392 citation statements). References: 53 publications.
“…Although deep learning algorithms have been shown to produce state-of-the-art performance on object recognition tasks, there has been considerably less work on using deep learning techniques in action recognition, facial expression recognition, and in particular facial AU recognition. With the increasing availability of large databases for AU recognition [16,23,29], it would be interesting to see if deep learning algorithms can give a similar leap in performance in the field of facial expression/AU recognition.…”
Section: Introduction (mentioning, confidence: 99%)
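As a loose sketch of the kind of deep model this excerpt speculates about, the snippet below defines a small multi-label CNN for per-frame AU detection in PyTorch. The architecture, input size, and the choice of 12 output AUs are illustrative assumptions, not taken from the cited works.

```python
# Illustrative multi-label CNN for per-frame AU detection (not from the cited works).
# One logit per AU; train with BCEWithLogitsLoss for per-AU presence.
import torch
import torch.nn as nn


class AuCnn(nn.Module):
    def __init__(self, num_aus: int = 12):  # 12 AUs is an assumption, not DISFA-specific
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(64 * 4 * 4, num_aus)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Returns raw logits; apply a sigmoid for per-AU probabilities.
        return self.classifier(self.features(x).flatten(1))


if __name__ == "__main__":
    model = AuCnn()
    frames = torch.randn(8, 3, 96, 96)  # batch of face crops; crop size is an assumption
    print(model(frames).shape)          # -> torch.Size([8, 12])
```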
“…Evaluation of the proposed approach is performed on the Denver Intensity of Spontaneous Facial Actions (DISFA) dataset [26], the recently published dataset of naturalistic facial AUs that are FACS coded in terms of their intensity using the ordinal scores: 0 (not present) to 5 (maximum intensity). This dataset consists of video recordings of Fig.…”
Section: Methods (mentioning, confidence: 99%)
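For context, DISFA distributes its FACS annotations as per-frame ordinal intensity scores on the 0 to 5 scale the excerpt describes. The sketch below assumes a simple `frame,intensity` text format per AU label file (the exact layout and the file name used here are assumptions to be checked against the DISFA release) and shows the common step of binarizing intensities into AU presence/absence at a threshold.

```python
# Minimal sketch: parse per-frame AU intensity labels on DISFA's 0-5 ordinal scale.
# Assumes a "frame_number,intensity" line format; verify against the actual release.
from pathlib import Path
from typing import Dict


def load_au_intensities(label_file: Path) -> Dict[int, int]:
    """Return {frame_number: intensity} with intensities in 0..5."""
    intensities: Dict[int, int] = {}
    for line in label_file.read_text().splitlines():
        if not line.strip():
            continue
        frame_str, value_str = line.split(",")
        intensities[int(frame_str)] = int(value_str)
    return intensities


def binarize(intensities: Dict[int, int], threshold: int = 2) -> Dict[int, int]:
    """Map ordinal intensities to AU presence (1) / absence (0) at a chosen threshold."""
    return {frame: int(value >= threshold) for frame, value in intensities.items()}


if __name__ == "__main__":
    labels = load_au_intensities(Path("SN001_au12.txt"))  # hypothetical file name
    presence = binarize(labels, threshold=2)
    print(f"{sum(presence.values())} of {len(presence)} frames show the AU at intensity >= 2")
```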
“…Based on the modeling approach, these can be divided into static methods (Mahoor et al 2009, Mavadati et al 2013, Savrana et al 2012, Kaltwang et al 2012, Jeni et al 2013) and dynamic methods (Rudovic et al 2013b). The static methods can further be divided into classification-based methods (e.g., Mahoor et al 2009, Mavadati et al 2013) and regression-based methods (e.g., Savrana et al 2012, Kaltwang et al 2012, Jeni et al 2013). The static classification-based methods (Mahoor et al 2009, Mavadati et al 2013) perform multi-class classification of the intensity of AUs using the SVM classifier.…”
Section: Intensity Estimation of Facial Expressions (mentioning, confidence: 99%)
“…The static methods can further be divided into classification-based methods (e.g., Mahoor et al 2009, Mavadati et al 2013) and regression-based methods (e.g., Savrana et al 2012, Kaltwang et al 2012, Jeni et al 2013). The static classification-based methods (Mahoor et al 2009, Mavadati et al 2013) perform multi-class classification of the intensity of AUs using the SVM classifier. For example, Mahoor et al (2009) performed the intensity estimation of AU6 (cheek raiser) and AU12 (lip corner puller) from facial images of infants.…”
Section: Intensity Estimation of Facial Expressions (mentioning, confidence: 99%)
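As a rough illustration of the static classification-based approach the excerpt describes (a multi-class SVM over the six ordinal intensity levels), here is a minimal scikit-learn sketch. The random feature matrix, kernel choice, and train/test split are placeholders, not the features or setup used in the cited works.

```python
# Minimal sketch of multi-class SVM intensity classification (levels 0-5),
# illustrating the static classification-based approach described above.
# Features and hyperparameters are placeholders, not those of the cited works.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 136))   # stand-in for per-frame facial features (e.g., landmark coordinates)
y = rng.integers(0, 6, size=1000)  # stand-in for ordinal AU intensity labels 0..5

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# RBF-kernel SVM; scikit-learn handles the multi-class case via one-vs-one by default.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_train, y_train)
print("intensity classification accuracy:", model.score(X_test, y_test))
```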