2011
DOI: 10.1109/tsmcb.2010.2082525

Automatically Detecting Pain in Video Through Facial Action Units

Abstract: In a clinical setting, pain is reported either through patient self-report or via an observer. Such measures are problematic as they are 1) subjective and 2) provide no specific timing information. Coding pain as a series of facial action units (AUs) avoids these issues, as it can be used to obtain an objective measure of pain on a frame-by-frame basis. Using video data from patients with shoulder injuries, in this paper, we describe an active appearance model (AAM)-based system that can automatically detect th…
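The abstract frames pain measurement as a frame-by-frame decision built on top of AU detections. The sketch below illustrates only that general idea, not the paper's actual system: the AU detector scores, the weighted combination rule, and the threshold are all hypothetical choices introduced here for illustration.

```python
import numpy as np

def pain_per_frame(au_scores, weights, threshold=0.5):
    """Combine per-frame AU detector outputs into a per-frame pain indicator.

    au_scores : (n_frames, n_aus) array of detector scores, one column per AU.
    weights   : (n_aus,) hypothetical weighting of each AU's contribution.
    Returns a boolean array marking frames flagged as "pain".
    """
    # Weighted evidence for pain in each frame (illustrative rule only;
    # the paper's frame-level decision rule is not reproduced here).
    evidence = au_scores @ weights
    return evidence >= threshold

# Toy usage: 5 frames, 3 AUs, equal weights.
scores = np.array([[0.1, 0.0, 0.2],
                   [0.8, 0.6, 0.7],
                   [0.4, 0.1, 0.0],
                   [0.9, 0.9, 0.8],
                   [0.0, 0.0, 0.1]])
print(pain_per_frame(scores, weights=np.ones(3) / 3))
```

Because the decision is made per frame, the output carries the timing information that self-report and observer ratings lack, which is the point the abstract emphasizes.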

Cited by 225 publications (156 citation statements)
References 18 publications
“…The majority of the existing works attempt to recognize AUs or certain AU combinations independently [4], [9], [7], [8], [11], [5], [24], [25]. A common limitation of these methods is that they construct independent AU classifiers that ignore the relations among the AUs.…”
Section: A. Multiple Facial AU Detection (mentioning)
confidence: 99%
“…Based on how the AU-specific classifiers are designed, they can be divided into two main categories: (a) static modeling approaches, where each frame is evaluated independently [4], [9], [7], [8], [11], and (b) temporal modeling approaches, where temporal dynamics are explored within a video sequence [5], [24], [26]. Representatives of the first group commonly apply independent classifiers, e.g., support vector machine (SVM) [4], [9], and Adaboost [7] on the collected features, or use the notion of domain adaptation to develop personalized AU-classifiers [8]. Alternatively, in [11] sparse representations are employed to create a dictionary of facial images with certain AU combinations.…”
Section: A. Multiple Facial AU Detection (mentioning)
confidence: 99%
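The "static modeling" family described in the quoted statement reduces to one independent binary classifier per AU, applied to each frame in isolation. The following is a minimal sketch of that pattern under stated assumptions: scikit-learn's LinearSVC stands in for whichever SVM variant a given cited paper used, feature extraction (e.g., AAM shape/appearance descriptors) is abstracted into a plain feature matrix, and the data are synthetic.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_independent_au_classifiers(features, au_labels):
    """features : (n_frames, n_features); au_labels : (n_frames, n_aus) binary."""
    classifiers = []
    for k in range(au_labels.shape[1]):
        clf = LinearSVC()          # one detector per AU, trained independently
        clf.fit(features, au_labels[:, k])
        classifiers.append(clf)
    return classifiers

def detect_aus(classifiers, features):
    """Frame-by-frame AU predictions; relations between AUs are ignored."""
    return np.column_stack([clf.predict(features) for clf in classifiers])

# Toy data: 200 frames, 10-dimensional features, 3 AUs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
Y = (rng.random((200, 3)) > 0.7).astype(int)
models = train_independent_au_classifiers(X, Y)
print(detect_aus(models, X[:5]).shape)  # (5, 3)
```

Because each classifier is trained and evaluated separately, co-occurrence structure among AUs is never modeled, which is exactly the limitation the citing works raise about this category of methods.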