2016
DOI: 10.1109/tip.2016.2539502

Discriminant Incoherent Component Analysis

Abstract: Face images convey rich information which can be perceived as a superposition of low-complexity components associated with attributes such as facial identity, expressions, and activation of facial action units (AUs). For instance, low-rank components characterizing neutral facial images are associated with identity, while sparse components capturing non-rigid deformations occurring in certain face regions reveal expressions and action unit activations. In this paper, the Discriminant Incoherent Compon…
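The abstract describes a low-rank-plus-sparse split of face images (low-rank for identity, sparse for expression deformations). The full DICA model adds discriminant and incoherence terms not reproduced in this report; as a minimal sketch of the underlying decomposition idea only, a generic robust PCA via the inexact augmented Lagrange multiplier (IALM) scheme can be written as follows (parameter defaults are the standard RPCA choices, not DICA's):

```python
import numpy as np

def shrink(X, tau):
    # soft-thresholding: proximal operator of the l1 norm
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    # singular value thresholding: proximal operator of the nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(D, lam=None, max_iter=100, tol=1e-7):
    """Split D into a low-rank part L and a sparse part S (D ~ L + S)
    by inexact augmented-Lagrangian iterations (IALM)."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # standard RPCA weight
    norm_two = np.linalg.norm(D, 2)
    mu, mu_bar, rho = 1.25 / norm_two, 1e7 / norm_two, 1.5
    Y = D / max(norm_two, np.abs(D).max() / lam)  # dual variable init
    L, S = np.zeros_like(D), np.zeros_like(D)
    for _ in range(max_iter):
        L = svd_shrink(D - S + Y / mu, 1.0 / mu)   # low-rank update
        S = shrink(D - L + Y / mu, lam / mu)       # sparse update
        Z = D - L - S                              # feasibility residual
        Y = Y + mu * Z
        mu = min(mu * rho, mu_bar)
        if np.linalg.norm(Z, 'fro') <= tol * np.linalg.norm(D, 'fro'):
            break
    return L, S
```

In the paper's setting, columns of D would be vectorized face images; L would then capture the shared, identity-related structure and S the localized expression/AU deformations.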

Cited by 10 publications (7 citation statements)
References 68 publications
“…Furthermore, some researchers are critical of the grid-based feature extraction, suggesting that the sub-regions are not necessarily well aligned with meaningful facial features [53]. Motivated by these findings and other recent works [54,33,55], in this study we adopt a hybrid approach to appearance feature extraction. In particular, we first apply the same transformation used for point registration to the pixel intensities of each face image to remove translation, scale and in-plane rotation effects.…”
Section: Accepted Manuscript
confidence: 97%
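The quoted passage applies the transformation estimated for point registration to the pixel intensities, removing translation, scale, and in-plane rotation. The cited papers do not spell out the estimator here; one common choice, shown as a sketch under that assumption, is a 2-D Procrustes (similarity) fit between detected and reference landmarks:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R, translation t minimizing
    || s * src @ R.T + t - dst ||^2 (2-D Procrustes alignment).
    src, dst: (n, 2) arrays of corresponding landmark points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d                  # centred point sets
    # optimal rotation from the SVD of the cross-covariance matrix
    U, sig, Vt = np.linalg.svd(B.T @ A)
    d = np.sign(np.linalg.det(U @ Vt))             # guard against reflection
    R = U @ np.diag([1.0, d]) @ Vt
    s = (sig * [1.0, d]).sum() / (A ** 2).sum()    # optimal isotropic scale
    t = mu_d - s * mu_s @ R.T
    return s, R, t
```

The same (s, R, t) can then be used to warp the image intensities (e.g. with `scipy.ndimage.affine_transform`), so shape and appearance are registered consistently.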
“…Moreover, due to the involved parties being engaged in naturalistic competitive conversations, the subjects often perform abrupt and extreme head movements (e.g., head nods, shakes, tilts), body movements (e.g., forward/backward leaning, spinning periodically on their swivel chairs) and gestures (e.g., hand crosses, hand wags). The aforementioned conditions pose obstacles to the computer vision pre-processing tasks, such as face detection, facial point tracking and registration [30,31], since the latter have to cope with frequent and large out-of-plane head rotations and occlusions [32,33]. Annotations.…”
Section: Accepted Manuscript
confidence: 99%
“…However, such methods require multiple facial landmarks to be pre-identified and do not propose a unified framework for dealing with hidden information, such as teeth, that is usually added in a separate, post-processing step. For the case of learning-based techniques, FES has received relatively less attention compared to expression recognition or face recognition across varying expressions (Zeng et al 2009;Jain and Li 2011;Georgakis et al 2016). Cootes et al (2001) combined shape and texture information into an Active Appearance Model (AAM).…”
Section: Related Work
confidence: 99%
“…The smallest resulting reconstruction residual then determines the identity or expression. We refer readers to [12] for the exact problem set-up and implementation details. Table 2 collects the computed recognition rates.…”
Section: Face and Facial Expression Recognition
confidence: 99%
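The quote above refers to [12] for the exact set-up, so the rule is not fully specified here; the generic pattern it describes (assign the label whose model reconstructs the test sample with the smallest residual) can be sketched as a nearest-subspace classifier, where `class_bases` is a hypothetical list of per-class basis matrices:

```python
import numpy as np

def classify_by_residual(x, class_bases):
    """Assign x to the class whose basis reconstructs it with the
    smallest least-squares residual (nearest-subspace rule).
    x: (d,) test vector; class_bases: list of (d, k_i) basis matrices."""
    residuals = []
    for B in class_bases:
        coeffs, *_ = np.linalg.lstsq(B, x, rcond=None)  # best fit in span(B)
        residuals.append(np.linalg.norm(x - B @ coeffs))
    return int(np.argmin(residuals)), residuals
```

For identity recognition the bases would span per-subject low-rank components; for expression recognition, per-expression sparse components.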