2021 · DOI: 10.3390/s21020589
Action Unit Detection by Learning the Deformation Coefficients of a 3D Morphable Model

Abstract: Facial Action Units (AUs) correspond to the deformation or contraction of individual facial muscles or combinations thereof. As such, each AU affects only a small portion of the face, and in many cases the deformations are asymmetric. Generating and analyzing AUs in 3D is particularly relevant for the potential applications it can enable. In this paper, we propose a solution for 3D AU detection and synthesis that builds on a newly defined 3D Morphable Model (3DMM) of the face. Differently from most 3DMMs…
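The abstract describes AUs as localized face deformations driven by the coefficients of a 3DMM. As a minimal sketch of the standard linear 3DMM formulation such an approach builds on, a face shape is the mean shape plus a coefficient-weighted sum of deformation components, S(α) = S̄ + Σₖ αₖ Cₖ. The mesh size, component count, random stand-in components, and the coefficient index used for the "AU" below are all illustrative assumptions, not values or code from the paper.

```python
import numpy as np

# Hypothetical sizes: a face mesh with V vertices and K deformation components.
V, K = 5023, 30

# Mean face shape and deformation components, stored as flattened (3V)-vectors
# so the model is a plain linear combination. Real components would be learned.
mean_shape = np.zeros(3 * V)
components = np.random.randn(K, 3 * V) * 0.01  # stand-in for learned bases

def synthesize(coefficients: np.ndarray) -> np.ndarray:
    """Deform the mean face by a linear combination of components:
    S(alpha) = S_mean + sum_k alpha_k * C_k."""
    return mean_shape + coefficients @ components

# Synthesizing a (hypothetical) AU: activate only the coefficients associated
# with that unit and leave the rest at zero.
alpha = np.zeros(K)
alpha[4] = 1.5  # index 4 is an illustrative placeholder, not a real AU id
au_mesh = synthesize(alpha).reshape(V, 3)
print(au_mesh.shape)  # (5023, 3)
```

Because each coefficient only moves the vertices its component touches, a sparse coefficient vector yields the kind of localized, possibly asymmetric deformation the abstract attributes to individual AUs.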

Cited by 10 publications (2 citation statements) · References 34 publications (39 reference statements)
“…3D FER methods have succeeded in mining prominent geometrical details via SPD (Symmetric Positive Definite) matrices [15], conformal mapping [3], depth maps [16], and more recently the prevalent statistical 3D Morphable Model (3DMM) [17] and point sets [11]. Among these, Refs.…”
Section: Related Work
confidence: 99%
“…Facial expressions play a primary role in interpersonal relations and are one fundamental way to convey our emotional state [7]. The automatic analysis of facial expressions first focused on images and videos, with rare examples using 3D data [1], [11], [26], [32]. These initial studies concentrated mostly on datasets with posed expressions [18], impersonated by actors.…”
Section: Introduction
confidence: 99%