2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017)
DOI: 10.1109/fg.2017.14
What Will Your Future Child Look Like? Modeling and Synthesis of Hereditary Patterns of Facial Dynamics

Cited by 14 publications (13 citation statements)
References 26 publications
“…We extend our previous study [10] in many ways. Along with an extended literature review, (1) we use the intensity of facial action units (AUs) instead of facial landmark displacement for both expression matching and learning temporal dynamics, (2) we model the facial appearance in a holistic manner, rather than learning facial regions individually, since a set of AUs can effectively describe the whole face during expression matching, (3) we extend our dataset by including the UvA-NEMO Disgust Database [9] and generate disgust videos of children in addition to their smile videos, (4) we enhance the reliability of the kinship verification method used in our experiments, and (5) we perform analyses to evaluate the quality of synthesized expressions in terms of the occurrence and intensity of AUs.…”
Section: Introduction
confidence: 54%
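Point (1) above replaces landmark displacements with AU intensities for expression matching. A minimal sketch of how such matching could work, purely illustrative and not from the cited paper (the function name, AU vector layout, and nearest-neighbor criterion are all assumptions): each frame is summarized as a vector of AU intensities, and parent frames are paired with the child frames whose AU vectors are closest.

```python
import numpy as np

def match_expression_frames(parent_aus, child_aus):
    """For each parent frame, find the child frame with the closest
    AU intensity vector.

    parent_aus, child_aus: sequences of per-frame AU intensity vectors
    (e.g., 17 AUs on a 0-5 scale, as produced by a FACS-based detector).
    Returns an array of child-frame indices, one per parent frame.
    """
    parent_aus = np.asarray(parent_aus, dtype=float)
    child_aus = np.asarray(child_aus, dtype=float)
    # Pairwise Euclidean distances, shape (n_parent_frames, n_child_frames),
    # computed via broadcasting.
    dists = np.linalg.norm(parent_aus[:, None, :] - child_aus[None, :, :], axis=2)
    # Nearest child frame for each parent frame.
    return dists.argmin(axis=1)
```

Because a whole-face AU vector describes the face holistically, a single distance over the vector suffices, rather than matching facial regions one by one.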
“…In order to synthesize videos of children from videos of the corresponding parents, we employ the kinship set [4] of the UvA-NEMO Smile [8] and Disgust [9] databases, which are the only available kinship video databases in the literature. In our previous study [10], we have evaluated our model solely on UvA-NEMO Smile database. In the current study, to show the generalizability of the proposed method for other facial expressions, we include an evaluation on UvA-NEMO Disgust database.…”
Section: Database
confidence: 99%
“…Similarly, autoencoder-based (AE) methods have similar drawbacks. [10] aims to generate kinship faces by modeling facial dynamics (i.e., expression) along with visual appearance in an AE framework, and is thus able to transfer personal expressions to prospective children.…”
Section: Face Synthesis
confidence: 99%
“…Out of the many recent works in automatic kinship recognition, only a few have attempted the kinship generation problem. Ertugrul et al. [4] focused on generating the facial …”
Section: Introduction
confidence: 99%