2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2017.280
DyadGAN: Generating Facial Expressions in Dyadic Interactions

Cited by 60 publications (31 citation statements); references 16 publications.
“…One similar work that uses a probabilistic method is DyadGAN [25], which trained a conditional GAN to generate face images based on the interlocutor's facial expressions. However, the work only produced a single image, ignoring temporal aspects.…”
Section: Interlocutor-aware Gesture Generation
confidence: 99%
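The conditional-GAN setup this citation describes — a generator that maps the interlocutor's facial-expression code plus a noise vector to a single face image — can be sketched in a few lines. All dimensions and the single linear "generator" below are illustrative stand-ins, not the actual DyadGAN architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): an 8-D expression
# condition (e.g., action-unit intensities) and a 16-D noise vector.
COND_DIM, NOISE_DIM, IMG_DIM = 8, 16, 64

# Toy "generator" weights: one linear layer standing in for a real
# conditional-GAN generator network.
W = rng.standard_normal((COND_DIM + NOISE_DIM, IMG_DIM)) * 0.1

def generate(interlocutor_expression, noise):
    """Concatenate the interlocutor's expression code with noise and
    map the result to a (toy) image vector, as a conditional GAN does."""
    z = np.concatenate([interlocutor_expression, noise])
    return np.tanh(z @ W)  # tanh keeps outputs in [-1, 1], the usual GAN image range

cond = rng.standard_normal(COND_DIM)
img = generate(cond, rng.standard_normal(NOISE_DIM))
print(img.shape)
```

Note that each call produces one independent image, which mirrors the criticism in the quote: nothing in this formulation models the temporal structure of an interaction.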
“…As modeling conversational dynamics is difficult to achieve, most non-verbal behavior generation methods only use speech and/or semantic content produced by the agent as inputs to the system [27,32,45]. Recently a few systems have been introduced that use non-verbal behaviors from the interlocutor to control non-verbal output from the system [1,17,20,25]. We continue this line of work and present a probabilistic system, based on normalizing flows, for generating facial gestures in dyadic settings.…”
Section: Introduction
confidence: 99%
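The "probabilistic system based on normalizing flows" mentioned in this quote relies on invertible transforms with a tractable change-of-variables density. A minimal single-step affine flow (with hypothetical scale and shift values, not learned parameters) illustrates why flows give exact log-likelihoods:

```python
import numpy as np

# One affine "flow" step: x = exp(s) * z + t, where z ~ N(0, 1).
# s and t are hypothetical stand-ins for learned parameters.
s, t = 0.5, 1.0

def forward(z):
    """Map a base sample z to data space."""
    return np.exp(s) * z + t

def log_prob(x):
    """Exact density via the change-of-variables formula:
    log p(x) = log N(z; 0, 1) - s, with z = (x - t) * exp(-s)."""
    z = (x - t) * np.exp(-s)
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))
    return log_base - s  # subtract log|dx/dz| = s

x = forward(0.3)
print(log_prob(x))
```

Real flow models stack many such invertible steps, but the exact-likelihood property shown here is what makes them attractive for probabilistic gesture generation.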
“…Additionally, a range of metric-based approaches have been proposed to quantitatively evaluate adversarial training frameworks. For example, the authors in [51] compared the intra-set and inter-set average Euclidean distances between different sets of the generated faces. Similarly, to quantitatively evaluate models for emotion conversion, other evaluation measures proposed in the literature include BiLingual Evaluation Understudy (BLEU) [52] and Recall-Oriented Understudy for Gisting Evaluation (ROUGE) [53] for text, and a signal-to-noise ratio test for speech [15].…”
Section: Performance Evaluation
confidence: 99%
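The intra-set vs. inter-set average Euclidean distance metric attributed to [51] above is straightforward to compute. The embeddings below are random stand-ins for generated-face feature vectors, with the second set shifted so the two conditions actually differ:

```python
import numpy as np

def avg_pairwise_dist(A, B=None):
    """Average Euclidean distance within one set (B is None)
    or between two sets of feature vectors."""
    if B is None:
        d = [np.linalg.norm(a - b) for i, a in enumerate(A) for b in A[i + 1:]]
    else:
        d = [np.linalg.norm(a - b) for a in A for b in B]
    return float(np.mean(d))

rng = np.random.default_rng(1)
set1 = rng.standard_normal((10, 32))        # e.g., faces generated under condition 1
set2 = rng.standard_normal((10, 32)) + 3.0  # condition 2, deliberately shifted

intra = avg_pairwise_dist(set1)
inter = avg_pairwise_dist(set1, set2)
print(intra < inter)  # distinct conditions should give larger inter-set distance
```

A generator that respects its conditioning should produce inter-set distances clearly larger than intra-set distances; if the two are comparable, the condition is being ignored.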
“…Radford et al. [19], 2016, SYN, image, DCGAN: vector arithmetic can be done in latent vector space, e.g., smiling woman − neutral woman + neutral man = smiling man.
Chen et al. [69], 2016, SYN, image, infoGAN: latent code can be interpreted; supports gradual transformation.
Huang & Khan [51], 2017, SYN, image/video, DyadGAN: interaction scenario; identity + attribute from the interviewee.
Melis & Amores [83], 2017, SYN, image (art), cGAN: generates emotional artwork.
Bao et al. [49], 2018, SYN, image, identity-preserving GAN: the identity and attributes of faces are separated.
Pham et al. [81], 2018, SYN, image/video, GATH: conditioned by AUs; from static image to dynamic video.
Song et al. [82], 2018, SYN, image/video, cGAN: static image to dynamic video, conditioned by audio sequences.
Nojavanasghari et al. [50], 2018, SYN, image, DyadGAN (with RNN).
…and sentiment analysis has yet to be reported, to answer questions such as which model is faster, more accurate, or easier to implement. This absence is mainly due to the lack of benchmark datasets and thoughtfully designed metrics for each specific application (i.e., emotion generation, emotion conversion, and emotion perception and understanding).…”
Section: B Other Ongoing Breakthroughs
confidence: 99%
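The latent-space arithmetic attributed to DCGAN in the survey above (smiling woman − neutral woman + neutral man = smiling man) reduces to vector addition of attribute "directions" in latent space. A toy sketch with synthetic directions (random vectors standing in for averaged DCGAN latents, not learned ones):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical attribute/identity directions; in DCGAN these would be
# averages of learned latent vectors over labeled sample groups.
smile = rng.standard_normal(100)
woman = rng.standard_normal(100)
man = rng.standard_normal(100)

smiling_woman = woman + smile
neutral_woman = woman
neutral_man = man

# smiling woman - neutral woman + neutral man = man + smile
result = smiling_woman - neutral_woman + neutral_man
print(np.allclose(result, man + smile))  # prints True
```

In a real GAN the arithmetic is done on latent codes and the result is decoded by the generator; here the decoding step is omitted because the point is only the additive structure.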
“…Bavelas [4] clearly distinguished the different functions of feedback and defined two types of feedback: generic and specific. Generic feedback simply signals that the listener is …”
[Figure residue: example dialogue turns [10]–[13] among speakers A, B, and C (e.g., C: "How about Sushi …", B: "That's right").]
Section: Introduction
confidence: 99%