Proceedings of the 26th ACM International Conference on Multimedia 2018
DOI: 10.1145/3240508.3240618
BeautyGAN: Instance-level Facial Makeup Transfer with Deep Generative Adversarial Network

Cited by 165 publications (19 citation statements)
References 13 publications
“…Style transfer [4] is an image-to-image translation technique that aims to separate and recombine the content and style information of images. Built on the style transfer framework, makeup transfer [1,5,12,19,23] is proposed to transfer the makeup style of the reference image to the source image while keeping the result of face recognition unchanged. Both style transfer and makeup transfer rely on the cycle consistency loss or its variants [43,45] to maintain the stability of source images.…”
Section: Style Transfer and Makeup Transfer
confidence: 99%
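The citation statement above notes that both style transfer and makeup transfer rely on the cycle consistency loss to keep the source image stable. A minimal sketch of that loss, with hypothetical toy generators `G` (source→reference domain) and `F` (reference→source domain) standing in for the real networks, and images represented as flat lists of floats purely for illustration:

```python
# Toy sketch of the cycle consistency loss (CycleGAN-style), assuming
# hypothetical generators G and F; images are flat float lists here,
# not real tensors.

def l1(a, b):
    """Mean absolute difference between two equally sized 'images'."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency_loss(x, y, G, F):
    """L_cyc = ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1."""
    return l1(F(G(x)), x) + l1(G(F(y)), y)

# Identity "generators": reconstructions are perfect, so the loss is zero.
G = lambda img: img
F = lambda img: img

x = [0.1, 0.5, 0.9]   # source image
y = [0.3, 0.3, 0.3]   # reference image
print(cycle_consistency_loss(x, y, G, F))  # → 0.0
```

In practice the loss is computed on image tensors and added to the adversarial objectives, penalizing generators whose round trip `F(G(x))` drifts away from the original source.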
“…Then, this product representation is decoded and rendered on the source image using generative models such as generative adversarial networks (GANs) [GPAM ∗ 20] or variational autoencoders (VAEs) [KW14]. In particular, this idea has been successfully used for makeup transfer [LQD ∗ 18, JLG ∗ 20], hair synthesis [SDS ∗ 21, KCP ∗ 21] and is rapidly emerging in the field of fashion articles [JB17]. Recent methods attempt to provide controllable rendering [KPGB20], or propose to leverage additional scene information in their models, such as segmentation masks for fashion items [CPLC21,GSG ∗ 21] or UV maps for makeup [NTH21].…”
Section: Related Work
confidence: 99%
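The two-stage structure described above (extract a product representation from a reference image, then decode and render it on the source) can be made concrete with a deliberately crude sketch. The real methods use learned GAN/VAE decoders; here the "appearance code" is just a mean color and the "decoder" a simple blend, so all function names and the rendering rule are hypothetical:

```python
# Toy illustration of the extract-then-render pipeline. The real systems
# learn both stages with deep generative models; this sketch only mirrors
# the two-stage structure (all names are hypothetical).

def extract_appearance(reference):
    # "Appearance code": per-channel mean color of the reference pixels.
    n = len(reference)
    return tuple(sum(px[c] for px in reference) / n for c in range(3))

def render(source, code, strength=0.5):
    # "Decoder": blend each source pixel toward the extracted code.
    return [tuple((1 - strength) * px[c] + strength * code[c] for c in range(3))
            for px in source]

reference = [(0.9, 0.2, 0.3), (0.8, 0.1, 0.2)]  # reddish "makeup" reference
source = [(0.5, 0.5, 0.5), (0.6, 0.6, 0.6)]     # neutral source region
code = extract_appearance(reference)
out = render(source, code)
```

The separation matters for the efficiency concern raised in the next statement: once the appearance code is extracted, only the (potentially lightweight) rendering stage has to run per video frame.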
“…This task consists in extracting a product appearance from a single reference image and synthesizing it on the image of another person. However, existing methods in this domain are often based on large generative networks that suffer from temporal inconsistencies, and cannot be used to process a video stream in real‐time on mobile devices [KJB ∗ 21,LQD ∗ 18].…”
Section: Introduction
confidence: 99%
“…Human evaluation studies, on the other hand, tend to focus on comparing the outputs of different algorithms, again ignoring the importance of context on the evaluation results. Typically, participants are asked to rank pictures produced using different algorithms from best to worst (Li et al 2018;Liu et al 2017;Zhou et al 2019b) or rate the pictures by user perception metrics, such as realism, overall quality, and identity (Yin et al 2017;Zhou et al 2019b). For example, Li et al (2018) recruited 84 volunteers to rank three generated images out of 10 non-makeup and 20 makeup test images based on quality, realism, and makeup style similarity.…”
Section: Lack of Evaluation Studies for Artificial Pictures
confidence: 99%
“…Lee et al. (2018) employed a similar approach by asking users which image is more realistic out of samples created using different generation methods.…”
Section: Lack of Evaluation Studies for Artificial Pictures
confidence: 99%