2020
DOI: 10.48550/arxiv.2002.03040
Preprint
Local Facial Attribute Transfer through Inpainting

Abstract: The term "attribute transfer" refers to the task of altering images such that the semantic interpretation of a given input image is shifted in an intended direction, quantified by semantic attributes. Prominent example applications are photorealistic changes of facial features and expressions, such as changing the hair color, adding a smile, or enlarging the nose, or altering the entire context of a scene, such as transforming a summer landscape into a winter panorama. Recent advances in attrib…

Cited by 1 publication (1 citation statement)
References 23 publications
“…CGAN (Mirza and Osindero, 2014) was the first work to introduce conditions on GANs, shortly followed by a flurry of works ever since. There have been many different forms of conditional image generation, including class-based (Mirza and Osindero, 2014; Odena et al., 2017; Brock et al., 2018), image-based (Isola et al., 2017; Huang et al., 2018; Mao et al., 2019), mask- and bounding box-based (Hinz et al., 2019; Park et al., 2019; Durall et al., 2020), as well as text-based (Reed et al., 2016; Xu et al., 2018; Hong et al., 2018). This intensive research has led to the development of an impressive variety of techniques, paving the road towards the challenging task of generating more complex scenes.…”
Section: Conditional Generative Adversarial Network
confidence: 99%
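The conditioning mechanism the citing passage attributes to CGAN (Mirza and Osindero, 2014) can be illustrated with a minimal sketch: the condition y (here a one-hot class label) is concatenated with the latent noise vector z before being fed to the generator. This is an illustrative numpy sketch under that assumption — the function names are mine, not from any cited work:

```python
import numpy as np

def one_hot(label, num_classes):
    """Encode an integer class label as a one-hot vector."""
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

def conditional_generator_input(noise, label, num_classes):
    """Build the generator input used in class-conditional GANs:
    the latent noise z concatenated with the condition y."""
    return np.concatenate([noise, one_hot(label, num_classes)])

# Example: 100-dim latent vector conditioned on class 3 of 10.
z = np.random.randn(100)
x = conditional_generator_input(z, label=3, num_classes=10)
print(x.shape)  # (110,) — noise dims plus one slot per class
```

The same concatenation idea carries over to the other conditioning forms listed above (image-, mask-, or text-based), with the one-hot label replaced by an embedding of the respective condition.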