Abstract: The term "attribute transfer" refers to the task of altering images such that the semantic interpretation of a given input image is shifted towards an intended direction, which is quantified by semantic attributes. Prominent example applications are photorealistic changes of facial features and expressions, like changing the hair color, adding a smile, or enlarging the nose, or altering the entire context of a scene, like transforming a summer landscape into a winter panorama. Recent advances in attrib…
“…CGAN (Mirza and Osindero, 2014) was the first work to introduce conditions on GANs, shortly followed by a flurry of works ever since. There have been many different forms of conditional image generation, including class-based (Mirza and Osindero, 2014; Odena et al, 2017; Brock et al, 2018), image-based (Isola et al, 2017; Huang et al, 2018; Mao et al, 2019), mask- and bounding box-based (Hinz et al, 2019; Park et al, 2019; Durall et al, 2020), as well as text-based (Reed et al, 2016; Xu et al, 2018; Hong et al, 2018). This intensive research has led to the impressive development of a huge variety of techniques, paving the road towards the challenging task of generating more complex scenes.…”
Generative adversarial networks are the state-of-the-art approach to learned synthetic image generation. Although early successes were mostly unsupervised, this trend has gradually been superseded by approaches based on labelled data. These supervised methods allow much finer-grained control over the output image, offering more flexibility and stability. Nevertheless, the main drawback of such models is the need for annotated data. In this work, we introduce a novel framework that benefits from two popular learning techniques, adversarial training and representation learning, and takes a step towards unsupervised conditional GANs. In particular, our approach exploits the structure of a latent space (learned by representation learning) and employs it to condition the generative model. In this way, we break the traditional dependency between condition and label, substituting the latter with unsupervised features coming from the latent space. Finally, we show that this new technique is able to produce samples on demand while keeping the quality of its supervised counterpart.
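To make the conditioning idea concrete, the sketch below shows one way a generator could be conditioned on unsupervised features from a representation-learning encoder instead of on class labels. This is a minimal illustrative example, not the authors' implementation: all module names, layer sizes, and the 32×32 image resolution are assumptions, and PyTorch is used only for convenience.

```python
# Minimal sketch (assumed architecture, not the paper's actual model):
# condition a GAN generator on unsupervised encoder features instead of labels.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Representation learner: maps an image to an unsupervised feature vector."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Generator whose condition is the encoder feature vector, not a label."""
    def __init__(self, noise_dim=100, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * 32 * 32), nn.Tanh(),
        )

    def forward(self, z, features):
        # Replace the usual one-hot label with the unsupervised feature vector.
        return self.net(torch.cat([z, features], dim=1)).view(-1, 3, 32, 32)

# Usage: derive the condition from a reference image, then sample on demand.
encoder, generator = Encoder(), Generator()
reference = torch.randn(8, 3, 32, 32)       # stand-in for a batch of real images
condition = encoder(reference).detach()     # unsupervised condition vector
fake = generator(torch.randn(8, 100), condition)
print(fake.shape)  # torch.Size([8, 3, 32, 32])
```

The key point of the sketch is that nothing in the generator's input requires human annotation: the condition is whatever structure the encoder has learned, so sample-on-demand control comes from the latent space rather than from labels.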