2018
DOI: 10.48550/arxiv.1810.05724
Preprint
Unpaired High-Resolution and Scalable Style Transfer Using Generative Adversarial Networks

Abstract: Neural networks have proven their capabilities by outperforming many other approaches on regression and classification tasks across various kinds of data. Other astonishing results have been achieved using neural nets as data generators, especially in the setting of generative adversarial networks (GANs). One special application is the field of image domain translation, where the goal is to take an image with a certain style (e.g. a photograph) and transform it into another one (e.g. a painting). If such a task is…

Cited by 3 publications (6 citation statements) · References 20 publications (26 reference statements)
“…Zhang and Dana (2017) proposed a multi-style generative network by introducing a CoMatch layer that predicts the second-order feature statistics of the target style, achieving superior image quality compared to state-of-the-art approaches. Junginger et al. (2018) proposed unpaired high-resolution and scalable style transfer utilizing GANs, which helps to preserve local image details while also maintaining global consistency. Karras et al. (2019) proposed a style-based generator architecture for GANs, which achieves intuitive, scale-specific control of synthesis.…”
Section: Related Work (mentioning, confidence: 99%)
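The "second-order feature statistics" the CoMatch layer predicts are, in the classic style-transfer formulation, the channel-wise Gram matrix of a CNN feature map. The following is an illustrative sketch of that statistic only, not of the actual CoMatch layer:

```python
import numpy as np

def gram_matrix(features):
    """Second-order feature statistics of a CNN feature map.

    features: array of shape (C, H, W), one channel map per row after
    flattening. Returns the (C, C) matrix of normalized inner products
    between channel maps, which captures style while discarding layout.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

# Toy check: the Gram matrix is square in the channel dimension
# and symmetric by construction.
feats = np.random.rand(4, 8, 8)
g = gram_matrix(feats)
print(g.shape)               # (4, 4)
print(np.allclose(g, g.T))   # True
```

Matching this matrix between a generated image and a style target (e.g. via a Frobenius-norm loss) is the standard way such statistics are used.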
“…The scene structure and object boundaries of the content image need to be retained for high-quality IST, while the appearance should be aligned with the style image. In recent years, IST technologies (Gatys et al., 2016; Ghiasi et al., 2017; Junginger et al., 2018; Karras et al., 2019; Li et al., 2018; Li & Wand, 2016; Liao & Huang, 2022; Qiao et al., 2021; Wang et al., 2020; Yao et al., 2019; Yeh et al., 2020; Zhang & Dana, 2017) based on deep learning (DL) have demonstrated that the correlations among features extracted by CNNs are highly effective for capturing visual content and style, and can be used to synthesize images with similar content and style.…”
Section: Introduction (mentioning, confidence: 99%)
“…More importantly, the reduced model size enables universal style transfer on ultra-resolution images. To the best of our knowledge, only one recent work [30] employs a GAN [16] to learn an unpaired style transfer network for ultra-resolution images. However, it achieves this by working on image subsamples and then merging them back into a whole image.…”
Section: Model Compression and Acceleration (mentioning, confidence: 99%)
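The subsample-and-merge strategy described above can be sketched as fixed-size tiling: process each tile through the network, then reassemble. The `stylize` argument here is a placeholder for any style-transfer network; this is a minimal illustration of the memory trade-off, not the cited method:

```python
import numpy as np

def stylize_tiled(image, stylize, tile=256):
    """Apply `stylize` (any HxWxC -> same-shape function) tile by tile.

    Processing fixed-size tiles keeps peak memory constant regardless of
    image size, which is what makes ultra-resolution inputs feasible.
    Naive tiling like this can leave seams at tile borders, which is why
    global consistency needs extra care (e.g. overlapping tiles).
    """
    h, w, _ = image.shape
    out = np.empty_like(image)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            out[y:y + tile, x:x + tile] = stylize(image[y:y + tile, x:x + tile])
    return out

# Identity "network" for illustration: output equals input exactly.
img = np.random.rand(512, 768, 3)
res = stylize_tiled(img, lambda patch: patch, tile=256)
print(np.allclose(res, img))  # True
```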
“…Let z_T and y be the latent code at γ_{M_y}(T) and the ground truth, respectively. We generate two random sets Z̃ and Ỹ using the distribution

    Z̃ = N(z_T; ε²) and Ỹ = Ψ(y),   (12)

where Ψ(·) applies random perturbations such as brightness, contrast and small noise, and 0 < ε² ≪ 1. One trivial method to ensure that a bijective mapping exists is to apply a loss function ‖y_i − G(z̃_i)‖, ∀z̃_i ∈ Z̃, y_i ∈ Ỹ, to update the generator.…”
Section: Encouraging the Local Homeomorphism (mentioning, confidence: 99%)
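The construction in the quoted snippet can be sketched directly: sample latents in a small Gaussian ball around z_T, build perturbed targets with a Ψ of the described kind, and average the per-pair reconstruction norm. All names, the choice of perturbations, and the magnitudes below are illustrative assumptions, not the cited paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sets(z_t, y, n=8, eps=1e-2):
    """Sample Z~ from N(z_T; eps^2 I) and perturbed targets Y~ = Psi(y).

    Psi here is a stand-in: mild multiplicative brightness/contrast
    jitter plus small additive noise, as the snippet describes.
    """
    Z = z_t + eps * rng.standard_normal((n,) + z_t.shape)
    gain = 1.0 + 0.05 * rng.standard_normal((n, 1))
    Y = gain * y + 1e-3 * rng.standard_normal((n,) + y.shape)
    return Z, Y

def pairwise_recon_loss(G, Z, Y):
    """Mean of ||y_i - G(z_i)|| over the paired samples, the 'trivial'
    loss the snippet mentions for encouraging a bijective mapping."""
    return float(np.mean([np.linalg.norm(y - G(z)) for z, y in zip(Z, Y)]))

# With an identity generator and z_T == y, the loss stays near zero
# because both perturbations are small.
z_t = np.zeros(16)
Z, Y = make_sets(z_t, z_t)
print(pairwise_recon_loss(lambda z: z, Z, Y) < 1.0)  # True
```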
“…A natural extension from unconditional GANs to conditional GANs (cGANs) [24,11] can be achieved by conditioning both the discriminator and the generator on a conditioning signal x ∈ X. Recently, conditional generative modeling has made substantial progress on a diverse set of tasks including image-to-image translation [32,11,41,34,18,27], style transfer [40,12], inpainting [35,25,33], and super-resolution [22,7,39,20]. Since many of these tasks are ill-posed (many possible solutions exist for a given input), an ideal generator should be able to capture one-to-many mappings.…”
[Figure 1: Overview of our approach.]
Section: Introduction (mentioning, confidence: 99%)