2023
DOI: 10.54364/aaiml.2023.1171
InkGAN: Generative Adversarial Networks for Ink-And-Wash Style Transfer of Photographs

Abstract: In this work, we present a novel approach for Chinese Ink-and-Wash style transfer using a GAN structure. The proposed method incorporates a specially designed smooth loss tailored for this style transfer task, and an end-to-end framework that seamlessly integrates various components for efficient and effective image style transfer. To demonstrate the superiority of our approach, comparative results against other popular style transfer methods such as CycleGAN are presented. The experimentation showcased the…
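The abstract names a "smooth loss" but this excerpt gives no formula for it. A common smoothness term in image style transfer is the total variation penalty, sketched below in Python/NumPy purely as an illustrative assumption, not as the loss actually used in InkGAN:

```python
import numpy as np

def total_variation_loss(img):
    """Total-variation smoothness penalty for an H x W x C image array.

    Generic smoothness term often used in style transfer; the actual
    smooth loss in InkGAN is not specified in this excerpt.
    """
    # Absolute differences between vertically and horizontally adjacent pixels
    dh = img[1:, :, :] - img[:-1, :, :]
    dw = img[:, 1:, :] - img[:, :-1, :]
    return float(np.sum(np.abs(dh)) + np.sum(np.abs(dw)))

# A constant image has zero total variation; any edge increases the penalty.
flat = np.ones((8, 8, 3))
print(total_variation_loss(flat))  # 0.0
```

Minimizing such a term encourages the generated image to be piecewise smooth, which suppresses high-frequency noise while keeping large-scale strokes.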

Cited by 3 publications (3 citation statements)
References 20 publications (24 reference statements)
“…Nevertheless, the auto-encoder architecture is unsuitable for diffusion models. Zuo et al [20] introduce the style scaling injection module and the style degree interpretation module to the existing GAN-based style transfer methods [29,30]. This allows for controlling the degree of image stylization through fine-tuning the network with datasets.…”
Section: Style Transfer (mentioning, confidence: 99%)
“…Some other relevant works in this category are (Gygli et al, 2014), (Fu et al, 2017) -which uses the chat data as supplementary information to determine the location of highlights -and (Song, 2016) -which used the games special effects as a marker for the location of video highlights. Some inspiring work like (Yu et al, 2023) gives insights from another perspective for image treatment.…”
Section: Video Modality (mentioning, confidence: 99%)
“…In (Ford et al, 2017) it was shown that online chat language exhibits unique linguistic patterns or rules that are "short", "sudden", or even "convulsive", in addition to the irregular use of words and non-words (emojis, acronyms, internet slang and jargons) that don't conform to canonical English grammar. This presents a serious challenge for NLP neural network models (Zhang et al, 2021;Chen et al, 2023;Yang et al, 2023) that are trained on clean English language.…”
Section: Text Modality (mentioning, confidence: 99%)