2021 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip42928.2021.9506254
Painting Style-Aware Manga Colorization Based On Generative Adversarial Networks

Abstract: Japanese comics (called manga) are traditionally created in monochrome format. In recent years, in addition to monochrome comics, full-color comics, a more attractive medium, have appeared. Unfortunately, color comics require manual colorization, which incurs high labor costs. Although automatic colorization methods have been proposed recently, most of them are designed for illustrations, not for comics. Unlike illustrations, comics are composed of many consecutive images, so the painting style must be consistent …
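The abstract describes a GAN-based colorization approach at a high level. As a paper-agnostic illustration (not the authors' actual model), the adversarial objective typical of image-to-image colorization GANs can be sketched as a discriminator loss plus a generator loss with an L1 color-fidelity term; the `l1_weight` value and the L1 term itself are assumptions borrowed from common image-to-image GAN practice, not from this paper:

```python
import numpy as np

def bce(pred, target):
    # Binary cross-entropy on sigmoid outputs, clipped for numerical safety.
    eps = 1e-7
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def discriminator_loss(d_real, d_fake):
    # D should output 1 on real color pages and 0 on generated ones.
    return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

def generator_loss(d_fake, fake_rgb, real_rgb, l1_weight=10.0):
    # G tries to fool D (adversarial term) while staying close to the
    # ground-truth colors (L1 term, as in pix2pix-style models).
    adv = bce(d_fake, np.ones_like(d_fake))
    l1 = float(np.mean(np.abs(fake_rgb - real_rgb)))
    return adv + l1_weight * l1
```

A well-trained discriminator (scoring real pages near 1 and fakes near 0) yields a lower `discriminator_loss` than an undecided one, which is what adversarial training pushes toward.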

Cited by 4 publications (1 citation statement)
References 18 publications
“…We do not employ for the comparison the recently proposed approaches that strongly rely on user input [22], [24], [25], since they have the same problems with automatic colorization as Style2Paints. Moreover, we do not perform the comparison with recent methods [23], [26] that provide incremental improvements to [8], because those improvements are negligible relative to the domain gap that affects the models similarly to [8].…”
Section: Model Comparison
confidence: 99%