2019
DOI: 10.1007/978-981-13-6473-0_17

Anime Sketch Coloring with Swish-Gated Residual U-Net

Cited by 7 publications (1 citation statement)
References 15 publications
“…In this work, we implement the Context Generation Network as a 3D variant of the U-Net architecture originally proposed by Ronneberger et al. [56]. U-Net has been applied successfully to computer vision tasks that involve dense localized predictions, such as image style transfer [57], image segmentation [58,59], image enhancement [60], image coloring [61,62], and image generation [63,64]. Unlike the original U-Net architecture, we replace the 2D convolutions with 3D counterparts and use residual blocks instead of individual convolution layers for better training properties of the network. The U-Net comprises a contractive part followed by an expansive part.…”
Section: Context Generation Network (mentioning)
Confidence: 99%