2019 16th Conference on Computer and Robot Vision (CRV)
DOI: 10.1109/crv.2019.00033

Automatic Temporally Coherent Video Colorization

Abstract: Greyscale image colorization for applications in image restoration has seen significant improvements in recent years. However, many of these learning-based techniques struggle to effectively colorize sparse inputs. With the consistent growth of the anime industry, the ability to colorize sparse input such as line art can significantly reduce cost and redundant work for production studios by eliminating the in-between frame colorization process. Simply using existing methods yields inconsistent colors bet…


Cited by 21 publications (46 citation statements) · References 24 publications (31 reference statements)
“…Results in Table 1, obtained from an existing baseline model trained on the Dragonball dataset with the previous ground truth frame as a condition in [15], still suffer from the flicker effect due to inconsistencies in color between subsequent frames. In addition, unfamiliar backgrounds and characters suffered the most, as the model colored them differently for each frame.…”
Section: Discussion
confidence: 99%
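The baseline referred to above conditions the colorization generator on the previous colour frame so that the palette can carry over between subsequent frames. Below is a minimal sketch of that conditioning loop, assuming a PyTorch-style generator; the colorize_sequence helper, tensor shapes, and channel layout are illustrative assumptions, not code from [15].

# Hypothetical sketch of previous-frame conditioning: the generator sees the
# current line-art frame concatenated channel-wise with the previous colour
# frame, so its colors can follow the preceding output.  The Generator
# interface and shapes here are assumptions for illustration only.
import torch

def colorize_sequence(generator, lineart_frames, first_color_frame):
    """Colorize a list of line-art frames, conditioning each step on the
    previously produced colour frame (ground truth for the first step)."""
    outputs = []
    prev_color = first_color_frame                            # (1, 3, H, W)
    for lineart in lineart_frames:                            # each (1, 1, H, W)
        cond_input = torch.cat([lineart, prev_color], dim=1)  # (1, 4, H, W)
        prev_color = generator(cond_input)                    # next colour frame
        outputs.append(prev_color)
    return outputs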
“…Network Architecture. We adopt the GAN-based architecture proposed by [47] for our HDR frame reconstruction. The proposed generator network is based on an encoder-decoder architecture, where the encoder first downsamples the image twice (H × W → H/4 × W/4).…”
Section: GAN Based HDR Frame Generation
confidence: 99%
“…The feature map is then passed through 8 Res-Blocks followed by two upsampling layers. Similar to [47], we use the instance norm layer. We wrap a convolution layer, an instance-norm layer, and a ReLU activation layer into one basic unit.…”
Section: GAN Based HDR Frame Generation
confidence: 99%
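Read together, the two excerpts above describe a compact encoder-decoder generator: two downsampling stages (H × W → H/4 × W/4), 8 residual blocks, two upsampling stages, and instance normalization throughout, with conv + instance-norm + ReLU as the basic unit. The following is a minimal PyTorch sketch of such a generator; channel widths, kernel sizes, the upsampling mode, and the Tanh output head are assumptions rather than details taken from either paper.

# Minimal PyTorch sketch of the generator described above: an encoder that
# downsamples twice (H x W -> H/4 x W/4), 8 residual blocks, and two
# upsampling stages.  Channel widths, kernel sizes, and the output head are
# assumptions, not taken from either paper.
import torch
import torch.nn as nn


def conv_in_relu(in_ch, out_ch, kernel_size=3, stride=1):
    """Basic unit: convolution -> instance norm -> ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding=kernel_size // 2),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class ResBlock(nn.Module):
    """Residual block that keeps the channel count and spatial size."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            conv_in_relu(ch, ch),
            nn.Conv2d(ch, ch, 3, 1, 1),
            nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)  # residual connection


class Generator(nn.Module):
    def __init__(self, in_ch=3, out_ch=3, base=64):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_in_relu(in_ch, base),
            conv_in_relu(base, base * 2, stride=2),      # H x W     -> H/2 x W/2
            conv_in_relu(base * 2, base * 4, stride=2),  # H/2 x W/2 -> H/4 x W/4
        )
        self.blocks = nn.Sequential(*[ResBlock(base * 4) for _ in range(8)])
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            conv_in_relu(base * 4, base * 2),
            nn.Upsample(scale_factor=2, mode="nearest"),
            conv_in_relu(base * 2, base),
            nn.Conv2d(base, out_ch, 3, 1, 1),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.blocks(self.encoder(x)))


# Shape check: a 256x256 input comes back at 256x256.
# g = Generator()
# print(g(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 3, 256, 256])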
“…Most of these applications involve image processing. Although there have been some studies involving video processing, such as video generation [115], video colorization [116], [117], video inpainting [118], motion transfer [119], and facial animation synthesis [120]- [123], the research on video using GANs is limited. In addition, although GANs have been applied to the generation and synthesis of 3D models, such as 3D colorization [124], 3D face reconstruction [125], [126], 3D character animation [127], and 3D textured object generation [128], the results are far from perfect.…”
Section: B. Future Opportunities
confidence: 99%