2018
DOI: 10.1145/3197517.3201365
Figure 1: Colorization results of black-and-white photographs. Our method provides the capability of generating multiple plausible colorizations by giving different references. Input images (from left to right, top to bottom): Leroy Skalstad/pixabay, Peter van der Sluijs/wikimedia.

Abstract: We propose the first deep learning approach for exemplar-based local colorization. Given a reference color image, our convolutional neural network directly maps a grayscale image to an output colorized image. Rather than usi…
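To make the abstract's setup concrete, below is a minimal sketch of an exemplar-based colorization forward pass: a network that takes the target's luminance channel plus a reference color image and predicts the target's chrominance channels. The layer sizes and the simple channel concatenation of the reference are illustrative assumptions, not the architecture described in the paper.

```python
# Minimal sketch (not the authors' architecture): a CNN that, given a
# grayscale target (L channel) and a reference color image resampled to the
# same resolution, predicts the ab chrominance channels of the target.
import torch
import torch.nn as nn


class ExemplarColorizationSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: 1 channel (target L) + 3 channels (reference Lab) = 4 channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # Output: 2 chrominance channels (a, b), bounded with tanh.
        self.decoder = nn.Sequential(
            nn.Conv2d(128, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 2, kernel_size=3, padding=1), nn.Tanh(),
        )

    def forward(self, target_l, reference_lab):
        # target_l:      (N, 1, H, W) luminance of the grayscale input photo
        # reference_lab: (N, 3, H, W) reference color image in Lab space
        x = torch.cat([target_l, reference_lab], dim=1)
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    net = ExemplarColorizationSketch()
    l = torch.rand(1, 1, 256, 256)    # target luminance
    ref = torch.rand(1, 3, 256, 256)  # reference image
    ab = net(l, ref)                  # predicted chrominance, shape (1, 2, 256, 256)
    print(ab.shape)
```

Swapping in a different reference image changes the predicted chrominance, which is how such a model can produce multiple plausible colorizations of the same grayscale input.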

Cited by 215 publications (237 citation statements) · References 53 publications
“…We further apply the proposed network to the image deraining task, which can also obtain state-of-the-art performance. In the future, we will try more fancy losses used in [6,19] and consider extending to video dehazing like [5].…”
Section: Results
Mentioning, confidence: 99%
“…[LMS16] do not require any additional input. We feed the same reference images into the networks [ZZI*17, HCL*18] and our network.…”
Section: Results
Mentioning, confidence: 99%
“…We compare our method against recent learning-based image colorization methods both quantitatively and qualitatively. The baseline methods include three automatic colorization methods (Iizuka et al. [15], Larsson et al. [16] and Zhang et al. [17]) and one exemplar-based method (He et al. [30]), since these methods are regarded as state-of-the-art. For the quantitative comparison, we test these methods on a 10k subset of the ImageNet dataset, as shown in Table 1.…”
Section: Comparisons
Mentioning, confidence: 99%