2021
DOI: 10.1109/tcsvt.2020.3037688

SCGAN: Saliency Map-Guided Colorization With Generative Adversarial Network

Abstract: Given a grayscale photograph, the colorization system estimates a visually plausible colorful image. Conventional methods often use semantics to colorize grayscale images. However, in these methods, only classification semantic information is embedded, resulting in semantic confusion and color bleeding in the final colorized image. To address these issues, we propose a fully automatic Saliency Map-guided Colorization with Generative Adversarial Network (SCGAN) framework. It jointly predicts the colorization an…
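The abstract is truncated, so the paper's exact loss is not shown here; one common way a jointly predicted saliency map can guide colorization is by weighting the chrominance reconstruction loss toward salient regions, which discourages color bleeding across object boundaries. A minimal sketch under that assumption (the function name and weighting scheme are illustrative, not the paper's actual formulation):

```python
import numpy as np

def saliency_weighted_l1(ab_pred, ab_true, saliency, base_weight=1.0):
    """Hypothetical saliency-guided colorization loss (illustrative only).

    ab_pred, ab_true: (H, W, 2) predicted / ground-truth ab chrominance channels
                      of a Lab-space image (L is the grayscale input).
    saliency:         (H, W) saliency map in [0, 1]; salient pixels receive
                      extra weight on their color-reconstruction error.
    """
    # Broadcast the per-pixel weight over both chrominance channels.
    weight = base_weight + saliency[..., None]
    return float(np.mean(weight * np.abs(ab_pred - ab_true)))
```

With `base_weight=1.0`, a fully salient pixel contributes twice the error of a non-salient one, so the generator is pushed hardest on the foreground objects where bleeding is most visible.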

Cited by 44 publications (34 citation statements).
References 92 publications (180 reference statements).
“…For object recognition performance, Table 2 shows that our method performs better on PQ, SQ, RQ, PQ Th , SQ Th , RQ Th than other image-level image translation methods (i.e., pix2pix [9], MUNIT [7] , TIC-CGAN [19] and BicycleGAN [43]). Although the object-level method (i.e., SCGAN [40]) achieved the best scores in PQ, SQ, RQ, and PQ St , SQ St , RQ St , our method performed best in the PQ Th , SQ Th , RQ Th . This shows that our method can finely separate the unclear object boundary problem in a complex scene with multiple discrepant objects caused by the overlap of things and stuff.…”
Section: Quantitative Results
confidence: 83%
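The panoptic segmentation metrics quoted above factor as PQ = SQ × RQ, where SQ is the mean IoU of matched (true-positive) segment pairs and RQ is an F1-style recognition term. A minimal sketch of that standard computation (the function name and input format are illustrative):

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    """Compute (PQ, SQ, RQ) from per-pair IoUs of matched segments.

    matched_ious: IoU values for (predicted, ground-truth) segment pairs;
                  a pair counts as a true positive iff IoU > 0.5.
    num_fp, num_fn: counts of unmatched predicted / ground-truth segments.
    """
    tp = [iou for iou in matched_ious if iou > 0.5]
    if not tp:
        return 0.0, 0.0, 0.0
    sq = sum(tp) / len(tp)                                   # mean IoU over TPs
    rq = len(tp) / (len(tp) + 0.5 * num_fp + 0.5 * num_fn)   # F1-style recognition
    return sq * rq, sq, rq                                   # PQ = SQ * RQ
```

The thing/stuff variants (PQ^Th, PQ^St) discussed in the excerpt apply the same formula restricted to countable "thing" classes or amorphous "stuff" classes, respectively.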
“…As shown in Figure 7 and Figure 8, we added more experimental results comparing our method with various I2I translation models, including pix2pix [9], MUNIT [7], TIC-CGAN [19], BicycleGAN [43], and SCGAN [40]. The results evaluate image quality and object recognition performance for the translated images on the thermal-to-color translation task using the different methods.…”
Section: Additional Experiments Results
confidence: 99%
“…Neural networks are used since they have also been shown to mirror the behavior and the neuronal architecture of the early primate visual system [ 8 ]. In fact, as neural networks have evolved, they have been used increasingly for this purpose; for instance, Generative Adversarial Networks (GANs) are used in [ 11 ] to color saliency maps.…”
Section: Technical Background and Related Work
confidence: 99%