2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
DOI: 10.1109/iccvw.2017.343

A Learned Representation of Artist-Specific Colourisation

Abstract: The colours used in a painting are determined by artists and the pigments at their disposal. Therefore …

Cited by 8 publications (6 citation statements) · References 20 publications

“…The upscaling procedure is identical for both branches; after several trials with different methodologies, such as transposed convolutions [51], bi-linear resizing and nearest-neighbor upsampling [52], we adopted a sub-pixel convolution layer as explained in detail in [53]. So, for either branch, the last 2D or 3D convolution generates s²·C features in order to produce the final tensors of shape sH × sW × C for the residual sum.…”
Section: A Network Architecture (mentioning)
confidence: 99%
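The sub-pixel (pixel-shuffle) upscaling described in this excerpt can be illustrated with a short sketch. The following is a minimal PyTorch example, not the cited network: the channel counts, kernel size and scale factor are illustrative assumptions; only the pattern of emitting s²·C feature maps and rearranging them into an sH × sW × C output follows the quoted description.

```python
# Minimal sketch of sub-pixel (pixel-shuffle) upscaling: a convolution produces
# s^2 * C channels, which are rearranged into a tensor of spatial size sH x sW
# with C channels. Channel counts, kernel size and scale are assumptions.
import torch
import torch.nn as nn

class SubPixelUpscale(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, scale: int):
        super().__init__()
        # The last convolution emits s^2 * C feature maps ...
        self.conv = nn.Conv2d(in_channels, out_channels * scale ** 2,
                              kernel_size=3, padding=1)
        # ... which PixelShuffle rearranges from (N, s^2*C, H, W) to (N, C, sH, sW).
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.conv(x))

if __name__ == "__main__":
    up = SubPixelUpscale(in_channels=64, out_channels=3, scale=2)
    x = torch.randn(1, 64, 32, 32)
    print(up(x).shape)  # torch.Size([1, 3, 64, 64])
```

A common reason for preferring this layer over transposed convolutions after such trials is that transposed convolutions are prone to checkerboard artefacts, whereas pixel shuffle performs the upscaling as a plain rearrangement of learned feature maps.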
“…Besides enforcing the computation of the convolutional neural features in a perceptual space, the use of L*a*b* space serves the purpose of rebalancing colors, so that they fit into the overall palette of the dataset. In order to improve the color coherence, we smooth the effect of the L1 loss in GANs (namely, its tendency to fill empty spaces with the mean colour of the gamut, which results in inaccurate and desaturated colors) by adding a color rebalancing loss similar to [28, 31].…”
Section: Methods (mentioning)
confidence: 99%
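The colour-rebalancing idea referenced here, weighting the loss so that rare, saturated colours are not washed out toward the gamut mean, can be sketched as follows. This is a hypothetical illustration, not the exact formulation of [28] or [31]: the quantisation into (a*, b*) bins, the mixing parameter `lam` and the function names are all assumptions.

```python
# Hypothetical sketch of a colour-rebalancing weight applied to a pixel-wise
# loss on the ab channels of L*a*b*. Rare colour bins receive larger weights
# so the loss does not collapse toward the desaturated gamut mean.
import torch

def rebalancing_weights(ab_hist: torch.Tensor, lam: float = 0.5) -> torch.Tensor:
    """Inverse-frequency weights over a flattened histogram of quantised (a*, b*) bins."""
    q = ab_hist.numel()
    p = ab_hist / ab_hist.sum()            # empirical colour distribution over bins
    w = 1.0 / ((1.0 - lam) * p + lam / q)  # mix with a uniform prior; rarer bins -> larger weights
    return w / (w * p).sum()               # normalise so the expected weight under p is 1

def rebalanced_l1(pred_ab, target_ab, bin_ids, weights):
    """Per-pixel L1 on the ab channels, scaled by the weight of each pixel's colour bin."""
    per_pixel = (pred_ab - target_ab).abs().sum(dim=1)   # (N, H, W)
    return (weights[bin_ids] * per_pixel).mean()
```

Here `bin_ids` is a per-pixel index into the flattened (a*, b*) histogram; precomputing it for the ground-truth image lets the weighting be applied without any extra network output.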
“…The full mathematical formulation of the color rebalancing loss is given in Equations (1)–(4). The parameter p was chosen as in [31]. The weights for the losses are as follows: ….”
Section: Methods (mentioning)
confidence: 99%
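The excerpt does not preserve the actual weight values or the choice of p, so the snippet below only illustrates the generic pattern of combining the loss terms with scalar weights; the term names and placeholder numbers are assumptions, not the citing paper's settings.

```python
# Placeholder sketch of a weighted sum of loss terms; the real weight values
# are not recoverable from the excerpt.
def total_loss(l_adv, l_l1, l_rebal,
               w_adv: float = 1.0, w_l1: float = 100.0, w_rebal: float = 1.0):
    """Weighted combination of adversarial, L1 and colour-rebalancing terms."""
    return w_adv * l_adv + w_l1 * l_l1 + w_rebal * l_rebal
```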
“…However, some water classes, such as types I, IA and IB, have small attenuation in shallow water (about 1 m to 5 m), so they have little effect on objects. Therefore we set different depth ranges for different water classes: the depth range of water types 9, 7 and 5 is set to [1, 5], that of types 3 and 1 to [1, 15], and that of types I, IA, IB, II and III to [5, 20]. At the same time, we select a random global veiling light A_c ∈ [0, 1].…”
Section: Dataset Construction (mentioning)
confidence: 99%
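The per-class sampling described above maps directly to a small routine. The sketch below follows the depth ranges quoted in the excerpt, but the dictionary keys, the uniform sampling of the depth, and drawing the veiling light A_c per colour channel are our assumptions.

```python
# Sketch of the per-class sampling in the excerpt: each water type gets a depth
# range, and a global veiling light A_c is drawn uniformly from [0, 1]. Drawing
# one value per colour channel is an assumption; names are ours.
import random

DEPTH_RANGES = {
    "9": (1, 5), "7": (1, 5), "5": (1, 5),
    "3": (1, 15), "1": (1, 15),
    "I": (5, 20), "IA": (5, 20), "IB": (5, 20), "II": (5, 20), "III": (5, 20),
}

def sample_scene(water_type: str):
    lo, hi = DEPTH_RANGES[water_type]
    depth = random.uniform(lo, hi)                       # scene depth in metres
    veiling_light = [random.random() for _ in range(3)]  # A_c in [0, 1] per channel
    return depth, veiling_light
```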
“…In [14], Odena concatenates a one-hot class vector with the input vector and maximizes the log-likelihood of the real image and the log-likelihood of the correct class. In [15], conditional instance normalization was proposed to generate images in completely different styles by feeding the generator with class-… In this paper, we propose a water class embedding block (WCEB) to constrain the feature space of every class of underwater images. As shown in Figure 2, we first encode the water class into a one-hot vector.…”
Section: A Water Class Embedding Block (mentioning)
confidence: 99%
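This excerpt contrasts concatenating a one-hot class vector with the input [14] against conditional instance normalization [15], where each class selects its own scale and shift for the normalised features. Below is a minimal sketch of conditional instance normalization only; the feature and class counts are illustrative, and this is not the WCEB of the citing paper.

```python
# Minimal sketch of conditional instance normalization: instance-normalise the
# features, then modulate them with class-specific affine parameters looked up
# by the class index. Sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalInstanceNorm2d(nn.Module):
    def __init__(self, num_features: int, num_classes: int):
        super().__init__()
        # One (gamma, beta) pair per class, stored as embeddings.
        self.gamma = nn.Embedding(num_classes, num_features)
        self.beta = nn.Embedding(num_classes, num_features)
        nn.init.ones_(self.gamma.weight)
        nn.init.zeros_(self.beta.weight)

    def forward(self, x: torch.Tensor, class_idx: torch.Tensor) -> torch.Tensor:
        h = F.instance_norm(x)                                  # per-sample, per-channel normalisation
        g = self.gamma(class_idx).unsqueeze(-1).unsqueeze(-1)   # (N, C, 1, 1)
        b = self.beta(class_idx).unsqueeze(-1).unsqueeze(-1)
        return g * h + b

if __name__ == "__main__":
    cin = ConditionalInstanceNorm2d(num_features=64, num_classes=10)
    x = torch.randn(2, 64, 32, 32)
    y = cin(x, torch.tensor([3, 7]))  # per-sample class indices
    print(y.shape)  # torch.Size([2, 64, 32, 32])
```

Because the class identity only enters through these per-class affine parameters, the same convolutional weights are shared across all classes, which is what lets a single generator produce styles or colour palettes conditioned on a class index.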