2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2020
DOI: 10.1109/cvprw50498.2020.00229

Unsupervised Real Image Super-Resolution via Generative Variational AutoEncoder

Abstract: Benefiting from deep learning, image super-resolution has become one of the most rapidly developing research fields in computer vision. Depending on whether a discriminator is used, a deep convolutional neural network can produce an image with high fidelity or with better perceptual quality. Because ground-truth images are unavailable in real-life settings, people prefer a photo-realistic image with low fidelity to a blurry image with high fidelity. In this paper, we revisit the classic example-based image super-resolution appro…

Cited by 32 publications (23 citation statements)
References 32 publications
“…To better evaluate the visual quality of view rendering than just visualizing the generated images, we used LPIPS [54] to measure the deep feature similarity. It is widely used in image processing tasks [55], [56], [57]. Using HRNet [53], we also estimated the semantic segmentation map from the generated image to compare with the ground truth by averaging pixel accuracy (%).…”
Section: Evaluation Metrics
confidence: 99%
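The statement above evaluates generated images with LPIPS (a learned perceptual distance) and with mean pixel accuracy against a ground-truth segmentation map. The pixel-accuracy part is simple enough to sketch directly; this is an illustrative implementation (function name and array shapes are assumptions, not the cited paper's code):

```python
# Hedged sketch: mean pixel accuracy between a predicted segmentation map
# and the ground truth, one plain reading of the "averaging pixel accuracy (%)"
# metric mentioned above. Names and shapes here are illustrative.
import numpy as np

def pixel_accuracy(pred, gt):
    """Fraction of pixels whose predicted class label matches the ground truth."""
    pred = np.asarray(pred)
    gt = np.asarray(gt)
    return float((pred == gt).mean())

# Toy 2x2 label maps: three of four pixels agree.
pred = np.array([[0, 1], [2, 2]])
gt   = np.array([[0, 1], [2, 0]])
print(pixel_accuracy(pred, gt) * 100)  # 75.0
```

LPIPS, by contrast, compares deep-network feature activations of the two images rather than raw pixels, which is why the authors pair it with a pixel-level metric.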
“…Instead, we embed the variation module into the transformation model to map features into a linear space spanned by mixture models. The VAE marginal likelihood is defined as P(X) = ∫ P(X|z)P(z) dz, where X is the input and z is sampled from the latent space Z [24,26,27]. To regularize the latent space, we use the Kullback-Leibler (KL) divergence that measures how close the posterior is to a normal distribution.…”
Section: Variation Module
confidence: 99%
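For a diagonal-Gaussian posterior q(z|X) = N(μ, σ²) and a standard-normal prior, the KL regularizer mentioned above has a closed form: KL = ½ Σ(σ² + μ² − 1 − log σ²). A minimal sketch, assuming the usual (μ, log σ²) parameterization (the names `mu` and `log_var` are illustrative):

```python
# Hedged sketch of the KL regularizer described in the quote: closed-form
# KL divergence between a diagonal Gaussian N(mu, exp(log_var)) and the
# standard normal prior N(0, I), summed over latent dimensions.
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, exp(log_var)) || N(0, I) ) = 0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2)."""
    mu = np.asarray(mu, dtype=float)
    log_var = np.asarray(log_var, dtype=float)
    return float(0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var))

# When the posterior already matches the prior, the divergence is zero.
print(kl_to_standard_normal([0.0, 0.0], [0.0, 0.0]))  # 0.0
```

In training, this term is added to the reconstruction loss, pulling the latent distribution toward the prior so that samples from N(0, I) decode to plausible outputs.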
“…4). Recently, there are some new attempts for blind SR. Several CycleGAN [47] based methods [4,40,21,20] learn from unpaired LR-HR images, but they are more difficult to train. ZSSR [29] explores the zero-shot solution for the first time, where the CNN learns the mapping from the LR image and its downscaled versions (self-supervision).…”
Section: Introduction
confidence: 99%
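The ZSSR idea mentioned in the quote, learning a mapping from a downscaled copy of the LR image back to the LR image itself, can be sketched in a few lines. The average-pooling downscaler below stands in for ZSSR's actual downscaling kernel; it and the variable names are assumptions for illustration:

```python
# Hedged sketch of ZSSR-style self-supervision: downscale the LR image itself
# to build (input, target) training pairs, so a network can learn the
# LR-son -> LR mapping without any external HR data. Plain 2x2 average
# pooling stands in for the paper's real downscaling kernel (an assumption).
import numpy as np

def downscale2x(img):
    """2x downscale by averaging non-overlapping 2x2 blocks (H and W must be even)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

lr = np.arange(16, dtype=float).reshape(4, 4)  # stand-in for the real LR image
lr_son = downscale2x(lr)                       # network input during training
# Training pair: predict `lr` (target) from `lr_son` (input). At test time,
# the trained network is applied to `lr` itself to produce the SR output.
print(lr_son.shape)  # (2, 2)
```

This is what makes the approach "zero-shot": the only training signal is the internal recurrence of patterns inside the single test image, which is also why it sidesteps the unpaired-training difficulties of the CycleGAN-based methods cited above.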