2019
DOI: 10.1049/iet-ipr.2018.5592
Fast generative adversarial networks model for masked image restoration

Abstract: Conventional masked image restoration algorithms all utilise the correlation between the masked region and its neighbouring pixels, which does not work well for larger masked regions. The latest research utilises a Generative Adversarial Networks (GANs) model to generate better results for larger masked images, but does not work well for complex masked regions. To obtain a better result for the complex masked region, the authors propose a novel fast GANs model for masked image restoration. The method us…

Cited by 9 publications (6 citation statements)
References 18 publications
“…For convenience of calculation, we assume the mathematical difference q between the visible watermarked image and the real image ranges from 0.01 to 1. Inspired by [11], the pixel difference calculated by the GANs model is y_GANs = log(q × 0.1) − log((1 − q) × 0.1 + 1). Similarly, the pixel difference for the DCGAN model is y_DCGAN = y_GANs. Inspired by [11], the pixel difference calculated by the WGAN model is y_WGAN = q, and similarly the pixel difference for the pix2pix model is y_pix2pix = y_WGAN. Inspired by [11], the pixel difference calculated by the LSGAN model is y_LSGAN = q². According to (4), the pixel difference calculated by the VWGAN model can be defined as y_VWGAN = 2q − q². The visible watermarked images and the real images are very similar in most conditions; in other words, the mathematical differences are minimal in most conditions.…”
Section: Proposed Methods
confidence: 99%
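The pixel-difference curves quoted in the citation statement above (y_WGAN = q, y_LSGAN = q², y_VWGAN = 2q − q²) can be compared numerically. This is a minimal sketch of that comparison; the function names and the sample points are illustrative, not from the cited paper:

```python
# Pixel-difference curves as quoted in the citation statement, for a
# mathematical difference q between 0.01 and 1.

def y_wgan(q):
    return q

def y_lsgan(q):
    return q ** 2

def y_vwgan(q):
    # Equivalent to 1 - (1 - q)**2, so it grows faster than y_LSGAN
    # for small q and only meets it at q = 1.
    return 2 * q - q ** 2

qs = [i / 100 for i in range(1, 101)]  # q from 0.01 to 1.00
for q in (0.1, 0.5, 1.0):
    print(f"q={q:.1f}  WGAN={y_wgan(q):.3f}  "
          f"LSGAN={y_lsgan(q):.3f}  VWGAN={y_vwgan(q):.3f}")

# For every q in (0, 1], y_VWGAN >= y_LSGAN: the VWGAN difference
# penalises small mismatches more strongly than the LSGAN one.
assert all(y_vwgan(q) >= y_lsgan(q) for q in qs)
```

This illustrates why, for nearly identical images (small q), the VWGAN difference stays larger than the LSGAN difference, which is consistent with the statement that the mathematical differences are minimal in most conditions.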
“…As they are unable to locate the watermarks, they cannot be used for watermark removal. Recently, a novel fast GANs model [11] for complex masked image restoration was proposed. Its neighbouring network requires the location of the watermark in order to remove the watermark to the greatest extent.…”
Section: Related Work
confidence: 99%
“…For example, we introduce two parsing networks into our MRGAN model to find more parsing differences between the generated image and the real image. The model most similar to ours is the fast GANs model [23]. Although the fast GANs model, unlike ours, proposes a novel neighbouring module, it focuses on the restoration of locally masked images.…”
Section: Related Work
confidence: 99%
“…Different from the FACEGAN model, the parsing networks are used to compare the similarity of two images, not to enhance image quality with the pre-trained VGG19 [26] model. Different from the fast GANs model [23], for global mosaic removal we compare the image difference at three convolutional layers as well as at all the convolutional layers. Inspired by [27, 28], features are extracted at convolutional layers conv3-4, conv4-4, and conv5-4.…”
Section: MRGAN Model Design
confidence: 99%
“…Generative adversarial networks (GANs) [20] are among the most promising deep learning methods of recent years for modelling complex distributions. They are widely used in the field of computer vision [21][22][23][24]. GAN-based shadow removal methods have recently achieved good results [25].…”
Section: Introduction
confidence: 99%