Efficient texture-aware multi-GAN for image inpainting
2021
DOI: 10.1016/j.knosys.2021.106789

Cited by 31 publications (8 citation statements)
References 30 publications
“…Jam et al. 17 proposed using the Wasserstein-perceptual loss function to preserve image color and maintain the realism of the restored image. Pertinently, Zhang et al. 18 independently proposed the WGAN-GP, which was introduced into the global D and local D. Building upon previous work, Hedjazi and Genc 19 proposed optimizing the parameters of four progressively efficient generators and Ds in an end-to-end training approach. Xu et al. 20 proposed generating adversarial strategies using reconstructive sampling and multiple granularities.…”
Section: GAN-based Methods (mentioning, confidence: 99%)
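The WGAN-GP mentioned in this citation statement stabilizes adversarial training by penalizing the critic's gradient norm away from 1 on random interpolates between real and generated samples. Below is a minimal NumPy sketch of that penalty term; the closed-form toy critic (D(x) = ½‖x‖²) is an assumption made so the example needs no autodiff framework, where a real inpainting critic would be a convolutional network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy critic D(x) = 0.5 * ||x||^2, so grad_x D(x) = x.
# (A real critic would be a conv net; this keeps the sketch dependency-free.)
def critic_grad(x):
    return x

def gradient_penalty(x_real, x_fake, lam=10.0):
    """WGAN-GP term: lam * (||grad_x D(x_hat)||_2 - 1)^2, where x_hat is a
    random interpolation between a real and a generated sample."""
    eps = rng.uniform()
    x_hat = eps * x_real + (1.0 - eps) * x_fake
    grad = critic_grad(x_hat)
    return lam * (np.linalg.norm(grad) - 1.0) ** 2

x_real = rng.normal(size=8)   # stand-in for a real image patch
x_fake = rng.normal(size=8)   # stand-in for a generated patch
gp = gradient_penalty(x_real, x_fake)
print(gp >= 0.0)              # the penalty is never negative
```

In practice this term is added to the critic's loss alongside the Wasserstein estimate, and the interpolation is drawn per-sample in each batch.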
“…Pertinently, Zhang et al. 18 independently proposed the WGAN-GP, which was introduced into the global D and local D. Building upon previous work, Hedjazi and Genc 19 proposed optimizing the parameters of four progressively efficient generators and Ds in an end-to-end training approach. Xu et al. 20 …”
Section: Related Work (mentioning, confidence: 99%)
“…For the text and image modalities, the diffusion model [26–28] can learn the denoising process while allowing conditional guidance to flexibly adapt to semantic reconstruction. The audio and haptic modalities cannot be processed directly, so the authors propose GANs [20, 29, 30] to quickly reconstruct their spectra into new signals [19, 31, 32]. Hence, generative AI improves the efficiency of the semantic codec [33–35], the accuracy of semantic transmission, and the creativity of semantic reconstruction.…”
Section: Introduction (mentioning, confidence: 99%)
“…Among them, methods based on Generative Adversarial Nets [3] (GANs) had become the mainstream in the field of image repair [4]. The GAN-based methods transform the image repair problem into condition-based adversarial generation [5, 6]. Such methods usually take the damaged image and the mask of the calibrated damaged area as conditional input, use an autoencoder network as the generator to reconstruct the content of the damaged area, combine a discriminator network for adversarial training, and finally obtain a complete image output [7].…”
Section: Introduction (mentioning, confidence: 99%)
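The conditional-input convention described in this statement (corrupted image plus binary hole mask in, known pixels preserved on the way out) can be sketched in a few lines of NumPy. The function names and the 2-channel stacking below are illustrative assumptions, not the paper's actual interface.

```python
import numpy as np

def compose_input(image, mask):
    """Build the generator's conditional input: the corrupted image
    stacked with the binary mask (mask == 1 marks damaged pixels)."""
    corrupted = image * (1.0 - mask)   # zero out the damaged region
    return np.stack([corrupted, mask]) # 2-channel conditional input

def compose_output(image, generated, mask):
    """Keep known pixels; take generated content only inside the hole."""
    return image * (1.0 - mask) + generated * mask

image = np.arange(16.0).reshape(4, 4)           # toy 4x4 "image"
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1.0   # square hole in the center
cond = compose_input(image, mask)               # shape (2, 4, 4)
gen = np.full((4, 4), -1.0)                     # stand-in generator output
out = compose_output(image, gen, mask)
print(out[0, 0], out[1, 1])                     # known pixel kept, hole filled
```

The final compositing step is what the quoted passage calls "finally obtain a complete image output": the discriminator is then trained on such composites against ground-truth images.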