2019 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2019.8851926
Boosted GAN with Semantically Interpretable Information for Image Inpainting

Abstract: Image inpainting aims at restoring missing regions of corrupted images, which has many applications such as image restoration and object removal. However, current GAN-based inpainting models fail to explicitly consider the semantic consistency between restored images and original images. For example, given a male image with the image region of one eye missing, current models may restore it with a female eye. This is due to the ambiguity of GAN-based inpainting models: these models can generate many possible restora…

Cited by 12 publications (6 citation statements)
References 37 publications
“…The generative adversarial network (GAN) was introduced by Goodfellow et al [5] to produce realistic images under certain conditions. GANs have attracted substantial attention and have been studied in many tasks [17], such as image synthesis [5,13,14,21], text-to-image translation [22,36], and image inpainting [11,15,16]. In this work, we focus on talking head video generation with GAN guided by 3D facial depth maps learned without any ground-truth depths.…”
Section: Related Work
Mentioning, confidence: 99%
“…For the optimization losses, we set λ_P = 10, λ_G = 1, λ_E = 10, and λ_D = 10. We set the number of keypoints in DaGAN to 15. In the training stage, we first train our face depth network using consecutive frames from videos in VoxCeleb1, and we fix it during the training of the whole deep generation framework.…”
Section: Implementation Details
Mentioning, confidence: 99%
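The weighted multi-term objective described in the quote above can be sketched as a plain weighted sum. The exact definitions of the four loss terms are in the cited DaGAN paper; the sketch below treats them as opaque scalars, and the argument names are illustrative, not the authors' code:

```python
# Hypothetical sketch of a weighted multi-term loss with the stated weights
# (lambda_P = 10, lambda_G = 1, lambda_E = 10, lambda_D = 10).
def total_loss(l_p, l_g, l_e, l_d,
               lam_p=10.0, lam_g=1.0, lam_e=10.0, lam_d=10.0):
    """Weighted sum of the four scalar loss terms."""
    return lam_p * l_p + lam_g * l_g + lam_e * l_e + lam_d * l_d

# Example: total_loss(0.5, 0.2, 0.1, 0.3) = 10*0.5 + 1*0.2 + 10*0.1 + 10*0.3
```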
“…Different from conventional image inpainting approaches, deep learning-based image inpainting models can generate more visually plausible details or fill large missing regions with new contents that never exist in the input image [Pathak et al, 2016; Iizuka et al, 2017; Yu et al, 2018; Liu et al, 2018; Li et al, 2019b; Liu et al, 2019; Li et al, 2019a; Li et al, 2020; Zeng et al, 2021].…”
Section: Image Inpainting
Mentioning, confidence: 99%
“…These methods often suffer from low generation quality, especially when dealing with complicated scenes or large missing regions [8,19]. Image inpainting methods [8,14,15,16,19,21,27,30,31,32,33] with deep learning techniques have attracted wide attention. Pathak et al [19] introduce the Context Encoder (CE) model, where a convolutional encoder-decoder network is trained with the combination of an adversarial loss [6] and a reconstruction loss.…”
Section: Image Inpainting
Mentioning, confidence: 99%
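The Context Encoder objective mentioned above combines a reconstruction loss on the missing region with an adversarial term. A minimal NumPy sketch of that combination follows; this is an illustrative toy, not Pathak et al.'s implementation, and the `lam_rec`/`lam_adv` weights are the commonly reported 0.999/0.001 split from the CE paper (treat them as assumptions):

```python
import numpy as np

def reconstruction_loss(pred, target, mask):
    # L2 loss restricted to the missing (masked) region
    return float(np.mean(mask * (pred - target) ** 2))

def adversarial_loss(d_fake):
    # Generator-side loss -log D(G(x)); d_fake holds discriminator
    # outputs in (0, 1), with a small epsilon for numerical safety.
    return float(-np.mean(np.log(d_fake + 1e-8)))

def joint_loss(pred, target, mask, d_fake, lam_rec=0.999, lam_adv=0.001):
    # Reconstruction is weighted far more heavily than the adversarial term.
    return (lam_rec * reconstruction_loss(pred, target, mask)
            + lam_adv * adversarial_loss(d_fake))
```

The heavy weighting toward reconstruction keeps the fill globally consistent with the surrounding image, while the small adversarial term sharpens local texture.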