2018
DOI: 10.1007/978-3-030-01252-6_6

Image Inpainting for Irregular Holes Using Partial Convolutions

Fig. 1. Masked images and corresponding inpainted results using our partial-convolution based network.

Abstract: Existing deep learning based image inpainting methods apply a standard convolutional network over the corrupted image, with convolutional filter responses conditioned on both valid pixels and the substitute values in the masked holes (typically the mean value). This often leads to artifacts such as color discrepancy and blurriness. Post-processing is usually used to reduce such artifacts, but are…
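The core operation the abstract describes, conditioning each filter response only on valid pixels and renormalizing by the fraction of the window that is valid, can be sketched as follows. This is a minimal single-channel NumPy illustration, not the authors' multi-channel implementation; the function name, the stride-1 "valid" padding, and the handling of all-hole windows are assumptions for the sake of a self-contained example.

```python
import numpy as np

def partial_conv2d(image, mask, weight, bias=0.0):
    """Single-channel partial convolution (stride 1, 'valid' padding).

    Masked (hole) pixels are zeroed out of each window, the raw response is
    re-scaled by window_size / num_valid_pixels, and the updated mask marks a
    position valid if its window contained at least one valid input pixel.
    """
    kh, kw = weight.shape
    H, W = image.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    out = np.zeros((out_h, out_w))
    new_mask = np.zeros((out_h, out_w))
    window_size = kh * kw
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]
            m = mask[i:i + kh, j:j + kw]
            valid = m.sum()
            if valid > 0:
                # Response uses valid pixels only, renormalized by coverage.
                out[i, j] = (patch * m * weight).sum() * (window_size / valid) + bias
                new_mask[i, j] = 1.0
            else:
                # Window lies entirely inside the hole: emit bias only.
                out[i, j] = bias
    return out, new_mask
```

With an all-ones mask the renormalization factor is 1 and the operation reduces to an ordinary convolution; as layers stack, the updated mask shrinks the hole, which is how the network progressively fills irregular regions.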


Cited by 1,548 publications (1,814 citation statements)
References: 39 publications
“…Deep learning methods are a powerful tool to address these limitations; deep learning models have been widely successful for denoising speech [16] and for inpainting (filling missing data) in images [17,18]. An earlier study [19] demonstrated that simple convolutional neural networks are successful at denoising or peak calling from ChIP-seq data.…”
Section: Discussion
Confidence: 99%
“…We only take the feature map from the first convolutional layer for perceptual loss and style loss, since the network is trained for natural images and the higher-level features are not applicable to our context. The same weighting for loss terms is used as in [7]. The baseline MRI model was trained for 4,000 epochs using a mini-batch of eight 128 × 128 training images.…”
Section: Baseline MRI Model
Confidence: 99%
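The single-layer perceptual and style losses mentioned in the statement above might be sketched as follows, assuming the common L1-on-features and L1-on-Gram-matrices formulation used by [7]; the helper names and normalization constants here are illustrative assumptions, not taken from the cited work.

```python
import numpy as np

def gram_matrix(feat):
    """Gram matrix of a (C, H, W) feature map, normalized by C*H*W."""
    C, H, W = feat.shape
    f = feat.reshape(C, H * W)
    return f @ f.T / (C * H * W)

def perceptual_and_style_loss(feat_out, feat_gt):
    """Losses computed from a single convolutional layer's feature maps:
    L1 distance on the features themselves (perceptual) and L1 distance
    on their Gram matrices (style)."""
    perceptual = np.abs(feat_out - feat_gt).mean()
    style = np.abs(gram_matrix(feat_out) - gram_matrix(feat_gt)).mean()
    return perceptual, style
```

Restricting both losses to the first layer keeps only low-level texture statistics, which matches the statement's rationale that higher-level natural-image features do not transfer to MRI data.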
“…turned off as suggested in [7]. Common image augmentations, including shifting, left-right flipping, and gray value variations [10], were applied.…”
Section: Baseline MRI Model
Confidence: 99%
“…There is likely room to improve our work here, particularly in exploring further the potential of batch augmentation [5], in developing better saliency-based approaches to occlusion augmentation, and in elucidating further the interaction between, and impact of, dataset and model complexity for effective occlusion augmentation. Further research could also be done on other kinds of occlusions, such as blur, random noise, or even ignoring regions [9]. In conclusion, in contrast to other regularization techniques that require architectural changes, we present a simple paradigm for making occlusions effective on ImageNet for sufficiently capable models (e.g., ResNet50) that can be easily added to existing training paradigms.…”
Section: Results
Confidence: 99%