Proceedings of the 27th ACM International Conference on Multimedia 2019
DOI: 10.1145/3343031.3351002

Deep Fusion Network for Image Completion

Figure 1: Comparison results between DFNet and the previous state-of-the-art method EdgeConnect [21]. In the first image of each group, white pixels represent the unknown region. With fusion blocks and multi-scale constraints, DFNet achieves a smoother transition (1st case), more natural texture (2nd case), and more consistent structure (3rd case).

Abstract: Deep image completion usually fails to harmonically blend the restored image into existing content, especially in the boundary area. This paper handles with thi…

Cited by 74 publications (60 citation statements). References 32 publications.

“…The deconvolution operations in GMCNN lead to structure and color distortion. Although DFNet [38] performs well, it lacks relevance between the hole and background regions, such as the symmetry of eyes, as shown in Figures 5e and 6f. The images completed by the proposed method PIC-EC and the state-of-the-art method EdgeConnect [23] are closer to the ground truth than images from other methods.…”
Section: Qualitative Comparison (mentioning)
confidence: 99%
“…Yi et al [37] propose the GMCNN (Generative Multi-column Convolutional Neural Networks) model to synthesize different image components in a parallel manner within one stage. Xin et al [38] use a fusion block to generate a flexible alpha composition map, providing a smooth fusion, and an attention map that makes the network focus more on the unknown regions.…”
Section: Related Work (mentioning)
confidence: 99%
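The alpha-composition mechanism mentioned in the statement above can be illustrated with a short sketch. The PyTorch snippet below is a minimal, hedged interpretation rather than the authors' released code: the module name FusionBlock, the layer widths, and the sigmoid-bounded alpha map are assumptions made for illustration. It predicts a raw completion and a per-pixel alpha map from a decoder feature map, then blends the raw prediction with the resized input image.

```python
# Minimal sketch of an alpha-composition fusion block (assumed layout, not the
# authors' exact code). Given a decoder feature map, it predicts a raw RGB
# completion and a per-pixel alpha map, then blends the raw prediction with the
# input image so known pixels stay close to the original content.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionBlock(nn.Module):
    def __init__(self, feat_channels: int, img_channels: int = 3):
        super().__init__()
        # Raw completion predicted from decoder features.
        self.to_raw = nn.Sequential(
            nn.Conv2d(feat_channels, img_channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        # Alpha map predicted from features concatenated with the resized input.
        self.to_alpha = nn.Sequential(
            nn.Conv2d(feat_channels + img_channels, img_channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        # Resize the input image to the feature resolution.
        image = F.interpolate(image, size=feat.shape[-2:], mode="bilinear",
                              align_corners=False)
        raw = self.to_raw(feat)
        alpha = self.to_alpha(torch.cat([feat, image], dim=1))
        # Per-pixel alpha composition of the raw prediction and the input.
        return alpha * raw + (1.0 - alpha) * image


if __name__ == "__main__":
    block = FusionBlock(feat_channels=64)
    feat = torch.randn(1, 64, 128, 128)
    image = torch.rand(1, 3, 256, 256)
    print(block(feat, image).shape)  # torch.Size([1, 3, 128, 128])
```

In this reading, pixels where alpha is close to 0 are copied from the known content, which is what produces the smooth transition across the hole boundary.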
“…In [13], a model called Deep Fusion Network (DFNet) has been proposed to address non-harmonic region boundaries. DFNet is a U-Net architecture embedded with the introduced fusion blocks, which are applied in a multi-scale fashion.…”
Section: B. Learning Approaches (mentioning)
confidence: 99%
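The multi-scale idea can be sketched as follows: fusion-style heads attached to several decoder levels each produce a completion at their own resolution, and every output is compared against a correspondingly resized target. The ScaleHead module and the plain L1 losses below are illustrative assumptions, not DFNet's exact components.

```python
# Minimal sketch of a multi-scale constraint (assumed form, not the authors'
# exact losses): heads attached to several decoder levels each produce a
# completion at their own resolution, and every output is compared with a
# correspondingly downsampled ground truth.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScaleHead(nn.Module):
    """Stand-in for a fusion block: maps decoder features to an RGB image."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, 3, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.conv(feat))


def multi_scale_l1(feats, heads, target):
    """Sum of L1 losses between each scale's output and the resized target."""
    loss = 0.0
    for feat, head in zip(feats, heads):
        out = head(feat)
        ref = F.interpolate(target, size=out.shape[-2:], mode="bilinear",
                            align_corners=False)
        loss = loss + F.l1_loss(out, ref)
    return loss


if __name__ == "__main__":
    # Pretend decoder features at 64x64, 128x128 and 256x256 resolution.
    feats = [torch.randn(1, c, s, s) for c, s in [(128, 64), (64, 128), (32, 256)]]
    heads = nn.ModuleList([ScaleHead(c) for c in (128, 64, 32)])
    target = torch.rand(1, 3, 256, 256)
    print(multi_scale_l1(feats, heads, target).item())
```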
“…Different from single-image raindrop removal methods, which have limited generalization capability, the image pairs used to train the image inpainting task in a supervised learning paradigm can be easily generated. This allows deep learning based image inpainting methods [10], [11] to use million-level training datasets and to work well, even on non-homologous images, with pre-trained models. We therefore believe that the combination of light field images and image inpainting is helpful for the raindrop removal task in general scenarios.…”
Section: Collected Image (mentioning)
confidence: 99%
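As a concrete illustration of how such training pairs can be generated from any clean image collection, the sketch below cuts a random rectangular hole out of an image and keeps the binary mask. The function name, hole-size bounds, and the rectangular-hole choice are assumptions for illustration; free-form masks are also common in practice.

```python
# Minimal sketch of generating supervised inpainting pairs from clean images
# (an assumed recipe, not tied to a specific paper): cut a random rectangular
# hole out of a clean image and keep the binary mask as supervision.
import numpy as np


def make_inpainting_pair(image: np.ndarray, rng: np.random.Generator,
                         max_hole_frac: float = 0.5):
    """Return (masked_image, mask, target) from one clean image of shape (H, W, 3)."""
    h, w = image.shape[:2]
    hole_h = rng.integers(h // 8, max(h // 8 + 1, int(h * max_hole_frac)))
    hole_w = rng.integers(w // 8, max(w // 8 + 1, int(w * max_hole_frac)))
    top = rng.integers(0, h - hole_h + 1)
    left = rng.integers(0, w - hole_w + 1)

    mask = np.zeros((h, w, 1), dtype=np.float32)      # 1 = unknown region
    mask[top:top + hole_h, left:left + hole_w] = 1.0
    masked = image.astype(np.float32) * (1.0 - mask)  # zero out the hole
    return masked, mask, image


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.random((256, 256, 3)).astype(np.float32)
    masked, mask, target = make_inpainting_pair(clean, rng)
    print(masked.shape, mask.shape, target.shape)
```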
“…Nazeri et al [10] proposed a two-stage adversarial model that synthesizes edges of missing regions with an edge generator, then feeds the predicted edges and the incomplete color image to another generator for final inpainting. Moreover, Hong et al [11] recently achieved good image completion performance using a U-Net architecture with fusion blocks embedded in the last few decoder layers.…”
Section: B. Image Inpainting (mentioning)
confidence: 99%
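The two-stage wiring described in this statement can be sketched as follows. The tiny stand-in CNNs and the input channel layout below are assumptions used only to show the data flow from the edge generator to the inpainting generator; they are not the networks of [10].

```python
# Minimal sketch of a two-stage, edge-guided completion pipeline in the spirit
# of [10] (assumed wiring, not the authors' networks): stage one predicts edges
# for the missing region, stage two inpaints the color image conditioned on
# those edges. Both generators are placeholder CNNs here.
import torch
import torch.nn as nn


def tiny_cnn(in_ch: int, out_ch: int) -> nn.Sequential:
    """Placeholder generator used only to show the data flow."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, out_ch, 3, padding=1), nn.Sigmoid(),
    )


class TwoStageInpainter(nn.Module):
    def __init__(self):
        super().__init__()
        # Stage 1: masked gray image + masked edges + mask -> full edge map.
        self.edge_generator = tiny_cnn(in_ch=3, out_ch=1)
        # Stage 2: masked RGB + predicted edges + mask -> completed RGB image.
        self.inpaint_generator = tiny_cnn(in_ch=5, out_ch=3)

    def forward(self, masked_rgb, masked_gray, masked_edges, mask):
        edges = self.edge_generator(torch.cat([masked_gray, masked_edges, mask], dim=1))
        completed = self.inpaint_generator(torch.cat([masked_rgb, edges, mask], dim=1))
        return completed, edges


if __name__ == "__main__":
    net = TwoStageInpainter()
    rgb = torch.rand(1, 3, 128, 128)
    gray, edge, mask = (torch.rand(1, 1, 128, 128) for _ in range(3))
    out, pred_edges = net(rgb, gray, edge, mask)
    print(out.shape, pred_edges.shape)
```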