2019
DOI: 10.48550/arxiv.1909.04686
Preprint

Disentangled Image Matting


Cited by 5 publications (7 citation statements) | References 33 publications
“…Our method achieves new state-of-the-arts on AIM, AlphaMatting benchmarks and produce impressive visual results on real-world high-resolution images. [1], SampleNet [35], GCA Matting [24] and HDMatt (Ours).…”
Section: Discussion
confidence: 99%
“…In the first stage, we pre-trained a ResNet-34 classification model on ImageNet [10]. We follow the same training configuration as the public PyTorch implementation 1 . Then all layers from the model before the fully-connected layer were used as our matting encoder.…”
Section: Methods
confidence: 99%
“…Given an input image and a trimap indicating the background, foreground and unknown regions, image matting is applied to estimate the alpha matte inside the unknown region to clearly separate the foreground from the background. Recently, many deep-learning-based methods (Xu et al 2017;Lu et al 2019;Hou and Liu 2019;Cai et al 2019) have achieved significant improvements over traditional methods (Wang and Cohen 2007;Gastal and Oliveira 2010;Sun et al 2004;Levin, Lischinski, and Weiss 2007;Grady et al 2005). These deep learning methods (Xu et al 2017;Lu et al 2019;Hou and Liu 2019) mostly take the whole images and the associated whole trimaps as the inputs, and employ deep neural networks such as VGG (Simonyan and Zisserman 2014) and Xception (Chollet 2017) as their network backbones.…”
Section: Introduction
confidence: 99%
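The Introduction statement above describes the standard matting setup: a trimap marks known foreground, known background, and an unknown band where the alpha matte must be estimated. A minimal sketch of that compositing model (I = αF + (1−α)B) and the role of the trimap is below; the function names and pixel values are purely illustrative, not from the paper.

```python
# Illustrative sketch of the matting compositing model, not the paper's method.
# A pixel's observed color I is a blend of foreground F and background B:
#   I = alpha * F + (1 - alpha) * B

def composite(alpha, fg, bg):
    """Blend a foreground RGB pixel over a background RGB pixel by alpha."""
    return tuple(alpha * f + (1 - alpha) * b for f, b in zip(fg, bg))

def resolve_alpha(trimap_value, predicted_alpha):
    """Trimap convention (illustrative): 255 = known foreground (alpha 1),
    0 = known background (alpha 0), 128 = unknown region, where a matting
    model must estimate alpha."""
    if trimap_value == 255:
        return 1.0
    if trimap_value == 0:
        return 0.0
    return predicted_alpha  # only the unknown band needs estimation

# A half-transparent pixel in the unknown region blends both colors equally.
pixel = composite(resolve_alpha(128, 0.5), fg=(200, 100, 50), bg=(0, 0, 0))
```

Deep matting networks such as those cited above predict `predicted_alpha` per pixel inside the unknown region, after which this compositing identity lets the foreground be separated cleanly.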