2019
DOI: 10.1109/tmm.2019.2920613
BranchGAN: Unsupervised Mutual Image-to-Image Transfer With A Single Encoder and Dual Decoders

Cited by 47 publications (12 citation statements)
References 26 publications
“…In this paper, we considerably improve the performance of the attention-based encoder-decoder method on image-based table recognition with a novel EDD architecture. Our model differs from other existing EDD architectures [32], [33], where the dual decoders are independent from each other. In our model, the cell decoder is triggered only when the structure decoder generates a new cell.…”
Section: Related Work (mentioning, confidence: 99%)
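The trigger mechanism this citing paper describes — the cell decoder runs only when the structure decoder opens a new cell — can be sketched in pure Python. All names here are hypothetical stand-ins; the actual EDD model uses attention-based recurrent decoders over image features, not these toy functions:

```python
# Toy sketch of an encoder-dual-decoder (EDD) loop in which the cell
# decoder is invoked only when the structure decoder emits a new-cell token.

def structure_decoder(features):
    # Stand-in for the real attention-based structure decoder: yields
    # HTML-like structure tokens for a one-row, two-cell table.
    yield from ["<table>", "<tr>", "<td>", "</td>",
                "<td>", "</td>", "</tr>", "</table>"]

def cell_decoder(features, cell_index):
    # Stand-in for the real cell-content decoder: returns one cell's text.
    return f"cell-{cell_index}"

def recognize_table(features):
    tokens, cells = [], []
    for tok in structure_decoder(features):
        tokens.append(tok)
        if tok == "<td>":  # trigger: the structure decoder opened a new cell
            cells.append(cell_decoder(features, len(cells)))
    return tokens, cells

tokens, cells = recognize_table(features=None)
print(cells)  # -> ['cell-0', 'cell-1']
```

The point of the coupling is that cell decoding is synchronized with structure decoding, rather than the two decoders running independently as in the dual-decoder architectures the authors contrast with.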
“…Some methods, e.g. Pix2Pix [36], AutoGAN [37] and others [38], [39], have reported remarkably good style transfer results.…”
Section: Deep Learning-based Photorealistic Style Transfer Methods (mentioning, confidence: 99%)
“…Unlike DRIT++, GMM-UNIT proposed a continuous domain encoding that allows zero- or few-shot image generation. Zhou et al [91] proposed an unsupervised mutual image-to-image translation model, called BranchGAN, based on a single-encoder-dual-decoder architecture for two domains. BranchGAN transfers an image from one domain to the other by exploiting the shared distribution of the two domains through the same encoder.…”
Section: Unsupervised Translation With Autoencoder-based Models (mentioning, confidence: 99%)
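The single-encoder-dual-decoder idea summarized in this citation can be illustrated with a minimal sketch. Toy numeric "images" and hand-written linear maps stand in for the convolutional encoder and decoders, and every name below is hypothetical — this shows only the data flow, not BranchGAN's actual networks or training losses:

```python
# Toy sketch of BranchGAN-style mutual translation: one shared encoder maps
# both domains into a common latent code; each domain has its own decoder.
# Cross-domain transfer = encode with the shared encoder, then decode with
# the *other* domain's decoder.

def shared_encoder(x):
    # Stand-in for the shared encoder: map into a domain-agnostic code.
    return [v / 2.0 for v in x]

def decoder_a(z):
    # Decoder for domain A (e.g. photos): inverts the encoder's mapping.
    return [v * 2.0 for v in z]

def decoder_b(z):
    # Decoder for domain B (e.g. paintings): a different mapping, here a
    # constant shift that marks the output as domain B.
    return [v * 2.0 + 1.0 for v in z]

x_a = [1.0, 2.0, 3.0]      # an "image" from domain A
z = shared_encoder(x_a)    # code drawn from the shared distribution
x_aa = decoder_a(z)        # within-domain reconstruction, A -> A
x_ab = decoder_b(z)        # cross-domain transfer, A -> B

print(x_aa)  # -> [1.0, 2.0, 3.0]
print(x_ab)  # -> [2.0, 3.0, 4.0]
```

Because the encoder is shared, both decoders consume codes from the same latent distribution, which is what lets a single encoding be rendered in either domain.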