2017
DOI: 10.48550/arxiv.1707.06873
Preprint

Semantic Image Synthesis via Adversarial Learning

Cited by 12 publications (10 citation statements)
References: 0 publications
“…[205,206,207]. These can also be used for biological image synthesis [208,209] and text-to-image synthesis [210,211,212]. Recently, a group of researchers from NVIDIA, the MGH & BWH Center for Clinical Data Science in Boston, and the Mayo Clinic in Rochester [213] designed a clever approach to generate synthetic abnormal MRI images with brain tumors by training a GAN based on pix2pix using two publicly available data sets of brain MRI (ADNI and the BRATS'15 Challenge, and later also the Ischemic Stroke Lesion Segmentation ISLES'2018 Challenge).…”
Section: Image Synthesis
confidence: 99%
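
The approach quoted above builds on pix2pix, a conditional image-to-image translation GAN trained with an adversarial term plus an L1 reconstruction term. The sketch below is a minimal PyTorch illustration of that objective only; the tiny generator/discriminator networks, single-channel tensors, and the lambda_l1 weight are assumptions chosen for brevity, not the cited authors' actual architecture.

import torch
import torch.nn as nn

# Illustrative stand-ins for a pix2pix-style setup: the generator maps a
# label/segmentation map to a synthetic image, the discriminator scores
# (condition, image) pairs. Real pix2pix uses a U-Net G and a PatchGAN D.
G = nn.Sequential(
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1), nn.Tanh(),
)
D = nn.Sequential(
    nn.Conv2d(2, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-wise real/fake logits
)

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
lambda_l1 = 100.0  # weight on the reconstruction term (assumed, as in pix2pix)

def train_step(label_map, real_image):
    """One pix2pix-style step; label_map and real_image are (B,1,H,W) tensors."""
    fake_image = G(label_map)

    # update D on (condition, image) pairs
    opt_d.zero_grad()
    d_real = D(torch.cat([label_map, real_image], dim=1))
    d_fake = D(torch.cat([label_map, fake_image.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    opt_d.step()

    # update G: fool D while staying close to the real image
    opt_g.zero_grad()
    d_fake = D(torch.cat([label_map, fake_image], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake_image, real_image)
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
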
“…Comparatively, manipulation is an advanced form of generation, where besides understanding text, learning image semantics is compulsory to know the exact location of modification. Currently, image manipulation from GAN models is studied under different variations, from global [226] to local [213,215,221], directly from text [208,210,211,214,216] or with additional supervision [218,219,222,225], and from the latent space of GAN models [223,229].…”
Section: Supervised T2I
confidence: 99%
“…The first study to purely explore image manipulation from GAN was by Dong et al [208]. They used a conditional GAN, following [7], where the generator encodes the input image to features and concatenates it with text semantics to decode the combined representation.…”
Section: Supervised T2I
confidence: 99%
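
The encode-concatenate-decode layout described in the statement above can be sketched as follows. This is a minimal PyTorch illustration; the layer sizes, the 128-dimensional sentence embedding, and the spatial tiling of the text vector are assumptions for illustration rather than the exact architecture of [208].

import torch
import torch.nn as nn

class TextConditionedGenerator(nn.Module):
    """Sketch of an encode-concatenate-decode generator for text-guided image
    manipulation: image features and a sentence embedding are fused, then
    decoded back into a modified image."""
    def __init__(self, text_dim=128):
        super().__init__()
        # image encoder: 64x64 RGB image -> 16x16 feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        # decoder consumes image features plus the spatially tiled text embedding
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128 + text_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, image, text_embedding):
        feats = self.encoder(image)                        # (B, 128, 16, 16)
        b, _, h, w = feats.shape
        text = text_embedding.view(b, -1, 1, 1).expand(b, -1, h, w)
        fused = torch.cat([feats, text], dim=1)            # concatenate along channels
        return self.decoder(fused)                         # (B, 3, 64, 64)

# forward pass with dummy data; a pretrained text encoder would supply text_embedding
G = TextConditionedGenerator(text_dim=128)
out = G(torch.randn(2, 3, 64, 64), torch.randn(2, 128))
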
“…Meanwhile, there are also many works that have added condition information to the generative network and the discriminative network for conditional image synthesis. The condition information could be a discrete label [26,9,4,21,28], a reference image [24,10,40,8,6], or even a text sentence [31,41,10].…”
Section: Related Work
confidence: 99%
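
As a concrete illustration of feeding the same condition to both networks, the sketch below conditions a generator and a discriminator on a discrete class label by concatenating a label embedding to their inputs. The MLP sizes, embedding width, and number of classes are assumptions chosen for brevity; conditioning on a reference image or a text sentence follows the same pattern with a different condition encoder.

import torch
import torch.nn as nn

NUM_CLASSES, NOISE_DIM, IMG_DIM, EMB_DIM = 10, 64, 28 * 28, 32

class CondGenerator(nn.Module):
    """Generator conditioned on a class label: the label embedding is
    concatenated to the noise vector before synthesis."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, EMB_DIM)
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + EMB_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh(),
        )

    def forward(self, noise, labels):
        return self.net(torch.cat([noise, self.embed(labels)], dim=1))

class CondDiscriminator(nn.Module):
    """Discriminator given the same condition: the label embedding is
    concatenated to the flattened image before scoring real/fake."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, EMB_DIM)
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + EMB_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # real/fake logit
        )

    def forward(self, images, labels):
        return self.net(torch.cat([images, self.embed(labels)], dim=1))

# forward pass with dummy data: both networks see the same label
G, D = CondGenerator(), CondDiscriminator()
labels = torch.randint(0, NUM_CLASSES, (4,))
fake = G(torch.randn(4, NOISE_DIM), labels)
score = D(fake, labels)
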