2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019
DOI: 10.1109/iccv.2019.00126
Interactive Sketch & Fill: Multiclass Sketch-to-Image Translation

Abstract: (Top) Given a user-created incomplete object outline (first row), our model estimates the complete shape and provides this as a recommendation to the user (shown in gray), along with the final synthesized object (second row). These estimates are updated as the user adds (green) or removes (red) strokes over time; previous edits are shown in…

[Figure 1: class-conditioned outline-to-image translation with Interactive Sketch & Fill across ten classes: fried chicken, cupcake, pineapple, strawberry, moon, cookie, orange, watermelon, soccer ball, basketball.]

Cited by 114 publications (75 citation statements) · References 38 publications
“…Interactive image search aims to incorporate user feedback as an interactive signal to navigate the visual search. In general, the user interaction can be given in various formats, including relative attributes [45,28,75], attributes [79,18,2], attribute-like modification text [66], natural language [16,17], spatial layout [37], and sketches [76,74,14]. As text is the most pervasive interaction between humans and computers in contemporary search engines, it naturally serves to convey concrete information that elaborates the user's intricate specification for image search.…”
Section: Related Work
confidence: 99%
“…GANs [12] have achieved great success in image synthesis [5,18,19]. Conditional GANs synthesize images based on given conditions, which can be class labels [27], text [18], edges [11,16], or semantic segmentation maps [30,39]. Isola et al. show the power of conditional GANs in generating images from dense condition signals, including sketches and segmentation maps [16].…”
Section: Related Work, 2.1 Conditional GANs
confidence: 99%
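The class-conditioning idea these statements describe can be sketched minimally: a one-hot class label is concatenated with the noise vector before the generator's first layer. All names, dimensions, and the single-layer "generator" below are illustrative assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 10   # e.g. the ten object classes shown in Figure 1
NOISE_DIM = 64     # illustrative latent size
OUT_DIM = 28 * 28  # toy flat "image" size

# Toy single-layer generator weights (a real model would be a deep network).
W = rng.standard_normal((NOISE_DIM + NUM_CLASSES, OUT_DIM)) * 0.01
b = np.zeros(OUT_DIM)

def generate(z, class_id):
    """Map (noise, class label) -> flat image via one class-conditioned layer."""
    onehot = np.zeros(NUM_CLASSES)
    onehot[class_id] = 1.0
    # Conditioning: the label is simply concatenated with the noise input.
    h = np.concatenate([z, onehot]) @ W + b
    return np.tanh(h)  # squash pixel values into [-1, 1]

img = generate(rng.standard_normal(NOISE_DIM), class_id=3)
print(img.shape)  # (784,)
```

Denser conditions such as edge maps or segmentation maps replace the one-hot vector with a spatial input, typically fed through an encoder-decoder as in Isola et al. [16].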
“…Generative models like Variational Autoencoders (VAEs) [21] and Generative Adversarial Networks (GANs) [12] have made great progress in modeling the distribution of natural images generatively. With additional signals such as class labels [27], text [44], edges [11], or segmentation maps [30,39] as input, conditional generative models can generate photorealistic samples in a controllable manner, which is useful in a number of multimedia applications such as interactive design [10,11,30] and artistic style transfer [8,43].…”
Section: Introduction
confidence: 99%
“…An interesting next pursuit would be to see whether computers can mimic creative processes such as those painters use in making pictures, or assist artists or architects in producing artistic or architectural designs. In fact, over the past decade we have witnessed advances in systems that synthesize an image from a text description [1][2][3][4] or from a learned style [5], paint a picture given a sketch [6][7][8][9], render a photorealistic scene from a wireframe [10,11], and create virtual reality content from images and videos [12], among others. A comprehensive review of such systems can explain the current state of the art in these pursuits, reveal open challenges, and illuminate future directions.…”
Section: Introduction
confidence: 99%