2021 IEEE/CVF International Conference on Computer Vision (ICCV) 2021
DOI: 10.1109/iccv48922.2021.00111
De-rendering Stylized Texts

Cited by 12 publications (7 citation statements)
References 32 publications
“…Wu et al. [7] proposed an end-to-end style retention network (SRNet) with text conversion, background inpainting, and fusion for editing text in the wild. Shimoda et al. [8] suggested a text vectorization technique that leverages differentiable text rendering to accurately reproduce the input raster text in a resolution-free parametric format. Meanwhile, printed character image generation [42]–[48] has been studied comprehensively.…”
Section: Text Image Generation
confidence: 99%
“…Recently, neural-network-based style-guided approaches have been applied to printed [7], [8] and handwritten text image generation [1]–[3]. Equipped with generative adversarial networks (GANs) [9], style transfer [10]–[15], and image-to-image translation [16]–[19], the quality of synthetic text images has improved greatly.…”
Section: Introduction
confidence: 99%
“…Taking a content image and a style image as inputs, text image editing (TIE) [45], [46], [47] aims to replace the text instance in the style image while retaining the styles of both the background and the text. SRNet [45] first generated the foreground text by style transfer, then obtained the background image by text erasure, and finally synthesized the edited text image by fusing the foreground and background.…”
Section: Text Image Editing
confidence: 99%
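The excerpt above attributes a three-stage pipeline to SRNet: style-transfer the new text onto a foreground layer, erase the original text to recover the background, then fuse the two. A minimal conceptual sketch of that flow, using toy NumPy stand-ins rather than the paper's networks (all function names and operations here are hypothetical placeholders):

```python
import numpy as np

def render_foreground(new_text: str, style: np.ndarray) -> np.ndarray:
    """Toy stand-in for the style-transfer branch: tint a placeholder
    glyph mask with the mean colour of the style image. (A real SRNet
    would render `new_text` in the source text's style.)"""
    mask = np.ones_like(style) * 0.5              # placeholder glyph mask
    return mask * style.mean(axis=(0, 1))         # broadcast mean colour

def erase_text(style: np.ndarray) -> np.ndarray:
    """Toy stand-in for the background-inpainting branch: replace the
    whole image with its mean, as if the text had been erased."""
    return np.full_like(style, style.mean())

def fuse(fg: np.ndarray, bg: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    """Toy stand-in for the fusion branch: alpha-blend foreground
    over background."""
    return alpha * fg + (1 - alpha) * bg

style_img = np.random.rand(32, 96, 3)             # fake H x W x 3 style image
edited = fuse(render_foreground("HELLO", style_img), erase_text(style_img))
print(edited.shape)                               # (32, 96, 3)
```

The point is only the decomposition: foreground synthesis, background recovery, and fusion are separate modules composed at the end, which is what lets later work (e.g. SwapText) swap out individual stages.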
“…SwapText [46] further improved SRNet by manipulating geometric points of characters to transform text locations. Shimoda et al [47] formulated raster text editing as a de-rendering problem. They proposed a vectorization model to parse text information, and edited texts by a rendering engine using the parsed parameters.…”
Section: Text Image Editing
confidence: 99%
“…Besides, it can be widely used for document restoration in the field of intelligent education. It is also a crucial prerequisite step for text editing [49,52,53,20,37] and has wide applications in areas such as augmented reality translation. Recent text removal methods [56,23,48,38,40] have achieved significant improvements with the development of GAN [10,27,28].…”
Section: Introduction
confidence: 99%