2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.00774
Few-Shot Font Generation by Learning Fine-Grained Local Styles

Cited by 36 publications (15 citation statements)
References 20 publications
“…We compare our method with seven state-of-the-art methods: one image-to-image translation method (FUNIT (Liu et al. 2019)) and six Chinese font generation methods (LF-Font (Park et al. 2021a), MX-Font (Park et al. 2021b), DG-Font (Xie et al. 2021), CG-GAN (Kong et al. 2022), Fs-Font (Tang et al. 2022), and CF-Font (Wang et al. 2023)).

Table 2: Quantitative results on UFSC (method followed by four scores; the column headers did not survive extraction):
(label lost) 18.67  .4823  .1688  .3400
DG-Font      19.81  .4532  .2047  .3646
MX-Font       9.32  .4605  .1603  .3571
Fs-Font      31.40  .4270  .2160  .3855
CG-GAN        7.72  .4655  .1721  .3544
CF-Font      14.20  .4396  .2139  .3713
Diff-Font    12.08  .4192  .2022  .3877
Ours          7.67  .4942  .1426

We use the font of Song as the source, and all methods are trained with their official code.…”
Section: Comparison with State-of-the-Art Methods
confidence: 99%
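The quoted table reports four numeric scores per method, but the metric labels were lost in extraction; font-generation comparisons of this kind typically report FID together with pixel- and perception-level scores such as SSIM, L1, and LPIPS. Below is a minimal sketch of computing two such per-glyph scores, assuming 8-bit grayscale glyph images; the choice of metrics here is an assumption, not a claim about the quoted paper's columns.

```python
# Hedged sketch: per-glyph SSIM and L1, two metrics commonly used (alongside
# FID and LPIPS) in font-generation comparisons like the quoted Table 2.
# The actual metrics behind that table's columns are NOT known here.
import numpy as np
from skimage.metrics import structural_similarity

def glyph_scores(generated: np.ndarray, target: np.ndarray) -> dict:
    """Compare two 8-bit grayscale glyph images of identical shape."""
    assert generated.shape == target.shape
    ssim = structural_similarity(generated, target, data_range=255)
    l1 = np.abs(generated.astype(np.float64) - target.astype(np.float64)).mean() / 255.0
    return {"ssim": ssim, "l1": l1}

# Usage: average per-glyph scores over the whole test set to get one row
# of the comparison table per method.
```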
“…Although these methods have achieved remarkable success in font generation, they still struggle with complex character generation and large style-variation transfer, leading to severe stroke missing, artifacts, blurriness, layout errors, and style inconsistency, as shown in Figure 1(b)(c). Retrospectively, most font generation approaches (Park et al. 2021a,b; Xie et al. 2021; Tang et al. 2022; Liu et al. 2022; Kong et al. 2022; Wang et al. 2023) adopt a GAN-based (Goodfellow et al. 2014) framework, which is potentially prone to unstable training owing to its adversarial nature. Moreover, most of these methods perceive content information only through single-scale high-level features, omitting the fine-grained details that are crucial for preserving the source content, especially for complex characters.…”
Section: Introduction
confidence: 99%
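The statement above points to the GAN framework these font-generation methods share and why its min-max training can be unstable. Below is a minimal sketch of one hinge-loss adversarial step in PyTorch; `G`, `D`, and the input tensors are hypothetical placeholders, not the architecture of any cited method.

```python
# Hedged sketch of the GAN-based training loop the statement refers to:
# one hinge-loss adversarial step. G maps (content, style) to a fake glyph;
# D scores real vs. generated glyphs. All names are placeholders.
import torch
import torch.nn.functional as F

def adversarial_step(G, D, opt_g, opt_d, content, style, real):
    # --- discriminator update: push D(real) up and D(fake) down (hinge) ---
    fake = G(content, style).detach()
    d_loss = F.relu(1.0 - D(real)).mean() + F.relu(1.0 + D(fake)).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- generator update: fool D. This alternating min-max game is the
    # source of the training instability the quote contrasts with
    # non-adversarial (e.g. diffusion) objectives. ---
    fake = G(content, style)
    g_loss = -D(fake).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```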
“…Based on the components shared across characters, Park et al. proposed LF-Font (Park et al. 2020) and MX-Font (Park et al. 2021), establishing a relation between local features and components. Similarly, Tang et al. (2022) developed an algorithm in FS-Font that searches for more suitable reference sets based on the prior knowledge that components repeat across characters. Although these existing methods can synthesize visually pleasing glyph images, they typically fail to synthesize satisfactory calligraphy fonts, whose glyphs do not meet the consistency assumptions behind that prior knowledge.…”
Section: Glyph Image Synthesis
confidence: 99%
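The statement describes reference-set search driven by repeating components. Below is a hedged sketch of that idea: greedily pick reference characters whose components cover the target's. The `DECOMP` table and character choices are illustrative assumptions; real systems derive decompositions from Ideographic Description Sequence (IDS) data, and this is not FS-Font's actual selection algorithm.

```python
# Hedged sketch of component-driven reference selection in the spirit of
# FS-Font's search: greedily cover the target character's components.
# DECOMP is a tiny hypothetical decomposition table for illustration.
DECOMP = {"好": {"女", "子"}, "妈": {"女", "马"}, "字": {"宀", "子"}}

def select_references(target: str, candidates: list[str], k: int = 3) -> list[str]:
    need = set(DECOMP.get(target, ()))
    chosen = []
    while need and candidates and len(chosen) < k:
        # pick the candidate sharing the most still-uncovered components
        best = max(candidates, key=lambda c: len(DECOMP.get(c, set()) & need))
        if not DECOMP.get(best, set()) & need:
            break  # no remaining candidate covers anything new
        chosen.append(best)
        candidates = [c for c in candidates if c != best]
        need -= DECOMP[best]
    return chosen

print(select_references("好", ["妈", "字"]))  # ['妈', '字'] covers 女 and 子
```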
“…To improve the quality of the generated images, [13] proposed the Deep Feature Similarity (DFS) architecture, which leverages the feature similarity between the input content and style images to synthesize target images. Recently, researchers [9, 19, 20, 44-46] have made significant progress by exploiting the compositionality of compositional scripts. However, our experimental results indicate that these methods perform poorly on the constructed multi-language dataset.…”
Section: Font Generation
confidence: 99%
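The "feature similarity" idea attributed to DFS [13] can be illustrated with a cosine-similarity map between content and style feature grids, of the kind used to aggregate style features onto content positions. This is a speculative sketch under assumed tensor shapes from a shared encoder, not DFS's actual design.

```python
# Hedged sketch of content-to-style feature similarity: an attention-style
# cosine-similarity map that gathers style features for each content
# position. Shapes and the shared-encoder assumption are hypothetical.
import torch
import torch.nn.functional as F

def similarity_map(content_feat: torch.Tensor, style_feat: torch.Tensor):
    """content_feat, style_feat: (B, C, H, W) feature maps."""
    b, c, h, w = content_feat.shape
    q = F.normalize(content_feat.flatten(2), dim=1)   # (B, C, HW), unit channels
    k = F.normalize(style_feat.flatten(2), dim=1)     # (B, C, HW)
    sim = torch.bmm(q.transpose(1, 2), k)             # (B, HW, HW) cosine sims
    attn = sim.softmax(dim=-1)                        # each content row sums to 1
    v = style_feat.flatten(2).transpose(1, 2)         # (B, HW, C)
    out = torch.bmm(attn, v).transpose(1, 2).view(b, c, h, w)
    return out  # style features re-arranged onto content positions
```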