Handwriting of Chinese characters has long been an important skill in East Asia. However, automatic generation of handwritten Chinese characters poses a great challenge due to the large number of characters. Various machine learning techniques have been used to recognize Chinese characters, but few works have studied the handwritten Chinese character generation problem, especially with unpaired training data. In this work, we formulate Chinese handwritten character generation as learning a mapping from an existing printed font to a personalized handwritten style. We further propose DenseNet CycleGAN to generate Chinese handwritten characters. Our method applies not only to commonly used Chinese characters but also to calligraphy with aesthetic value. Furthermore, we propose content accuracy and style discrepancy as evaluation metrics to assess the quality of the generated handwritten characters. We then use these metrics to evaluate characters generated from the CASIA dataset as well as our newly introduced Lanting calligraphy dataset.
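Because the training data are unpaired, the method follows the CycleGAN recipe: two generators trained with adversarial losses plus a cycle-consistency term. Below is a minimal sketch of that cycle-consistency objective, assuming PyTorch generators G (printed to handwritten) and F (handwritten to printed); the names and weighting are illustrative assumptions, with the paper's DenseNet-based generators slotting in as G and F.

```python
# Minimal sketch of the CycleGAN cycle-consistency objective used for
# unpaired training; G, F, and lam are hypothetical names, not the
# paper's exact implementation.
import torch.nn.functional as nnF

def cycle_consistency_loss(G, F, printed, handwritten, lam=10.0):
    """L1 cycle loss: each domain should reconstruct after a round trip."""
    # printed -> handwritten -> printed should recover the input
    loss_fwd = nnF.l1_loss(F(G(printed)), printed)
    # handwritten -> printed -> handwritten should recover the input
    loss_bwd = nnF.l1_loss(G(F(handwritten)), handwritten)
    return lam * (loss_fwd + loss_bwd)
```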
[Fig. 1. ModularGAN: results of the proposed modular generative adversarial network on the multi-domain image-to-image translation task, illustrated on the CelebA [1] dataset with attribute transfers such as hair color, gender, expression, and smile.]

Abstract. Existing methods for multi-domain image-to-image translation (or generation) attempt to directly map an input image (or a random vector) to an image in one of the output domains. However, most existing methods have limited scalability and robustness, since they require building independent models for each pair of domains in question. This leads to two significant shortcomings: (1) the need to train an exponential number of pairwise models, and (2) the inability to leverage data from other domains when training a particular pairwise mapping. Inspired by recent work on module networks [2], this paper proposes ModularGAN for multi-domain image generation and image-to-image translation. ModularGAN consists of several reusable and composable modules that carry out different functions (e.g., encoding, decoding, transformation). These modules can be trained simultaneously, leveraging data from all domains, and then combined to construct specific GAN networks at test time according to the specific image translation task. This gives ModularGAN superior flexibility in generating (or translating to) an image in any desired domain. Experimental results demonstrate that our model not only produces compelling perceptual results but also outperforms state-of-the-art methods on multi-domain facial attribute transfer.
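The key architectural idea is that modules are trained jointly across all domains but composed per task at test time. The sketch below illustrates one plausible composition scheme under that description, assuming a shared encoder, one transformer module per requested attribute change, and a shared decoder; all class and argument names are hypothetical.

```python
# Illustrative test-time composition in the spirit of ModularGAN;
# encoder, transformers, and decoder are pre-trained modules passed in
# (hypothetical interface, not the paper's exact one).
import torch.nn as nn

class ComposedTranslator(nn.Module):
    """Chains a shared encoder, one transformer per attribute change,
    and a shared decoder into a task-specific translation network."""
    def __init__(self, encoder, transformers, decoder):
        super().__init__()
        self.encoder = encoder
        self.transformers = nn.ModuleList(transformers)
        self.decoder = decoder

    def forward(self, x):
        h = self.encoder(x)           # image -> intermediate features
        for t in self.transformers:   # apply each attribute transform
            h = t(h)
        return self.decoder(h)        # features -> translated image
```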
Recently, deep residual networks have been successfully applied in many computer vision and natural language processing tasks, pushing state-of-the-art performance with deeper and wider architectures. In this work, we interpret deep residual networks as ordinary differential equations (ODEs), which have long been studied in mathematics and physics with rich theoretical and empirical success. From this interpretation, we develop a theoretical framework on the stability and reversibility of deep neural networks, and derive three reversible neural network architectures that can, in theory, go arbitrarily deep. The reversibility property allows a memory-efficient implementation that does not need to store the activations of most hidden layers. Together with the stability of our architectures, this enables training deeper networks using only modest computational resources. We provide both theoretical analyses and empirical results. Experiments on CIFAR-10, CIFAR-100, and STL-10 demonstrate the efficacy of our architectures against several strong baselines, with performance superior to or on par with the state of the art. Furthermore, we show that our architectures yield superior results when trained with less training data.
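Concretely, a residual block x_{n+1} = x_n + h f(x_n, theta_n) can be read as one forward-Euler step of the ODE dx/dt = f(x(t), theta(t)) with step size h. One standard way to obtain reversibility from this view is an additive coupling on two channels, whose inverse recomputes activations exactly instead of storing them. The sketch below shows that mechanism, assuming F and G are arbitrary residual functions; it illustrates the reversibility idea, not the paper's three specific architectures.

```python
# Minimal sketch of an additive reversible coupling block; F, G, and h
# are assumptions standing in for the residual functions and ODE step.
import torch

def reversible_forward(x1, x2, F, G, h=1.0):
    # one forward-Euler-style step: each half is updated from the other
    y1 = x1 + h * F(x2)
    y2 = x2 + h * G(y1)
    return y1, y2

def reversible_inverse(y1, y2, F, G, h=1.0):
    # exact inverse: activations are recomputed, so they need not be stored
    x2 = y2 - h * G(y1)
    x1 = y1 - h * F(x2)
    return x1, x2

# Sanity check with simple residual functions
F, G = torch.nn.Linear(8, 8), torch.nn.Linear(8, 8)
x1, x2 = torch.randn(4, 8), torch.randn(4, 8)
y1, y2 = reversible_forward(x1, x2, F, G)
r1, r2 = reversible_inverse(y1, y2, F, G)
assert torch.allclose(x1, r1, atol=1e-6) and torch.allclose(x2, r2, atol=1e-6)
```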
[Figure (arXiv:1810.04511v2 [cs.CV], 3 Jun 2019): video frames are processed by per-frame CNNs, followed by spatial and temporal attention modules feeding convolutional LSTMs; the averaged outputs yield the action label, e.g., "Playing Volleyball".]
Vine copulas are a flexible tool for modeling multivariate non-Gaussian distributions. For data from an observational study, where explanatory and response variables are measured together, we propose a vine copula regression method that uses regular vines and handles mixed continuous and discrete variables. The method can efficiently compute the conditional distribution of the response variable given the explanatory variables. Its performance is evaluated on simulated data sets and a real data set. The experiments demonstrate that the vine copula regression method is superior to linear regression for inference under conditional heteroscedasticity.
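To make the conditional-distribution computation concrete, here is a toy sketch with one explanatory variable and a bivariate Gaussian copula, a simple stand-in for the paper's regular-vine construction: the conditional CDF of Y given X = x is the copula h-function evaluated on the marginal CDF transforms. The correlation rho and the margins below are illustrative assumptions.

```python
# Toy copula regression: conditional CDF of Y given X via the Gaussian
# copula h-function (a stand-in for the regular-vine method; rho and
# the margins are assumptions).
import numpy as np
from scipy import stats

def conditional_cdf_y_given_x(y, x, rho, margin_x, margin_y):
    """P(Y <= y | X = x) under a bivariate Gaussian copula."""
    u = margin_x.cdf(x)                      # x on the uniform scale
    v = margin_y.cdf(y)                      # y on the uniform scale
    zu, zv = stats.norm.ppf(u), stats.norm.ppf(v)
    # Gaussian-copula h-function C(v | u)
    return stats.norm.cdf((zv - rho * zu) / np.sqrt(1.0 - rho**2))

# Example: approximate conditional median of Y at X = 1.2 by inverting
# the conditional CDF on a grid
margin_x, margin_y, rho = stats.norm(0, 1), stats.norm(0, 2), 0.7
ys = np.linspace(-8, 8, 2001)
cdf = conditional_cdf_y_given_x(ys, 1.2, rho, margin_x, margin_y)
median_y = ys[np.searchsorted(cdf, 0.5)]
```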