Chinese calligraphy is the art of writing Chinese characters with a brush, which makes the resulting characters rich in shapes and details. Recent studies show that Chinese characters can be generated in multiple styles through image-to-image translation using a single model. We propose a novel extension of this approach that incorporates information about Chinese characters' components into the model. We also propose an improved network for mapping characters to their embedding space. Experiments show that the proposed method generates higher-quality Chinese calligraphy characters than state-of-the-art methods, as measured by both numerical evaluations and human subject studies.
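To illustrate the idea of conditioning an image-to-image generator on component information, here is a minimal sketch, not the authors' architecture: the component vocabulary size, embedding dimension, and network shapes below are all illustrative assumptions.

```python
# Sketch: a glyph-to-glyph generator conditioned on character-component IDs.
# All names and sizes here are hypothetical, for illustration only.
import torch
import torch.nn as nn

class ComponentConditionedGenerator(nn.Module):
    def __init__(self, n_components=500, emb_dim=64):
        super().__init__()
        # Learnable embedding for each structural component (e.g., radical).
        self.component_emb = nn.Embedding(n_components, emb_dim)
        # Encoder: source-style glyph image -> feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: fused (image + component) features -> target-style glyph.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64 + emb_dim, 32, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, glyph, component_ids):
        feat = self.encoder(glyph)                  # (B, 64, H/4, W/4)
        emb = self.component_emb(component_ids)     # (B, k, emb_dim)
        # Average the embeddings of a character's components, then broadcast
        # them spatially so every location sees the component code.
        emb = emb.mean(dim=1)
        emb = emb[:, :, None, None].expand(-1, -1, feat.size(2), feat.size(3))
        return self.decoder(torch.cat([feat, emb], dim=1))

# Toy usage: one 64x64 glyph conditioned on two hypothetical component IDs.
gen = ComponentConditionedGenerator()
out = gen(torch.randn(1, 1, 64, 64), torch.tensor([[3, 41]]))
print(out.shape)  # torch.Size([1, 1, 64, 64])
```

The design choice illustrated here is simply that component identity enters the generator as an extra input channel, so one model can render many characters and styles while still respecting shared substructure.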
Compressive sensing (CS) has been used to accelerate dynamic magnetic resonance imaging (DMRI). Currently, online CS-DMRI methods are faster, whereas offline CS-DMRI methods provide higher reconstruction accuracy. To achieve good image reconstruction performance in terms of both speed and accuracy, we propose a hybrid CS-DMRI method using periodic time-variant subsampling across frames. In each period, one reference frame is sampled at a higher subsampling ratio. The two nearby reference frames, which have good reconstruction quality, provide rough predictions of the frames between them. To finely recover the current frame, the optimization model for reconstruction uses two structural regularizations: a 2-D omnidirectional total variation (OTV) that exploits the sparsity of the difference between the predicted and estimated frames, and a 3-D OTV that exploits the bilateral spatio-temporal coherence among the forward reference frame, the current frame, and the backward reference frame. Compared with classical total variation, the proposed OTV fully utilizes the correlations along all possible directions of the data. The formulated optimization model can be solved using iteratively reweighted least squares with the preconditioned conjugate gradient method. Numerical experiments demonstrate that the proposed method achieves better reconstruction accuracy than the existing methods, with low computational complexity comparable to that of existing online methods.
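To make the core ingredients concrete, here is a toy sketch of omnidirectional-TV-regularized denoising solved by iteratively reweighted least squares (IRLS) with conjugate gradient. It uses first differences along the horizontal, vertical, and both diagonal directions on a small 2-D image; it is not the authors' full CS-DMRI pipeline (no k-space subsampling, no 3-D term, and no preconditioner), and the lambda/eps values are guesses.

```python
# Sketch: 2-D omnidirectional TV denoising via IRLS + conjugate gradient.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def diff_ops(n):
    """Sparse first-difference operators along 4 directions (row-major vec)."""
    I = sp.identity(n, format="csr")
    P1 = sp.eye(n - 1, n, k=0, format="csr")   # keep indices 0..n-2
    P2 = sp.eye(n - 1, n, k=1, format="csr")   # shift index by one
    D1 = (P2 - P1).tocsr()                      # 1-D forward difference
    return [
        sp.kron(I, D1).tocsr(),                 # horizontal: x[i,j+1]-x[i,j]
        sp.kron(D1, I).tocsr(),                 # vertical:   x[i+1,j]-x[i,j]
        (sp.kron(P2, P2) - sp.kron(P1, P1)).tocsr(),  # diagonal
        (sp.kron(P2, P1) - sp.kron(P1, P2)).tocsr(),  # anti-diagonal
    ]

def otv_denoise(y, lam=0.15, eps=1e-3, n_outer=15):
    """IRLS: solve (I + lam * sum_d D^T W D) x = y, W = 1/sqrt((Dx)^2+eps)."""
    n = y.shape[0]
    Ds = diff_ops(n)
    x = y.ravel().copy()
    eye = sp.identity(n * n, format="csr")
    for _ in range(n_outer):
        A = eye.copy()
        for D in Ds:
            w = 1.0 / np.sqrt((D @ x) ** 2 + eps)   # IRLS weights
            A = A + lam * (D.T @ sp.diags(w) @ D)
        # Inner quadratic subproblem; preconditioning omitted in this sketch.
        x, _ = cg(A, y.ravel(), x0=x, maxiter=200)
    return x.reshape(n, n)

# Toy usage: denoise a noisy piecewise-constant 32x32 image.
rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
rec = otv_denoise(noisy)
print(np.abs(noisy - clean).mean(), np.abs(rec - clean).mean())
```

The diagonal and anti-diagonal operators are what distinguish the omnidirectional penalty from classical TV, which uses only the first two difference directions.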
Conventional quantization-based watermarking may be easily estimated by averaging over a set of watermarked signals, owing to its uniform quantization approach. Moreover, the conventional quantization-based method neglects the visual perceptual characteristics of the host signal, so perceptible distortions may be introduced in some parts of the host signal. In this paper, inspired by Watson's entropy masking model and logarithmic quantization index modulation (LQIM), a logarithmic quantization-based image watermarking method is developed in the wavelet domain. The method improves robustness through a logarithmic quantization strategy that embeds the watermark data into the image blocks with high entropy. The main significance of this work is that the trade-off between invisibility and robustness is addressed directly by the logarithmic quantization approach, which combines the entropy masking model with a distortion-compensated scheme in the watermark embedding method. The optimal quantization parameter, obtained by minimizing the quantization distortion function, effectively controls the watermark strength. For watermark decoding, we model the wavelet coefficients of the image with the generalized Gaussian distribution (GGD) and derive the bit error probability of the proposed method. The performance of the proposed method is analyzed and verified by simulations on real images. Experimental results demonstrate that the proposed method achieves imperceptibility and strong robustness against attacks including JPEG compression, additive white Gaussian noise (AWGN), Gaussian filtering, salt-and-pepper noise, scaling, and rotation.
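For intuition about logarithmic QIM itself, here is a minimal single-coefficient sketch using mu-law compression before uniform quantization. The mu, step size, and scale constants are illustrative assumptions; the paper's full method additionally selects high-entropy wavelet blocks via the entropy masking model and applies distortion compensation, both omitted here for brevity.

```python
# Sketch: logarithmic quantization index modulation (LQIM) on one coefficient.
# MU, XS, DELTA are assumed constants, not values from the paper.
import numpy as np

MU, XS, DELTA = 255.0, 1.0, 0.1

def compress(x):
    """mu-law compression: maps |x| <= XS into [-1, 1] logarithmically."""
    return np.sign(x) * np.log1p(MU * np.abs(x) / XS) / np.log1p(MU)

def expand(c):
    """Inverse mu-law expansion (exact inverse of compress)."""
    return np.sign(c) * (XS / MU) * np.expm1(np.abs(c) * np.log1p(MU))

def embed(x, bit):
    """Quantize the log-domain value onto the lattice for the given bit."""
    d = 0.0 if bit == 0 else DELTA / 2.0     # dither separates the two bits
    cw = DELTA * np.round((compress(x) - d) / DELTA) + d
    return expand(cw)

def decode(xw):
    """Minimum-distance detection in the log domain."""
    c = compress(xw)
    dists = []
    for d in (0.0, DELTA / 2.0):             # hypothesis bit = 0, then 1
        cq = DELTA * np.round((c - d) / DELTA) + d
        dists.append(abs(c - cq))
    return int(np.argmin(dists))

# Toy usage: embed one bit per coefficient and decode under mild noise.
rng = np.random.default_rng(1)
coeffs = rng.uniform(0.1, 0.9, size=8)
bits = rng.integers(0, 2, size=8)
marked = np.array([embed(x, b) for x, b in zip(coeffs, bits)])
noisy = marked + rng.normal(0, 0.002, size=8)
print(bits, np.array([decode(x) for x in noisy]))
```

Because quantization happens in the compressed (log) domain, the effective step size grows with coefficient magnitude, which is what gives logarithmic quantization its perceptually adaptive embedding strength compared with uniform QIM.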