To enhance the security of digital information, biometric authentication based on the electrocardiogram (ECG) is attracting increasing attention across a wide range of applications. Compared with other biometric traits, e.g., fingerprints and faces, ECG signals offer several advantages, including higher security, simpler acquisition, inherent liveness detection, and embedded health information. Various ECG-based authentication methods have therefore been proposed. However, their generalization ability is limited because the feature extraction in conventional methods is data dependent. To improve generalization and achieve more stable results across datasets, this paper proposes a parallel multiscale one-dimensional residual network. The network applies three convolutional kernels of different sizes in parallel, achieving better classification accuracy than conventional schemes. Moreover, two loss functions, center loss and margin loss, are used during training; compared with the conventional softmax loss, they further improve the generalization ability of the extracted embedding features. We evaluate the proposed method thoroughly on the ECG-ID database, the PTB Diagnostic ECG database, and the MIT-BIH Arrhythmia database, achieving equal error rates (EER) of 2.00%, 0.59%, and 4.74%, respectively. Compared with other works, the proposed method improves classification accuracy by 1.61% on the ECG-ID database and by 4.89% on the MIT-BIH Arrhythmia database.
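To make the multiscale design concrete, below is a minimal PyTorch sketch of a parallel multiscale 1-D residual block and the center loss mentioned in the abstract. The kernel sizes (3, 5, 7), channel counts, and the 1x1 fusion convolution are illustrative assumptions, not the authors' exact configuration, and the margin loss is omitted.

# Minimal sketch (assumptions noted above, not the authors' implementation)
# of a parallel multiscale 1-D residual block: three Conv1d branches with
# different kernel sizes, fused by a 1x1 convolution, plus a residual shortcut.
import torch
import torch.nn as nn

class ParallelMultiscaleResBlock1D(nn.Module):
    def __init__(self, channels: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(channels, channels, k, padding=k // 2),  # length-preserving for odd k
                nn.BatchNorm1d(channels),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        ])
        # 1x1 convolution fuses the concatenated branch outputs back to `channels`
        self.fuse = nn.Conv1d(channels * len(kernel_sizes), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multiscale = torch.cat([b(x) for b in self.branches], dim=1)
        return torch.relu(x + self.fuse(multiscale))  # residual shortcut

class CenterLoss(nn.Module):
    """Center loss (Wen et al., 2016): pulls each embedding toward a learned
    per-class center, encouraging compact, well-separated identity clusters."""
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean() / 2

block = ParallelMultiscaleResBlock1D(channels=32)
ecg = torch.randn(8, 32, 500)   # (batch, channels, samples), e.g. a 500-sample heartbeat window
print(block(ecg).shape)         # torch.Size([8, 32, 500])

Because each branch pads by k // 2, the block preserves the temporal length of the input, so blocks with different kernel sizes can be stacked freely.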
Human video motion transfer (HVMT) aims to synthesize videos in which one person imitates the actions of another. Although existing GAN-based HVMT methods have achieved great success, they either fail to preserve appearance details, owing to the loss of spatial consistency between synthesized and exemplary images, or generate incoherent videos, owing to the lack of temporal consistency among frames. In this paper, we propose the Coarse-to-Fine Flow Warping Network (C2F-FWN) for spatially and temporally consistent HVMT. Specifically, C2F-FWN uses coarse-to-fine flow warping and Layout-Constrained Deformable Convolution (LC-DConv) to improve spatial consistency, and employs a Flow Temporal Consistency (FTC) loss to enhance temporal consistency. In addition, given multi-source appearance inputs, C2F-FWN supports appearance attribute editing with great flexibility and efficiency. Besides public datasets, we also collected a large-scale HVMT dataset named SoloDance for evaluation. Extensive experiments on our SoloDance dataset and the iPER dataset show that our approach outperforms state-of-the-art HVMT methods in terms of both spatial and temporal consistency. Source code and the SoloDance dataset are available at https://github.com/wswdx/C2F-FWN.
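To illustrate the coarse-to-fine flow warping primitive at the heart of the method, the following is a minimal PyTorch sketch. It is not the C2F-FWN implementation: the flow estimator, LC-DConv, and FTC loss are omitted, the image and flow sizes are arbitrary, and the fine residual flow is a zero placeholder standing in for a learned refinement.

# Minimal sketch of coarse-to-fine flow warping: a flow field estimated at
# low resolution is upsampled (with its offsets rescaled) and used to warp
# the full-resolution source image via bilinear sampling (grid_sample).
import torch
import torch.nn.functional as F

def warp(image: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp `image` (B,C,H,W) by `flow` (B,2,H,W); flow holds (x, y) pixel offsets."""
    b, _, h, w = image.shape
    # Base sampling grid in pixel coordinates
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(image.device)  # (2,H,W)
    coords = base.unsqueeze(0) + flow                             # displaced positions
    # Normalize to [-1, 1] as required by grid_sample
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                          # (B,H,W,2)
    return F.grid_sample(image, grid, align_corners=True)

source = torch.randn(1, 3, 256, 256)
coarse_flow = torch.randn(1, 2, 64, 64)            # e.g. from a coarse flow estimator
up_flow = F.interpolate(coarse_flow, size=(256, 256), mode="bilinear",
                        align_corners=True) * 4.0  # rescale pixel offsets with resolution
fine_residual = torch.zeros_like(up_flow)          # placeholder for a learned fine refinement
warped = warp(source, up_flow + fine_residual)
print(warped.shape)                                # torch.Size([1, 3, 256, 256])

Multiplying the upsampled flow by 4.0 accounts for the 64-to-256 resolution change, since pixel offsets scale with image size; the fine stage then only needs to predict a small residual correction.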