People counting is one of the most active topics in sensing applications. Impulse radio ultra-wideband (IR-UWB) radar has been extensively applied to count people, providing a device-free solution free of illumination and privacy concerns. However, the performance of current solutions is limited in congested environments due to the superposition and obstruction of signals. In this letter, a hybrid feature extraction method based on the curvelet transform and distance bins is proposed. Features of the 2-D radar matrix are extracted at multiple scales and angles by applying the curvelet transform. Furthermore, distance bins are introduced by dividing each row of the matrix into several bins along the propagation distance to select features. A radar signal dataset covering three dense scenarios is constructed, including people walking randomly in a constrained area at densities of 3 and 4 persons per square meter, and people queueing with an average spacing of 10 centimeters. The dataset contains up to 20 people. Four classifiers, namely decision tree, AdaBoost, random forest, and neural network, are compared to validate the hybrid features; random forest achieves the highest accuracy, exceeding 97% in all three dense scenarios. Moreover, to verify the reliability of the hybrid features, three other feature types, namely cluster features, activity features, and CNN features, are compared. The experimental results reveal that the proposed hybrid feature extraction method exhibits stable and significantly superior performance.
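The distance-bin step described above can be sketched in a few lines. This is a hypothetical illustration, not the letter's implementation: the curvelet-transform stage is omitted, the input is assumed to be a plain 2-D radar matrix (rows indexed by slow time, columns by fast time / propagation distance), the bin count is made up, and mean energy per bin stands in for whatever feature the authors actually select.

```python
# Hypothetical sketch of distance-bin feature selection: each row of a
# 2-D radar matrix is split along the propagation-distance axis into
# equal bins, and each bin is summarized by its mean energy.
# (The curvelet-transform stage from the letter is omitted here.)

def distance_bin_features(matrix, n_bins):
    """Split each row into n_bins equal segments along fast time
    and return the mean energy (mean of squared samples) per bin."""
    features = []
    for row in matrix:
        bin_len = len(row) // n_bins
        row_feats = []
        for b in range(n_bins):
            seg = row[b * bin_len:(b + 1) * bin_len]
            row_feats.append(sum(x * x for x in seg) / len(seg))
        features.append(row_feats)
    return features

# Toy matrix: 2 slow-time rows, 6 fast-time samples each, 3 bins.
m = [[1, 1, 2, 2, 3, 3],
     [0, 0, 1, 1, 0, 0]]
print(distance_bin_features(m, 3))  # → [[1.0, 4.0, 9.0], [0.0, 1.0, 0.0]]
```

The per-bin summaries from all rows would then be flattened into the feature vector fed to the classifiers.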
Face presentation attacks are a major threat to face recognition systems, and many presentation attack detection (PAD) methods have been proposed in recent years. Although these methods have achieved significant performance against some specific intrusion modes, difficulties remain in addressing replayed video attacks. That is because replayed fake faces contain a variety of liveness signals such as eye blinking and facial expression changes. Replayed video attacks occur when attackers try to invade biometric systems by presenting face videos in front of cameras, and these videos are often displayed on a liquid-crystal display (LCD) screen. Due to the smearing effects and movements of the LCD, videos captured from real and replayed fake faces exhibit different motion blurs, mainly reflected in blur intensity variation and blur width. Based on these observations, a motion blur analysis based method is proposed to address the replayed video attack problem. We first present a 1D convolutional neural network (CNN) for describing motion blur intensity variation in the time domain, which consists of a series of 1D convolutional and pooling filters. Then, a local similar pattern (LSP) feature is introduced to extract blur width. Finally, the features extracted by the 1D CNN and LSP are fused to detect replayed video attacks. Extensive experiments on two standard face PAD databases, i.e., Replay-Attack and OULU-NPU, indicate that our proposed method based on motion blur analysis significantly outperforms state-of-the-art methods and shows excellent generalization capability.
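The 1D convolution-and-pooling stage that the network stacks can be sketched as follows. This is a hedged toy sketch, not the paper's architecture: the kernel, its weights, the pooling size, and the per-frame blur-intensity values are all invented for illustration, and a simple difference kernel stands in for the learned filters.

```python
# Hypothetical sketch of one 1D convolution + max-pooling stage, the
# building block a 1D CNN stacks to describe motion-blur intensity
# variation over time. Kernel weights and sizes here are made up.

def conv1d(signal, kernel):
    """Valid-mode 1D convolution (no padding)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def max_pool1d(signal, size):
    """Non-overlapping max pooling with the given window size."""
    return [max(signal[i:i + size])
            for i in range(0, len(signal) - size + 1, size)]

# Toy per-frame blur-intensity sequence for one video.
blur = [0.1, 0.3, 0.2, 0.6, 0.5, 0.4, 0.8, 0.7]

# A difference kernel highlights frame-to-frame intensity change;
# pooling keeps the strongest change in each window.
edge = [1.0, -1.0]
feat = max_pool1d([abs(v) for v in conv1d(blur, edge)], 2)
print(feat)
```

In the actual network the kernels are learned end to end, and the pooled activations from the final stage would be fused with the LSP blur-width feature before classification.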