Face anti-spoofing is essential to prevent security breaches in face recognition systems. Much of the recent progress has been driven by the availability of face anti-spoofing benchmark datasets. However, existing face anti-spoofing benchmarks have a limited number of subjects (≤ 170) and modalities (≤ 2), which hinders further development in the academic community. To facilitate face anti-spoofing research, we introduce a large-scale multi-modal dataset, namely CASIA-SURF, which is the largest publicly available dataset for face anti-spoofing in terms of both subjects and modalities. Specifically, it consists of 1,000 subjects with 21,000 videos, and each sample has 3 modalities (i.e., RGB, Depth and IR). We also provide comprehensive evaluation metrics, diverse evaluation protocols, training/validation/testing subsets and a measurement tool, establishing a new benchmark for face anti-spoofing. Moreover, we present a novel multi-modal multi-scale fusion method as a strong baseline, which performs feature re-weighting to select the more informative channel features while suppressing the less useful ones for each modality across different scales. Extensive experiments have been conducted on the proposed dataset to verify its significance and generalization capability. The dataset is available at https://sites.google.com/qq.com/chalearnfacespoofingattackdete/.
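The channel re-weighting described above resembles a squeeze-and-excitation gate applied per modality before fusion. A minimal NumPy sketch of that idea is below; the function names, the bottleneck MLP parameters, and concatenation-based fusion are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_reweight(feat, w1, w2):
    """Squeeze-and-excitation style channel re-weighting (illustrative).

    feat: (C, H, W) feature map for one modality at one scale.
    w1: (C//r, C), w2: (C, C//r) -- hypothetical bottleneck MLP weights
        (learned in a real model).
    Returns the feature map with each channel scaled by a gate in (0, 1),
    so informative channels are emphasized and others suppressed.
    """
    squeeze = feat.mean(axis=(1, 2))                     # global average pool -> (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # ReLU bottleneck, sigmoid gate
    return feat * gate[:, None, None]                    # broadcast gate over H, W

def fuse_modalities(feats, params):
    """Re-weight each modality's channels, then concatenate along channels."""
    return np.concatenate(
        [channel_reweight(f, *p) for f, p in zip(feats, params)], axis=0)
```

For three modalities (RGB, Depth, IR), each branch is re-weighted independently and the gated features are stacked channel-wise; in the full method this would be repeated at each scale.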
Face anti-spoofing is critical to prevent security breaches in face recognition systems. The biometrics community has achieved impressive progress recently due to the excellent performance of deep neural networks and the availability of large datasets. Although ethnic bias has been verified to severely affect the performance of face recognition systems, it remains an open research problem in face anti-spoofing. Recently, a multi-ethnic face anti-spoofing dataset, CASIA-SURF cross-ethnicity face anti-spoofing (CeFA), has been released with the goal of measuring ethnic bias. CeFA is the largest up-to-date face anti-spoofing dataset, covering three ethnicities, three modalities, 1,607 subjects and 2D plus 3D attack types, and it is the first among recently released face anti-spoofing datasets to include explicit ethnic labels. We organized the Chalearn Face Anti-spoofing Attack Detection Challenge, which consists of single-modal (e.g. RGB) and multi-modal (e.g. RGB, Depth, infrared) tracks built around this novel resource, to boost research aimed at alleviating ethnic bias. The two tracks attracted 340 teams in the development stage; finally, 11 and 8 teams submitted their codes in the single-modal and multi-modal face anti-spoofing recognition challenges, respectively. All results were verified and re-run by the organizing team and used for the final ranking. This study presents an overview of the challenge, including its design, evaluation protocol and a summary of results. We analyse the top-ranked solutions, draw conclusions derived from the competition and outline future work directions.
Face presentation attack detection (PAD) is essential to secure face recognition systems, primarily against high-fidelity mask attacks. Most existing 3D mask PAD benchmarks suffer from several drawbacks: 1) a limited number of mask identities, sensor types, and total videos; 2) low-fidelity facial masks. Basic deep models and remote photoplethysmography (rPPG) methods achieve acceptable performance on these benchmarks but still fall far short of the needs of practical scenarios. To bridge the gap to real-world applications, we introduce a large-scale High-Fidelity Mask dataset, namely HiFiMask. Specifically, a total of 54,600 videos are recorded from 75 subjects with 225 realistic masks using 7 new kinds of sensors. Along with the dataset, we propose a novel Contrastive Context-aware Learning (CCL) framework. CCL is a new training methodology for supervised PAD tasks, which is able to learn accurately by leveraging rich contexts (e.g., subject, mask material and lighting) among pairs of live faces and high-fidelity mask attacks. Extensive experimental evaluations on HiFiMask and three additional 3D mask datasets demonstrate the effectiveness of our method. The codes and dataset will be released soon.
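Learning from pairs of live faces and mask attacks, as CCL does, typically rests on a contrastive objective that pulls same-label pairs together and pushes live/spoof pairs apart. A simplified margin-based sketch is below; this is a generic contrastive loss, not the exact CCL objective, and the function name and `margin` parameter are assumptions:

```python
import numpy as np

def contrastive_loss(z1, z2, same_label, margin=1.0):
    """Margin-based contrastive loss over embedding pairs (illustrative).

    z1, z2: (N, D) embeddings of paired samples, e.g. a live face paired
        with either another live face or a high-fidelity mask counterpart.
    same_label: (N,) bool, True when the pair shares the live/spoof label.
    Same-label pairs are attracted; cross pairs are repelled beyond `margin`.
    """
    d = np.linalg.norm(z1 - z2, axis=1)                      # pairwise distances
    pos = same_label * d ** 2                                # pull positives together
    neg = (~same_label) * np.maximum(margin - d, 0.0) ** 2   # push negatives apart
    return float(np.mean(pos + neg))
```

In a context-aware setting, pair construction would additionally condition on context attributes such as subject, mask material and lighting, so the model contrasts samples that differ only in liveness.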