Biometric data is user-identifiable, and methods that use biometrics for authentication have therefore been widely researched. Biometric cryptosystems allow a user to derive a cryptographic key from noisy biometric data and to perform cryptographic tasks such as authentication or encryption. The fuzzy extractor is a prominent biometric cryptosystem. However, it has a drawback: to derive the correct key, a user must either store user-specific helper data or receive it online from the server over an additional trusted channel. In this paper, we present a new biometric-based key derivation function (BB-KDF) to address this issue. In our BB-KDF, users derive cryptographic keys solely from their own biometric data; they need no other user-specific helper information. We introduce a security model for the BB-KDF, construct the BB-KDF, and prove its security in this model. We then propose an authentication protocol based on the BB-KDF. Finally, we give experimental results analyzing the performance of the BB-KDF, showing that it is computationally efficient and can be deployed on many different kinds of devices.
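To illustrate the helper-data drawback the abstract refers to, below is a minimal sketch of the classic code-offset fuzzy extractor (Dodis et al.), using a simple 3-repetition code as the error-correcting code. This is not the paper's BB-KDF; it only shows why the standard fuzzy extractor must store per-user helper data to reproduce the key from a noisy reading. The function names and bit-level encoding are illustrative choices.

```python
import hashlib
import secrets

REP = 3  # repetition factor; corrects 1 bit-flip per block


def _encode(bits):
    """Repetition-code encode: repeat each secret bit REP times."""
    return [b for b in bits for _ in range(REP)]


def _decode(codeword):
    """Majority-vote decode each block of REP bits."""
    return [int(sum(codeword[i:i + REP]) >= (REP + 1) // 2)
            for i in range(0, len(codeword), REP)]


def gen(w):
    """Enrollment: from biometric bits w, output (key, helper).
    The helper data must be stored and presented again at key derivation."""
    k = [secrets.randbelow(2) for _ in range(len(w) // REP)]  # random secret
    helper = [wi ^ ci for wi, ci in zip(w, _encode(k))]       # code offset
    key = hashlib.sha256(bytes(k)).hexdigest()
    return key, helper


def rep(w_noisy, helper):
    """Reproduction: recover the same key from a noisy reading w_noisy,
    but only with the user-specific helper data in hand."""
    c_noisy = [wi ^ hi for wi, hi in zip(w_noisy, helper)]
    return hashlib.sha256(bytes(_decode(c_noisy))).hexdigest()
```

With a reading that differs in at most one bit per 3-bit block, `rep` returns the enrolled key; without `helper`, the key cannot be reproduced. This storage/transport requirement is exactly what the proposed BB-KDF removes.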
In this study, we proposed a multitask network architecture that estimates three attributes, namely landmarks, head pose, and occlusion, from a face image. The proposed architecture is a 2-stacked hourglass network with three task-specific heads. We also designed three auxiliary components for the network. The first is the feature pyramid fusion module, which plays a crucial role in aggregating contextual information from various receptive fields. The second is the interlevel occlusion-aware fusion module, which explicitly fuses intermediate occlusion predictions between subnetworks. The third is the gimbal-lock-free head pose head, which outputs a rotation matrix derived from a 6D rotation representation. We conducted an ablation study of these auxiliary components to determine their impact on the network. Additionally, we introduced a landmark heatmap scaling approach to avoid falling into local minima. We trained the proposed network on the 300W-LP dataset for landmarks and head pose and on the C-CM dataset for occlusion. We then fine-tuned the network on the 300W or WFLW dataset in place of 300W-LP for the landmark task. This 2-stage training method improves landmark detection accuracy as well as that of the other tasks. In the experiments, we assessed the proposed network on eight test datasets using task-specific metrics.
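The gimbal-lock-free head pose head mentioned above relies on the continuous 6D rotation representation of Zhou et al. (CVPR 2019), in which a network predicts two 3D vectors and a valid rotation matrix is recovered by Gram-Schmidt orthogonalization. Below is a minimal NumPy sketch of that standard mapping; the function name is illustrative, and the paper's actual head implementation may differ in details.

```python
import numpy as np


def rotmat_from_6d(x):
    """Map a 6D rotation representation (two predicted 3D vectors) to a
    valid 3x3 rotation matrix via Gram-Schmidt orthogonalization."""
    a1, a2 = np.asarray(x[:3], float), np.asarray(x[3:], float)
    b1 = a1 / np.linalg.norm(a1)           # first column: normalize a1
    b2 = a2 - np.dot(b1, a2) * b1          # remove a1-component from a2
    b2 = b2 / np.linalg.norm(b2)           # second column: normalize
    b3 = np.cross(b1, b2)                  # third column: cross product
    return np.stack([b1, b2, b3], axis=1)  # columns [b1 b2 b3]
```

Because this mapping is continuous over its domain (unlike Euler angles, which suffer from gimbal lock, or quaternions, which are not a continuous representation for neural regression), it is a common choice for rotation-regression heads.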