2018
DOI: 10.1016/j.patcog.2017.11.018

Discriminative binary feature learning and quantization in biometric key generation

Cited by 41 publications (22 citation statements)
References 38 publications
“… 38 The basic topology and computation plans of FL via the aggregation-server and peer-to-peer approaches are presented in Figure 3A-F, respectively. Although FL mainly serves privacy preservation, with aggregation-server approaches ensuring that participants remain unknown to each other, trained models can still retain some information under certain conditions 39 . To overcome privacy leakage in the FL framework, differential privacy 40,41 or encrypted-data learning approaches 42 have been suggested.…”
Section: FL System
confidence: 99%
“…Specifically, they adopted a transformation method based on shuffling to generate the revocable bio-key. Anees et al [40] presented a bio-key generation method based on binary feature extraction and quantization. However, these methods do not consider the intra-user variations, which makes it difficult to generate stable bio-keys.…”
Section: Key Generation Scheme Based on Biometrics
confidence: 99%
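The binary feature quantization mentioned in the excerpt above can be sketched as a simple thresholding step. This is a generic illustration, not the exact quantizer of Anees et al. [40]: the function name, the default median threshold, and the feature values are all assumptions made for the example.

```python
from statistics import median

def binarize_features(features, threshold=None):
    """Quantize real-valued biometric features into a bit string.

    Illustrative sketch only: each feature is compared against a
    threshold (by default the sample median, so that bits come out
    roughly balanced and each carries close to one bit of entropy).
    Not the exact method of the cited paper.
    """
    if threshold is None:
        threshold = median(features)  # hypothetical default choice
    return [1 if f > threshold else 0 for f in features]

# Toy 8-dimensional feature vector, quantized with an explicit threshold
feats = [0.2, 1.5, -0.3, 0.9, 2.1, -1.2, 0.4, 0.0]
key_bits = binarize_features(feats, threshold=0.0)  # -> [1, 1, 0, 1, 1, 0, 1, 0]
```

A stable key additionally requires handling intra-user variation (e.g. with error-correcting codes), which is exactly the gap the excerpt points out.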
“…Other than these two major lines of research, the use of cryptographic tools has also been proposed for privacy-preserving machine learning [19,33,34,35,36,37]. Several differential privacy-based solutions have been proposed to prevent the GAN attack, in which a noise signal is added to gradients during the learning phase in order to achieve differential privacy, and hence prevent the GAN attack [38,39].…”
Section: Related Work
confidence: 99%
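The noise-addition step described in the last excerpt (adding noise to gradients during learning to achieve differential privacy) can be sketched in the style of DP-SGD's Gaussian mechanism: clip each gradient to bound its sensitivity, then add Gaussian noise scaled to that bound. The parameter names (`clip_norm`, `noise_multiplier`) are illustrative assumptions, not taken from refs [38,39].

```python
import math
import random

def dp_noisy_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Sketch of a differentially private gradient update.

    Clip the gradient's L2 norm to `clip_norm` (bounding each
    example's influence), then add Gaussian noise with standard
    deviation `noise_multiplier * clip_norm` (the Gaussian
    mechanism). Illustrative only, not the cited works' exact scheme.
    """
    rng = rng or random.Random(0)
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]          # L2 norm now <= clip_norm
    sigma = noise_multiplier * clip_norm
    return [g + rng.gauss(0.0, sigma) for g in clipped]

# Example: a gradient with norm 5 is clipped to norm 1 before noising
noisy = dp_noisy_gradient([3.0, 4.0], clip_norm=1.0)
```

With `noise_multiplier=0.0` the function reduces to plain gradient clipping, which makes the clipping behavior easy to verify in isolation.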