Interspeech 2019
DOI: 10.21437/interspeech.2019-1148

Privacy-Preserving Siamese Feature Extraction for Gender Recognition versus Speaker Identification

Abstract: In this paper we propose a deep neural-network-based feature extraction scheme with the purpose of reducing the privacy risks encountered in speaker classification tasks. For this we choose a challenging scenario where we wish to perform gender recognition but at the same time prevent an attacker who has intercepted the features from performing speaker identification. Our approach is to employ Siamese training in order to obtain a feature representation that minimizes the Euclidean distance between same-gender spea…
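The Siamese objective sketched in the abstract — pulling same-gender embeddings together in Euclidean distance while keeping other pairs apart — is typically realized as a contrastive loss. The following is a minimal illustrative sketch of that general idea, not the authors' implementation; the function name, margin value, and embedding dimensions are all assumptions made for the example.

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same_gender, margin=1.0):
    """Contrastive loss on one pair of extracted feature embeddings.

    emb_a, emb_b : 1-D numpy arrays (feature vectors from the extractor)
    same_gender  : 1 if the pair shares a gender label, else 0
    margin       : desired minimum distance for different-gender pairs
    """
    d = np.linalg.norm(emb_a - emb_b)      # Euclidean distance
    if same_gender:
        return d ** 2                      # pull same-gender pairs together
    return max(margin - d, 0.0) ** 2       # push others apart, up to the margin

# Same-gender pair at distance 5: loss grows with distance.
a, b = np.array([0.0, 0.0]), np.array([3.0, 4.0])
print(contrastive_loss(a, b, same_gender=1))   # → 25.0

# Different-gender pair already beyond the margin: zero loss.
print(contrastive_loss(a, b, same_gender=0))   # → 0.0
```

Training the extractor to drive this loss down makes same-gender utterances cluster, which preserves the gender label while (per the paper's goal) discarding finer-grained speaker identity from the features.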

Cited by 4 publications (1 citation statement)
References 9 publications
“…Several studies have attempted to tackle privacy protection in speech processing systems by extracting privacy-preserving features from speech [6], [7], extracting features from encrypted signals [8], augmenting models with adversarial representations [9], and applying score normalizations [10]. However, such feature- or model-level privacy protection techniques have a critical drawback, wherein the users cannot verify that their personal information is actually removed from the resultant features or models.…”
Section: Introduction
Confidence: 99%