2021 Edition of the Automatic Speaker Verification and Spoofing Countermeasures Challenge
DOI: 10.21437/asvspoof.2021-11

The Biometric Vox System for the ASVspoof 2021 Challenge

Cited by 11 publications (4 citation statements)
References 3 publications
“…As the test sets of ADD 2022 include unseen genuine and fake utterances that are not present in the training and development data, it is essential to develop CMs that are robust to out-of-domain data. Data augmentation has proven effective for improving the performance of anti-spoofing systems in cross-dataset settings in previous works [23][24][25][26][27][28][29][30][31][32][33]. Thus, we design a low-quality data augmentation strategy to address the unseen genuine and fake utterances.…”
Section: Data Augmentation (citation type: mentioning)
confidence: 99%
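
The excerpt does not spell out the low-quality augmentation pipeline. As a rough illustration only, the sketch below degrades 16 kHz mono waveforms with additive noise, narrowband resampling and clipping; every function name and parameter choice here is hypothetical rather than taken from the cited system.

```python
"""Illustrative low-quality augmentation for anti-spoofing training data.

A minimal sketch only; assumes 16 kHz mono waveforms as float NumPy arrays
in [-1, 1]. Not the pipeline of the cited work.
"""
import numpy as np
from scipy.signal import resample_poly

RNG = np.random.default_rng(0)

def add_noise(wav, snr_db):
    """Add white Gaussian noise at a given signal-to-noise ratio (dB)."""
    signal_power = np.mean(wav ** 2) + 1e-12
    noise_power = signal_power / (10 ** (snr_db / 10))
    return wav + RNG.normal(0.0, np.sqrt(noise_power), size=wav.shape)

def bandlimit(wav, sr=16000, low_sr=8000):
    """Simulate a narrowband channel by downsampling and upsampling back."""
    narrow = resample_poly(wav, low_sr, sr)
    return resample_poly(narrow, sr, low_sr)[: len(wav)]

def simulate_low_quality(wav):
    """Randomly chain degradations to mimic unseen low-quality conditions."""
    if RNG.random() < 0.7:
        wav = add_noise(wav, snr_db=RNG.uniform(5, 20))
    if RNG.random() < 0.5:
        wav = bandlimit(wav)
    if RNG.random() < 0.3:
        wav = np.clip(wav * RNG.uniform(2.0, 4.0), -1.0, 1.0)  # crude clipping distortion
    return wav.astype(np.float32)

# Example: degrade one second of dummy 16 kHz audio.
clean = RNG.uniform(-0.5, 0.5, 16000).astype(np.float32)
degraded = simulate_low_quality(clean)
```
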
“…To verify that this improvement is not due to random factors in neural network training (e.g., different random initial network weights), we conducted a statistical analysis of the results following [69]. The results suggest that the improvement is statistically significant and is hence unlikely to be caused by factors other than DA. Its effect is more pronounced when using the wav2vec 2.0 front-end, for which the EER decreases from 4.48% to 0.82%.…”
Section: Data Augmentation (citation type: mentioning)
confidence: 99%
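
The excerpt does not reproduce the statistical procedure of [69]. One common way to check that an EER improvement survives resampling is a paired bootstrap over evaluation trials, sketched below; the score arrays are synthetic stand-ins, not results from the cited work.

```python
"""Paired bootstrap check that an EER improvement is unlikely to be chance.

Illustrative only; not the exact test of reference [69].
"""
import numpy as np

def compute_eer(scores, labels):
    """Equal error rate; labels: 1 = bona fide (target), 0 = spoof (non-target)."""
    order = np.argsort(scores)
    scores, labels = scores[order], labels[order]
    n_pos, n_neg = (labels == 1).sum(), (labels == 0).sum()
    frr = np.cumsum(labels == 1) / n_pos            # bona fide rejected as threshold rises
    far = (n_neg - np.cumsum(labels == 0)) / n_neg  # spoofs still accepted above threshold
    idx = np.argmin(np.abs(far - frr))
    return float((far[idx] + frr[idx]) / 2.0)

def bootstrap_eer_diff(scores_a, scores_b, labels, n_boot=1000, seed=0):
    """Distribution of EER(system A) - EER(system B) over resampled trial sets."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)            # resample trials with replacement
        diffs[b] = compute_eer(scores_a[idx], labels[idx]) - compute_eer(scores_b[idx], labels[idx])
    return diffs

# Synthetic example (real use: per-trial scores of the systems with and without DA).
rng = np.random.default_rng(1)
labels = np.concatenate([np.ones(500, int), np.zeros(500, int)])
baseline = np.concatenate([rng.normal(1.0, 1.0, 500), rng.normal(0.0, 1.0, 500)])
augmented = np.concatenate([rng.normal(1.8, 1.0, 500), rng.normal(0.0, 1.0, 500)])
diffs = bootstrap_eer_diff(baseline, augmented, labels)
print(f"EER improvement positive in {(diffs > 0).mean():.1%} of bootstrap replicates")
```
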
“…Data augmentation (DA) is already known to reduce overfitting and hence to improve generalisation [3,4,6,7,27], and it is particularly effective in LA scenarios, in which there is substantial variability stemming from, e.g., encoding, transmission and acquisition devices [2]. We are interested in determining whether self-supervised learning is complementary to DA.…”
Section: Data Augmentation (citation type: mentioning)
confidence: 99%
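
To make the complementarity question concrete, here is a minimal sketch of on-the-fly augmentation feeding a self-supervised front-end. It assumes torchaudio's public WAV2VEC2_BASE bundle as a stand-in for the wav2vec 2.0 model, a toy two-layer back-end, and plain additive noise instead of the cited augmentation recipe; none of these choices come from the cited work.

```python
"""Sketch: augmented waveform -> frozen self-supervised front-end -> small back-end."""
import torch
import torchaudio

bundle = torchaudio.pipelines.WAV2VEC2_BASE        # public pretrained wav2vec 2.0 (16 kHz input)
frontend = bundle.get_model().eval()               # frozen self-supervised front-end

classifier = torch.nn.Sequential(                  # toy bona fide / spoof back-end
    torch.nn.Linear(768, 128), torch.nn.ReLU(), torch.nn.Linear(128, 2)
)

def add_noise(wav: torch.Tensor, snr_db: float = 10.0) -> torch.Tensor:
    """White noise at a fixed SNR, standing in for codec/channel augmentation."""
    noise = torch.randn_like(wav)
    scale = torch.sqrt(wav.pow(2).mean() / (noise.pow(2).mean() * 10 ** (snr_db / 10)))
    return wav + scale * noise

wav = torch.randn(1, 16000)                        # one second of dummy 16 kHz audio
with torch.no_grad():
    feats, _ = frontend.extract_features(add_noise(wav))
embedding = feats[-1].mean(dim=1)                  # average-pool the last layer: (1, 768)
logits = classifier(embedding)                     # bona fide vs. spoof logits
print(logits.shape)                                # torch.Size([1, 2])
```
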