Interspeech 2019
DOI: 10.21437/interspeech.2019-1440

Autoencoder-Based Semi-Supervised Curriculum Learning for Out-of-Domain Speaker Verification

Cited by 13 publications (8 citation statements); References 0 publications.
“…The demographic structure of the data from online smart speakers provides a possible explanation for this improvement. A preliminary investigation on a random sample of 100 online devices suggests at least 70% of the data has high speaker labeling quality [12].…”
Section: Experiments and Results
confidence: 99%
“…There are some approaches that take advantage of large-scale unlabelled training data. Curriculum learning is one of them (Marchi et al., 2018; Ranjan and Hansen, 2018; Zheng et al., 2019). It starts by training a DNN model on a labeled corpus and continuously introduces unlabeled, out-of-domain, text-independent speaker samples.…”
Section: Unlabeled Data
confidence: 99%
“…It starts by training a DNN model on a labeled corpus and continuously introduces unlabeled, out-of-domain, text-independent speaker samples. Both LSTM-based (Marchi et al., 2018) and TDNN-based (Zheng et al., 2019) systems have been proposed that outperform baseline methods.…”
Section: Unlabeled Data
confidence: 99%
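The curriculum scheme these excerpts describe — train on a labeled corpus, then progressively fold in unlabeled out-of-domain samples as the model grows confident about them — can be sketched roughly as below. This is a toy illustration, not the paper's implementation: the centroid scorer, cosine-similarity pseudo-labeling, synthetic embeddings, and threshold schedule are all assumptions standing in for the DNN systems the citing works actually use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy speaker embeddings: each speaker lies near a fixed direction in 8-d space.
# (Assumption for illustration; real systems use learned x-vector/d-vector embeddings.)
DIM = 8
base = {"spk_a": np.eye(DIM)[0], "spk_b": np.eye(DIM)[1]}

def sample(speaker, scale):
    """Draw a noisy embedding for the given speaker."""
    return base[speaker] + rng.normal(scale=scale, size=DIM)

def train_centroids(embeddings, labels):
    """Stand-in for DNN training: one mean embedding (centroid) per speaker."""
    return {lab: np.mean([e for e, l in zip(embeddings, labels) if l == lab], axis=0)
            for lab in set(labels)}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def curriculum_round(centroids, pool, threshold):
    """Pseudo-label pool items whose best centroid similarity clears the threshold."""
    accepted, remaining = [], []
    for e in pool:
        best = max(centroids, key=lambda lab: cosine(e, centroids[lab]))
        if cosine(e, centroids[best]) >= threshold:
            accepted.append((e, best))
        else:
            remaining.append(e)
    return accepted, remaining

# Seed labeled in-domain data; the unlabeled pool is noisier ("out-of-domain").
y = ["spk_a", "spk_a", "spk_b", "spk_b"]
X = [sample(s, 0.05) for s in y]
pool = [sample(s, 0.2) for s in ("spk_a", "spk_b", "spk_a", "spk_b", "spk_a")]

# Curriculum: relax the acceptance threshold each round, retraining on the grown set,
# so easy (high-confidence) samples enter training before hard ones.
for threshold in (0.9, 0.8, 0.7):
    centroids = train_centroids(X, y)
    accepted, pool = curriculum_round(centroids, pool, threshold)
    for e, lab in accepted:
        X.append(e)
        y.append(lab)
    print(f"threshold={threshold}: accepted {len(accepted)}, {len(pool)} left in pool")
```

The easy-to-hard ordering is what makes this "curriculum" rather than plain self-training: early rounds only admit samples the current model scores confidently, and each retraining step expands what the next round can absorb.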
“…Semi-supervised learning utilizes limited labeled data in tandem with abundant unlabeled data to achieve higher performance. Stronger performance has been observed when unlabeled data is incorporated [6][7][8].…”
Section: Introduction
confidence: 99%