Interspeech 2019
DOI: 10.21437/Interspeech.2019-1837
The VOiCES from a Distance Challenge 2019

Cited by 54 publications (64 citation statements)
References 22 publications
“…We use two subsets of this data corpus, the development portion of the VOiCES challenge data [24] referred to as V19-dev and the evaluation portion referred to as V19-eval. V19-dev is used for probing experiments as discussed in Section 4.2, as it contains annotations for 200 speaker labels, 12 microphone locations and 4 noise types (none, babble, television, music).…”
Section: Datasets
“…The experimental setup for the controlled conditions is shown in Fig 2. As mentioned in Section 3, the green circles, numbered 1-12, represent microphones located at various distances from the main loudspeaker. In all experiments, the enrolment utterances were collected from source data used to playback from the loudspeaker, consistent with [24]. We choose a different set of test utterances depending on the experiment being performed.…”
Section: Setup
“…Similar to mic-tel adaptation work we added noise to the target domain data (SITW dev rev) with the same procedure mentioned in section 3.2. For evaluation, we also used a recent far field VOiCES corpus [25,26]. We did not add noise to the evaluation corpora.…”
Section: Source and Target Domain Datasets
“…This makes the deployment of Speaker Verification (SV) systems challenging. To address this, several challenges were organized recently such as NIST Speaker Recognition Evaluation (SRE) 2019, VOiCES from a Distance Challenge [2], and VoxCeleb Speaker Recognition Challenge (VoxSRC) 2019. We consider acoustic feature enhancement as a solution to this problem.…”
Section: Introduction