2019
DOI: 10.1109/taslp.2019.2911164
Low Resource Keyword Search With Synthesized Crosslingual Exemplars

Cited by 14 publications (9 citation statements)
References 34 publications
“…We use the extended distance metric learning (EDML) network of [22] to learn the document representation X and the query alphabet Q with a distance function defined as…”
Section: Methods
confidence: 99%
“…The multilingual network is then finetuned on each test language and used to obtain posteriorgrams. Using the EDML network described in [22], we obtain 200-dimensional embeddings for DTW-based keyword search from the posteriorgrams.…”
Section: Dataset and Feature Description
confidence: 99%
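The citation above describes obtaining fixed-dimensional embeddings from posteriorgrams and searching for keywords with dynamic time warping (DTW). As a rough illustration of the DTW step only, here is a minimal sketch: `dtw_distance` and the Euclidean frame cost are generic textbook choices, not the exact pipeline, features, or distance function used in the cited work.

```python
import math

def dtw_distance(query, doc):
    """Dynamic time warping cost between two sequences of embedding
    vectors (lists of floats of equal dimension). Frame-level cost is
    Euclidean distance; standard insertion/deletion/match recursion."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    n, m = len(query), len(doc)
    INF = float("inf")
    # D[i][j] = minimal accumulated cost aligning query[:i] with doc[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(query[i - 1], doc[j - 1])
            D[i][j] = c + min(D[i - 1][j],      # skip a document frame
                              D[i][j - 1],      # skip a query frame
                              D[i - 1][j - 1])  # match frames
    return D[n][m]
```

In a keyword-search setting, the query sequence would be slid over windows of the document and low accumulated DTW cost would indicate a putative keyword hit; score normalization by path length is commonly applied but omitted here for brevity.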
“…This program focused on building fully automatic and noise-robust speech recognition and search systems in a very limited amount of time (e.g., one week) and with a limited amount of training data. The languages addressed in that program were low-resourced, such as Cantonese, Pashto, Tagalog, Turkish, Vietnamese, Swahili, Tamil and so on, and significant research has been carried out [13, 61, 147–159].…”
Section: Comparison With Previous STD International Evaluations
confidence: 99%
“…In the MHA mechanism, the query vector and a set of key-value pairs are each linearly mapped multiple times [36]–[37]. Each mapping yields a different attention distribution, whose result is computed by the scaled dot-product attention mechanism [35].…”
Section: Multi-Head Attention Mechanism
confidence: 99%
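The scaled dot-product attention referenced in the quote computes softmax(QKᵀ/√d)·V. The following is a minimal pure-Python sketch of that single-head computation, under the assumption of plain list-of-list vectors; the multiple linear mappings of the full multi-head mechanism are not shown.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def scaled_dot_product_attention(Q, K, V):
    """Q: list of query vectors; K, V: lists of key/value vectors.
    Returns one output vector per query, a weight-averaged mix of V
    with weights softmax(q . k / sqrt(d)) over the keys."""
    d = len(K[0])  # key dimension used for the 1/sqrt(d) scaling
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

In multi-head attention, this computation runs once per head on separately projected copies of Q, K, and V, and the per-head outputs are concatenated and projected again.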