ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp40776.2020.9053112

Meta Learning for End-To-End Low-Resource Speech Recognition

Abstract: In this paper, we proposed to apply a meta-learning approach to low-resource automatic speech recognition (ASR). We formulated ASR for different languages as different tasks and meta-learned the initialization parameters from many pretraining languages to achieve fast adaptation on an unseen target language, via the recently proposed model-agnostic meta-learning (MAML) algorithm. We evaluated the proposed approach using six languages as pretraining tasks and four languages as target tasks. Preliminary results showed …
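As a rough illustration of the method the abstract describes, here is a minimal first-order MAML pretraining loop over several language tasks in PyTorch. Everything in it (SpeechModel, sample_batch, the language list, all hyperparameters) is a hypothetical stand-in for the paper's actual end-to-end ASR setup, not the authors' code; the first-order variant is used for simplicity.

```python
import copy
import random

import torch
import torch.nn as nn

class SpeechModel(nn.Module):
    """Toy stand-in for an end-to-end ASR model (hypothetical)."""
    def __init__(self, feat_dim=80, vocab=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, vocab)
        )

    def forward(self, x):
        return self.net(x)

def sample_batch(lang):
    """Hypothetical loader: one batch of (features, frame labels) for `lang`."""
    x = torch.randn(16, 80)           # 16 frames of 80-dim filterbank features
    y = torch.randint(0, 100, (16,))  # toy frame-level targets
    return x, y

LANGS = ["lang_a", "lang_b", "lang_c", "lang_d", "lang_e", "lang_f"]
model = SpeechModel()
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
inner_lr = 1e-2

for step in range(1000):
    meta_opt.zero_grad()
    for lang in random.sample(LANGS, 4):       # one meta-batch of language tasks
        learner = copy.deepcopy(model)         # clone the shared initialization
        params = list(learner.parameters())
        # Inner loop: one SGD step on the language's support batch.
        x, y = sample_batch(lang)
        grads = torch.autograd.grad(loss_fn(learner(x), y), params)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= inner_lr * g
        # Outer loss on a fresh query batch from the same language.
        xq, yq = sample_batch(lang)
        loss_fn(learner(xq), yq).backward()    # grads accumulate in `learner`
        # First-order MAML: carry the adapted model's grads back to the init.
        for p, lp in zip(model.parameters(), params):
            p.grad = lp.grad if p.grad is None else p.grad + lp.grad
    meta_opt.step()
```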

Cited by 76 publications (54 citation statements)
References 21 publications (21 reference statements)
“…Few-shot learning: We consider the problem of adversarial image classification as meta-learning-based few-shot classification. There are three data sets [12-16, 43]: a training set, a support set, and a test set. Note that the support and test sets share the same label space, but the training set has its own label space that is disjoint from the support/test label space.…”
Section: Problem Setup
confidence: 99%
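For readers unfamiliar with this three-set structure, a minimal sketch of partitioning class labels so that the training label space is disjoint from the support/test label space; the helper name and class counts are hypothetical, not from the cited work.

```python
import random

def split_label_spaces(all_classes, n_train_classes, seed=0):
    """Partition labels so the training label space is disjoint
    from the support/test label space."""
    classes = list(all_classes)
    random.Random(seed).shuffle(classes)
    train_classes = set(classes[:n_train_classes])
    eval_classes = set(classes[n_train_classes:])  # shared by support and test
    assert train_classes.isdisjoint(eval_classes)
    return train_classes, eval_classes

train_cls, eval_cls = split_label_spaces(range(100), n_train_classes=64)
```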
“…In the meta-learning setting, there are three data sets: a training set, a support set, and a test set [12-16, 43]. On the ImageNet-A and ImageNet-R data sets, we choose 30 images from each of the 200 classes to construct a training set; the remaining samples of these 200 classes are used as the support and test sets, split in a 1:1 ratio. On the ImageNet-C and Tiny ImageNet-C data sets, we randomly choose 20 images of each class (75 classes in total) for training and testing.…”
Section: Experimental Settings
confidence: 99%
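A sketch of the per-class split this statement describes, with hypothetical helper names: a fixed number of images per class goes to meta-training, and the remainder of each class is divided 1:1 between the support and test sets.

```python
from collections import defaultdict

def split_by_class(samples, n_train_per_class=30):
    """samples: iterable of (image_id, class_id) pairs."""
    by_class = defaultdict(list)
    for img, cls in samples:
        by_class[cls].append(img)
    train, support, test = [], [], []
    for cls, imgs in by_class.items():
        train += [(i, cls) for i in imgs[:n_train_per_class]]
        rest = imgs[n_train_per_class:]
        half = len(rest) // 2                 # 1:1 support/test ratio
        support += [(i, cls) for i in rest[:half]]
        test += [(i, cls) for i in rest[half:]]
    return train, support, test
```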
“…speakers was used to train the meta-learner, and finally achieved a lower WER in DNN and TDNN models. Hsu et al. [95] proposed MetaASR, which learns from six source tasks via the Model-Agnostic Meta-Learning (MAML) algorithm to obtain a good initialization of the shared encoder, enabling quick fine-tuning on four target tasks. The results showed that MetaASR outperformed MultiASR on the four target languages.…”
Section: Meta Learning
confidence: 99%
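To make the quick fine-tuning step concrete, a hedged sketch of adapting the meta-learned initialization to one low-resource target language. It reuses the hypothetical SpeechModel / sample_batch helpers from the pretraining sketch above and is not the authors' implementation.

```python
import copy

import torch
import torch.nn as nn

def adapt(meta_init: nn.Module, target_lang: str, steps=100, lr=1e-3):
    model = copy.deepcopy(meta_init)       # keep the MAML init intact
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x, y = sample_batch(target_lang)   # small target-language batch
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model

adapted = adapt(model, "target_lang")      # e.g., one of the four target tasks
```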