2019
DOI: 10.48550/arxiv.1910.10762
Preprint

Analyzing ASR pretraining for low-resource speech-to-text translation

Cited by 2 publications (2 citation statements)
References 0 publications
“…The system is trained only on transcribed SLT data, with two auxiliary tasks: pretraining the encoder and decoder with ASR and textual MT respectively. Stoian et al (2019) compare the effects of pretraining on auxiliary ASR datasets of different languages and sizes, concluding that the WER of the ASR system is more predictive of the final translation quality than language relatedness. Anastasopoulos and Chiang (2018) make the line between pipeline and end-to-end approaches more blurred by using a multi-task learning setup with two-step decoding.…”
Section: End-to-end Spoken Language Translation
confidence: 99%
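
The encoder-transfer recipe this statement describes (pretrain on an auxiliary ASR task, then reuse the encoder for low-resource speech translation) can be sketched roughly as below. This is a minimal illustration assuming a PyTorch-style encoder-decoder; the class names, sizes, and training steps are illustrative assumptions, not the cited papers' actual implementations.

import torch
import torch.nn as nn

# Minimal sketch of ASR-encoder transfer for speech translation (ST).
# All module names and dimensions are illustrative assumptions.
class SpeechEncoder(nn.Module):
    def __init__(self, feat_dim=80, hidden=256, layers=3):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, num_layers=layers,
                           batch_first=True, bidirectional=True)

    def forward(self, feats):          # feats: (batch, time, feat_dim)
        out, _ = self.rnn(feats)
        return out                     # (batch, time, 2 * hidden)

class STModel(nn.Module):
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.encoder = SpeechEncoder(hidden=hidden)
        self.decoder = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, vocab_size)

# 1) Pretrain an identical encoder on a (possibly different-language) ASR task.
asr_encoder = SpeechEncoder()
# ... train asr_encoder with an ASR objective on the auxiliary corpus (not shown) ...

# 2) Copy the pretrained encoder weights into the ST model, then fine-tune
#    the whole model on the low-resource speech-translation data.
st_model = STModel(vocab_size=8000)
st_model.encoder.load_state_dict(asr_encoder.state_dict())

In the finding cited above, the WER of the auxiliary ASR model is more predictive of final translation quality than how related its language is to the translation source.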
“…We use speech features pre-trained on English, and first examine a high-resource within-language English-to-X ST setting (X denotes a non-English language), then we transfer the representations to 11 lower-resource X-to-English ST tasks. Transferring the parameters learned on a higher-resource ASR task has been shown to be an effective way to improve the performance and ameliorate the training of low-resource ST [16,17,18], thus we also study the interactions with selfsupervised representations and whether we can effectively combine both methods.…”
Section: Introduction
confidence: 99%
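
The combination mentioned here, self-supervised speech representations used alongside ASR-pretrained parameters, could look roughly like the following sketch. The wav2vec 2.0 checkpoint and the Hugging Face API are assumptions standing in for whatever English-pretrained features the citing paper actually uses.

import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Extract self-supervised speech representations (here: a wav2vec 2.0 model
# pretrained on English) to use as encoder inputs for an X-to-English ST task.
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
w2v = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
w2v.eval()

waveform = torch.randn(16000)  # 1 second of dummy 16 kHz audio
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    features = w2v(inputs.input_values).last_hidden_state  # (1, frames, 768)

# These frame-level features would replace filterbank inputs to the ST
# encoder, optionally combined with ASR-pretrained encoder initialization.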