Interspeech 2021
DOI: 10.21437/interspeech.2021-354
Collaborative Training of Acoustic Encoders for Speech Recognition

Abstract: On-device speech recognition requires training models of different sizes for deploying on devices with various computational budgets. When building such different models, we can benefit from training them jointly to take advantage of the knowledge shared between them. Joint training is also efficient since it reduces the redundancy in the training procedure's data handling operations. We propose a method for collaboratively training acoustic encoders of different sizes for speech recognition. We use a sequence…
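
The abstract describes training acoustic encoders of different sizes jointly, so that the models share knowledge and each batch of training data is handled only once per step. Since the full method is truncated above, the PyTorch sketch below is only an illustration of that general idea under simplified assumptions: two toy encoders (a "full-size" and an "on-device" one), a CTC objective standing in for the paper's sequence loss, and synthetic features. All class names, layer sizes, and hyperparameters here are invented for the example and are not taken from the paper.

# Minimal sketch (not the authors' exact recipe): two acoustic encoders of
# different sizes trained jointly on the same batches, so the data pipeline
# runs once per step and a single optimizer updates both models.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy acoustic encoder: a bidirectional LSTM followed by a projection."""
    def __init__(self, feat_dim: int, hidden: int, vocab: int):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, vocab)

    def forward(self, x):                      # x: (batch, time, feat_dim)
        out, _ = self.lstm(x)
        return self.proj(out).log_softmax(-1)  # (batch, time, vocab)

feat_dim, vocab, time_steps, batch = 80, 32, 100, 8
large = Encoder(feat_dim, hidden=512, vocab=vocab)   # "full-size" model
small = Encoder(feat_dim, hidden=128, vocab=vocab)   # "on-device" model

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
opt = torch.optim.Adam(list(large.parameters()) + list(small.parameters()), lr=1e-3)

for step in range(3):                                 # tiny synthetic training loop
    feats = torch.randn(batch, time_steps, feat_dim)  # stand-in for log-mel features
    targets = torch.randint(1, vocab, (batch, 20))    # stand-in for token labels
    in_lens = torch.full((batch,), time_steps, dtype=torch.long)
    tgt_lens = torch.full((batch,), 20, dtype=torch.long)

    # The shared batch feeds both encoders; their losses are summed so one
    # backward pass and one optimizer step update both models.
    loss = sum(
        ctc(enc(feats).transpose(0, 1), targets, in_lens, tgt_lens)
        for enc in (large, small)
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"step {step}: joint loss {loss.item():.3f}")

Summing the per-encoder losses over a shared batch is what removes the redundant data handling: one loading pass and one optimizer step serve both models. The paper's actual sequence objective and any knowledge-sharing terms between the encoders would replace the plain sum used here.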

Cited by 6 publications
References 32 publications