2019
DOI: 10.48550/arxiv.1909.06678
Preprint

An Investigation Into On-device Personalization of End-to-end Automatic Speech Recognition Models

Cited by 8 publications (8 citation statements) · References 0 publications

“…For our method to work, we need to assume that S < min{d, C}, i.e., the method works when ∆W is aggregated from the weight updates computed from a reasonable number of samples. This is justifiable, since when running distributed training on private data on edge devices, it is a good practice to use a small batch size [17]. For sequence models that output natural language, d and C are usually in the order of thousands or tens of thousands, which is often much larger than S.…”
Section: Proposed Methods
Citation type: mentioning
confidence: 99%
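
To make the quoted rank argument concrete: for a linear layer with weight W of shape (C, d), the per-sample gradient is an outer product and therefore rank 1, so an update ΔW aggregated from S samples has rank at most S. Below is a minimal numpy sketch of this bound; the dimensions are illustrative assumptions, not values from the cited paper.

```python
import numpy as np

# Sketch of the rank bound quoted above: for a linear layer with weight W
# of shape (C, d), the per-sample gradient is an outer product g x^T
# (rank 1), so an update aggregated from S samples has rank <= S.
# Dimensions below are illustrative, not taken from the cited paper.
rng = np.random.default_rng(0)
d, C, S = 300, 1200, 8            # input dim, output dim, samples in the batch

delta_W = np.zeros((C, d))
for _ in range(S):
    x = rng.standard_normal(d)    # layer input for one sample
    g = rng.standard_normal(C)    # gradient w.r.t. the layer's output
    delta_W += np.outer(g, x)     # rank-1 per-sample contribution

print(np.linalg.matrix_rank(delta_W))   # prints 8, i.e. <= S < min(d, C)
```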
“…Sim et al. [29] found that personalizing a model with a user's contact list improves named entity recall from 2.4% to 73.5%, a massive improvement. Personalizing models to individual users with speech disorders improves word error rates by 64% relative [30]. Personalization can make a huge difference in the quality of the recognition, particularly for groups or domains that are underrepresented in the training data.…”
Section: Personalization
Citation type: mentioning
confidence: 99%
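
The contact-list result above comes from on-device fine-tuning in [29]; a related and simpler way to exploit a contact list is shallow-fusion-style rescoring of the recognizer's n-best hypotheses. The sketch below is a hedged illustration of that general idea, not the method of [29]; the contact names, scores, and bonus weight are all invented for the example.

```python
# Hedged sketch of contextual biasing by n-best rescoring: boost ASR
# hypotheses that contain names from the user's contact list. This is an
# illustration of contact-list personalization in general, NOT the exact
# method of Sim et al. [29], which fine-tunes model weights on device.

def rescore(nbest, contacts, bonus=2.0):
    """nbest: list of (hypothesis_text, log_score) pairs from the ASR model."""
    rescored = []
    for text, score in nbest:
        hits = sum(name.lower() in text.lower() for name in contacts)
        rescored.append((text, score + bonus * hits))
    return max(rescored, key=lambda pair: pair[1])[0]

contacts = ["Aruna", "Jakob"]                        # hypothetical contact list
nbest = [("call a runa", -1.0), ("call aruna", -1.3)]
print(rescore(nbest, contacts))                      # -> "call aruna"
```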
“…Federated learning (FL) enables deep learning models to learn from decentralized data without compromising privacy [28,40]. Various research directions have been explored to investigate this promising field, such as security analysis for FL [5,6,41], efficient communication for FL [3,25,30], personalization for FL [26,39,52], etc.…”
Section: Related Work
Citation type: mentioning
confidence: 99%
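
For readers unfamiliar with Federated Averaging (FedAvg), the canonical protocol underlying the FL work surveyed in the quote above, here is a minimal numpy sketch of one training loop: each client updates a copy of the global model on its private data, and the server averages the client models weighted by data size. The toy model (a single least-squares layer) and all numbers are illustrative assumptions, not taken from any cited paper.

```python
import numpy as np

# Minimal FedAvg sketch: clients train locally on private data; the server
# averages client models weighted by the amount of local data. Everything
# here (one linear model, toy data, three clients) is illustrative.

def local_step(w, X, y, lr=0.1):
    """One gradient step of least-squares regression on a client's data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

rng = np.random.default_rng(0)
d = 5
w_global = np.zeros(d)
clients = [(rng.standard_normal((n, d)), rng.standard_normal(n))
           for n in (20, 50, 30)]                    # three clients' private data

for _ in range(10):                                  # communication rounds
    updates, sizes = [], []
    for X, y in clients:                             # each client trains locally
        updates.append(local_step(w_global, X, y))
        sizes.append(len(y))
    # server step: average client models, weighted by local data size
    w_global = np.average(updates, axis=0, weights=sizes)
```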