Interspeech 2018
DOI: 10.21437/interspeech.2018-2259

Online Incremental Learning for Speaker-Adaptive Language Models

Abstract: Voice control is a prominent interaction method on personal computing devices. While automatic speech recognition (ASR) systems are readily applicable for large audiences, there is room for further adaptation at the edge, i.e., locally on devices, targeted at individual users. In this work, we explore improving ASR systems over time through a user's own interactions. Our online learning approach for speaker-adaptive language modeling leverages a user's most recent utterances to enhance the speaker-dependent fea…
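The abstract is truncated, so the paper's exact method is not shown here. As a rough illustration of the general idea it describes (adapting a language model online from a user's most recent utterances), below is a minimal Python sketch of a classic cache-based interpolation between a fixed background LM and a rolling cache of recent user words. The class name, parameters, and the unigram cache formulation are illustrative assumptions, not the paper's implementation.

from collections import Counter, deque

class SpeakerAdaptiveLM:
    """Background LM interpolated with a rolling cache of the user's recent words.

    Illustrative sketch only; names and the unigram cache are assumptions.
    """

    def __init__(self, background_probs, vocab_size, cache_size=1000, lam=0.9):
        self.background = background_probs     # word -> P_background(word)
        self.vocab_size = vocab_size           # for uniform backoff on unseen words
        self.cache = deque(maxlen=cache_size)  # most recent words; oldest auto-evicted
        self.counts = Counter()                # word counts within the cache window
        self.lam = lam                         # interpolation weight on the background LM

    def update(self, utterance):
        # Incrementally fold one recognized utterance into the user cache.
        for word in utterance.lower().split():
            if len(self.cache) == self.cache.maxlen:
                self.counts[self.cache[0]] -= 1  # account for the word the deque evicts
            self.cache.append(word)
            self.counts[word] += 1

    def prob(self, word):
        # Linear interpolation: lam * P_background + (1 - lam) * P_cache.
        p_bg = self.background.get(word, 1.0 / self.vocab_size)
        p_cache = self.counts[word] / len(self.cache) if self.cache else 0.0
        return self.lam * p_bg + (1.0 - self.lam) * p_cache

# Example: the cache shifts probability mass toward the user's own vocabulary.
lm = SpeakerAdaptiveLM({"call": 0.01, "mom": 0.001}, vocab_size=10000, lam=0.8)
lm.update("call mom")
lm.update("call mom again")
print(lm.prob("mom"))  # higher than the 0.001 background estimate

The design choice in this sketch, a fixed-size window with linear interpolation, keeps updates constant-time per word, which is the kind of property an on-device, online learning setup would plausibly require.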

Cited by 1 publication (1 citation statement). References 17 publications.

“…Knowledge Neurons, ROME (Meng et al., 2022a), MEMIT, MEMIT CSK (Gupta et al., 2023a), PMET. Replay-based: Mix-Review (He et al., 2021b), ELLE (Qin et al., 2022), CT0 (Scialom et al., 2022). Architectural-based: K-Adapter (Wang et al., 2021), LoRA, ELLE (Qin et al., 2022), DEMix-DAPT (Gururangan et al., 2022), CPT (Ke et al., 2022), Lifelong-MoE (Chen et al., 2023a), ModuleFormer (Shen et al., 2023). Other: Temporal-LM (Dhingra et al., 2022), Lifelong Pre-training (Jin et al., 2022), CKL (Jang et al., 2022b), TemporalWiki (Jang et al., 2022a), TopicPrefix (Lee et al., 2022b), KILM (Xu et al., 2023a), SeMem (Peng et al., 2023b), CaMeLS (Hu et al., 2023), Gupta et al. (2023b). Continual Knowledge Editing: CMR, CL-plugin (Lee et al., 2022a), Transformer-Patcher, GRACE (Hartvigsen et al., 2023). Explicit (§2.2)…”
Section: A13 Continual Learning (mentioning)
confidence: 99%