2023
DOI: 10.1109/access.2023.3343738
Speech Enhancement Using Dynamic Learning in Knowledge Distillation via Reinforcement Learning

Shih-Chuan Chu,
Chung-Hsien Wu,
Tsai-Wei Su

Abstract: In recent years, most research on speech enhancement (SE) has applied different strategies to improve performance through deep neural network models. However, as performance improves, the memory and computational requirements of the models also increase, making them difficult to apply directly in edge computing. Therefore, various model compression and acceleration techniques are desired. This paper proposes a learning method that dynamically uses Knowledge Distillation (KD) to teach a s…
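The abstract describes compressing a large SE model by distilling it into a smaller student. As a rough illustration of the standard KD objective such work builds on (a generic sketch of temperature-scaled distillation in the style of Hinton et al., not the paper's RL-driven dynamic scheme; all function names and parameter values here are hypothetical):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature softens the distribution.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_targets,
                      temperature=2.0, alpha=0.5):
    """Weighted sum of a soft (teacher-matching) and hard (label) loss.

    alpha weighs the distillation term against ordinary cross-entropy;
    the paper's method would adjust this trade-off dynamically, whereas
    here it is a fixed hyperparameter.
    """
    # Soft term: KL divergence between teacher and student distributions
    # at temperature T, scaled by T^2 so gradients stay comparable.
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    soft = (temperature ** 2) * kl.mean()
    # Hard term: cross-entropy of the student against ground-truth labels.
    p = softmax(student_logits)
    ce = -np.log(p[np.arange(len(hard_targets)), hard_targets] + 1e-12).mean()
    return alpha * soft + (1.0 - alpha) * ce
```

For SE specifically, the "targets" would typically be regression outputs (e.g. spectral masks) rather than class labels, and the hard term a reconstruction loss, but the teacher-imitation structure is the same.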

Cited by 0 publications
References 48 publications