2020
DOI: 10.48550/arxiv.2005.14435
Preprint

Sub-Band Knowledge Distillation Framework for Speech Enhancement

Xiang Hao, Shixue Wen, Xiangdong Su, et al.

Abstract: In single-channel speech enhancement, methods based on full-band spectral features have been widely studied. However, only a few methods pay attention to non-full-band spectral features. In this paper, we explore a knowledge distillation framework based on sub-band spectral mapping for single-channel speech enhancement. Specifically, we divide the full frequency band into multiple sub-bands and pre-train an elite-level sub-band enhancement model (teacher model) for each sub-band. These teacher models are dedica…
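The following is a minimal sketch of the idea the abstract describes: split the magnitude spectrogram into sub-bands, run a frozen pre-trained teacher on each sub-band, and train a single student on the full band against both the clean target and the teachers' outputs. Since the abstract is truncated, everything here is an assumption for illustration: the model class `SubbandEnhancer`, the sub-band count `NUM_SUBBANDS`, the equal-width split, and the `alpha` loss weighting are not the paper's exact setup.

```python
# Hypothetical sketch of sub-band knowledge distillation for speech
# enhancement. Names and hyperparameters are illustrative assumptions,
# not the architecture or loss from the paper.
import torch
import torch.nn as nn

NUM_SUBBANDS = 4  # assumed; the paper divides the full band into multiple sub-bands


def split_subbands(spec: torch.Tensor, num_subbands: int = NUM_SUBBANDS):
    """Split a (batch, freq, time) magnitude spectrogram into equal sub-bands.

    Assumes the number of frequency bins is divisible by num_subbands.
    """
    return torch.chunk(spec, num_subbands, dim=1)


class SubbandEnhancer(nn.Module):
    """Stand-in spectral-mapping network, used here for both teachers and student."""

    def __init__(self, freq_bins: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(freq_bins, hidden), nn.ReLU(), nn.Linear(hidden, freq_bins)
        )

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, freq, time) -> map each frame's spectrum, keep the shape
        return self.net(spec.transpose(1, 2)).transpose(1, 2)


def distillation_loss(student_out, teachers, noisy_spec, clean_spec, alpha=0.5):
    """Combine clean-target supervision with per-sub-band teacher imitation.

    alpha balances the two terms; the paper's actual weighting is not shown here.
    """
    mse = nn.MSELoss()
    loss = (1 - alpha) * mse(student_out, clean_spec)
    noisy_bands = split_subbands(noisy_spec)
    student_bands = split_subbands(student_out)
    for teacher, noisy_b, student_b in zip(teachers, noisy_bands, student_bands):
        with torch.no_grad():  # teachers are frozen, pre-trained per sub-band
            target_b = teacher(noisy_b)
        loss = loss + alpha * mse(student_b, target_b) / len(teachers)
    return loss
```

In this reading, each teacher would first be pre-trained on its own sub-band, after which the full-band student is optimized with the combined loss above; the original paper's exact loss terms and network designs may differ.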


Cited by 1 publication (1 citation statement), published in 2022.
References 21 publications (22 reference statements).
“…In addition, a few studies have exploited the advantages of KD for SE. In the TF domain, [21] proposed a sub-band KD framework to improve a single student network using multiple teachers trained for each sub-band. [22] designed a two-stage training distillation method and a co-worker-based network to improve the performance of SE.…”
Section: Introduction
Confidence: 99%