2020
DOI: 10.1109/access.2020.2966874
Transfer Learning Algorithm for Enhancing the Unlabeled Speech

Abstract: To improve the generalization ability of speech enhancement algorithms for unlabeled noisy speech, a speech enhancement transfer learning model based on the feature-attention multi-kernel maximum mean discrepancy (FA-MK-MMD) is proposed. To obtain a representation of the shared subspace (the part of the features extracted by the shared encoder that relates to clean speech) between the source domain (speech with known noise and labels) and the target domain (speech with unknown noise and no labels), the algorithm takes MK-MMD …
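The MK-MMD term mentioned in the abstract measures the discrepancy between source- and target-domain feature distributions, so the shared encoder can be trained to make them indistinguishable. The sketch below is a minimal NumPy illustration of a biased multi-kernel MMD² estimate using a sum of Gaussian kernels; the bandwidth values, feature dimensions, and the `gaussian_kernel`/`mk_mmd` names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth):
    """Gaussian (RBF) kernel matrix between the rows of a and b."""
    sq_dists = (np.sum(a ** 2, axis=1)[:, None]
                + np.sum(b ** 2, axis=1)[None, :]
                - 2.0 * a @ b.T)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mk_mmd(source_feats, target_feats, bandwidths=(0.5, 1.0, 2.0, 4.0)):
    """Biased empirical multi-kernel MMD^2 between source and target features.

    The kernel is an equally weighted sum of Gaussian kernels with the given
    bandwidths (an illustrative choice, not the paper's configuration).
    """
    mmd2 = 0.0
    for bw in bandwidths:
        k_ss = gaussian_kernel(source_feats, source_feats, bw)
        k_tt = gaussian_kernel(target_feats, target_feats, bw)
        k_st = gaussian_kernel(source_feats, target_feats, bw)
        mmd2 += k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()
    return mmd2 / len(bandwidths)

# Toy usage: features from a hypothetical shared encoder for a source batch
# (labeled noisy speech) and a target batch (unlabeled noisy speech).
rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(32, 128))   # 32 frames, 128-dim features
target = rng.normal(0.3, 1.0, size=(32, 128))   # slightly shifted distribution
print(mk_mmd(source, target))
```

In a transfer-learning setup of this kind, such a discrepancy value would typically be added to the enhancement loss so that minimizing it pulls the two feature distributions together.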

Cited by 1 publication (1 citation statement).
References 45 publications.
“…In [63], the authors proposed the use of a teacher-student learning strategy to adapt an SE model to unlabeled noisy speech signal. Furthermore, the FA-MK-MMD approach was proposed in [64] to train a neural network model from the labeled source domain to extract the shared representation to enhance the unlabeled input. Although the effectiveness of these SE approaches has been verified, their performance in mobile applications is yet to be confirmed.…”
Section: Introduction
Mentioning confidence: 99%