Data Augmentation and Loss Normalization for Deep Noise Suppression (2020)
DOI: 10.1007/978-3-030-60276-5_8

Cited by 58 publications (46 citation statements). References 17 publications.
“…This resulted in a 95% confidence interval (CI) of 0.03. We also provided the baseline noise suppressor [30] for the participants to benchmark their methods.…”
Section: Evaluation Methodology
confidence: 99%
“…As shown in Fig. 1, each training sequence, i.e. the predicted and target signals, is normalized by the active target utterance level to ensure balanced optimization for signal-level-dependent losses [11].…”
Section: Enhancement System and Training Objective
confidence: 99%
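The normalization step quoted above can be sketched in plain Python. The active-level estimate used here (RMS over frames whose energy exceeds a threshold) and all function names are illustrative assumptions, not the authors' exact recipe:

```python
import math


def active_rms(signal, frame_len=160, threshold=1e-4):
    """RMS over frames whose energy exceeds a threshold -- a simple
    stand-in for an active-utterance level estimate (assumption)."""
    active = []
    for i in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[i:i + frame_len]
        energy = sum(x * x for x in frame) / frame_len
        if energy > threshold:
            active.extend(frame)
    if not active:  # fall back to the whole signal if nothing is "active"
        active = list(signal)
    return math.sqrt(sum(x * x for x in active) / len(active))


def normalize_pair(predicted, target, frame_len=160):
    """Scale both signals by the active *target* level so that
    signal-level-dependent losses weight all utterances equally."""
    level = active_rms(target, frame_len)
    scale = 1.0 / max(level, 1e-12)
    return ([x * scale for x in predicted],
            [x * scale for x in target])
```

Scaling both signals by the same factor preserves their ratio (and hence any scale-invariant structure) while removing the per-utterance level dependence from the loss.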
“…The network proposed in [11], referred to as NSnet2, consists only of fully connected (FC) and gated recurrent unit (GRU) [18] layers in the format FC-GRU-GRU-FC-FC-FC. All FC layers are followed by rectified linear unit (ReLU) activations, except the last layer, which has a sigmoid activation to predict a constrained suppression gain.…”
Section: NSnet2
confidence: 99%
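As a rough illustration of the topology that quote describes, the following pure-Python sketch builds an FC-GRU-GRU-FC-FC-FC stack with ReLU hidden activations and a sigmoid output gain. This is not the authors' implementation: the dimensions, random initialization, omitted biases, and class names are placeholder assumptions.

```python
import math
import random


def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]


def rand_mat(rows, cols, rng):
    return [[rng.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]


def relu(v):
    return max(0.0, v)


def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))


class GRUCell:
    """Minimal GRU: update gate z, reset gate r, candidate state (biases omitted)."""
    def __init__(self, in_dim, hid_dim, rng):
        self.Wz, self.Uz = rand_mat(hid_dim, in_dim, rng), rand_mat(hid_dim, hid_dim, rng)
        self.Wr, self.Ur = rand_mat(hid_dim, in_dim, rng), rand_mat(hid_dim, hid_dim, rng)
        self.Wh, self.Uh = rand_mat(hid_dim, in_dim, rng), rand_mat(hid_dim, hid_dim, rng)
        self.h = [0.0] * hid_dim

    def step(self, x):
        z = [sigmoid(a + b) for a, b in zip(matvec(self.Wz, x), matvec(self.Uz, self.h))]
        r = [sigmoid(a + b) for a, b in zip(matvec(self.Wr, x), matvec(self.Ur, self.h))]
        rh = [ri * hi for ri, hi in zip(r, self.h)]
        hc = [math.tanh(a + b) for a, b in zip(matvec(self.Wh, x), matvec(self.Uh, rh))]
        self.h = [(1 - zi) * hi + zi * hci for zi, hi, hci in zip(z, self.h, hc)]
        return self.h


class FC:
    """Fully connected layer (no bias, for brevity) with an activation."""
    def __init__(self, in_dim, out_dim, rng, act):
        self.W = rand_mat(out_dim, in_dim, rng)
        self.act = act

    def __call__(self, x):
        return [self.act(v) for v in matvec(self.W, x)]


class NSNet2Like:
    """FC-GRU-GRU-FC-FC-FC stack; ReLU on hidden FCs, sigmoid on the last
    FC so each output is a suppression gain constrained to (0, 1)."""
    def __init__(self, n_bins=257, hid=400, rng=None):
        rng = rng or random.Random(0)
        self.fc_in = FC(n_bins, hid, rng, relu)
        self.gru1 = GRUCell(hid, hid, rng)
        self.gru2 = GRUCell(hid, hid, rng)
        self.fc1 = FC(hid, hid, rng, relu)
        self.fc2 = FC(hid, hid, rng, relu)
        self.fc_out = FC(hid, n_bins, rng, sigmoid)

    def step(self, features):
        h = self.fc_in(features)
        h = self.gru1.step(h)
        h = self.gru2.step(h)
        h = self.fc1(h)
        h = self.fc2(h)
        return self.fc_out(h)  # per-frequency-bin gain in (0, 1)
```

The sigmoid on the last layer is what makes the predicted mask a *constrained* suppression gain: applied multiplicatively to the noisy spectrum, it can only attenuate, never amplify, each frequency bin.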
“…A possible approach is to train or adapt the models on the noisy data [8]. This can be done either by collecting application-specific data or through data augmentation strategies [9]. However, it must be considered that gathering large noisy datasets is costly and time-consuming, while, in general, not all possible noisy conditions can be known a priori, which can make the data-augmentation-based approach infeasible.…”
Section: Introduction
confidence: 99%