2022
DOI: 10.1016/j.specom.2022.10.003
Analysis of trade-offs between magnitude and phase estimation in loss functions for speech denoising and dereverberation

Cited by 9 publications (5 citation statements) · References 34 publications
“…The second benefit is that speech distortion and noise reduction can be better balanced when compared with the raw magnitude without compression, resulting in improved speech quality. This may be the case because compression reduces the dynamic range of the magnitude values, facilitating the training process (Luo et al., 2022). Compression of the magnitude of the noisy spectrum can be expressed as a power law, where $\alpha_{\text{cp}} \in (0,\,1]$ is the compression factor.…”
Section: Deep Learning Methods
Confidence: 99%
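The excerpt above describes compressing the magnitude of the noisy spectrum with a factor $\alpha_{\text{cp}} \in (0,\,1]$, but the equation itself was lost in extraction. A minimal sketch of one common power-law form, applied to a complex STFT while preserving phase (function and parameter names are illustrative, not from the cited paper):

```python
import numpy as np

def compress_magnitude(stft, alpha_cp=0.3):
    """Power-law compression of an STFT's magnitude, keeping the phase.

    alpha_cp in (0, 1]; alpha_cp = 1 leaves the spectrum unchanged.
    Assumed form: |Y|^alpha_cp * exp(j * angle(Y)).
    """
    magnitude = np.abs(stft)
    phase = np.angle(stft)
    return magnitude ** alpha_cp * np.exp(1j * phase)

# Dynamic range shrinks: a bin with magnitude 100 maps to 100**0.3 ≈ 3.98,
# while a bin with magnitude 0.01 maps to 0.01**0.3 ≈ 0.25.
spec = np.array([100.0 + 0.0j, 0.01 + 0.0j])
compressed = compress_magnitude(spec, alpha_cp=0.3)
```

Smaller `alpha_cp` compresses the dynamic range more aggressively, which is the property the quote credits with easing training.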
“…When the complex-spectrum-based MSE loss function is used, the phase estimation error is reduced but spectral magnitude distortion increases. The trade-off between spectral magnitude distortion and phase recovery has been called the “compensation effect” (Wang et al., 2021; Luo et al., 2022). To reduce both magnitude and phase distortion, a combined loss function has been proposed, formulated as a linear combination of the two losses, where $\alpha_{\text{com}}$ is the linear combination coefficient.…”
Section: Deep Learning Methods
Confidence: 99%
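The combined loss equation in this excerpt was also lost in extraction; only the coefficient $\alpha_{\text{com}}$ survives. A hedged sketch of one plausible convex-combination form (the exact weighting used in the cited paper is an assumption here, as are all names):

```python
import numpy as np

def combined_loss(S, S_hat, alpha_com=0.5):
    """Assumed convex combination of magnitude MSE and complex-spectrum MSE:
    alpha_com * L_mag + (1 - alpha_com) * L_complex.

    S, S_hat: complex STFT arrays (clean target and network estimate).
    """
    l_mag = np.mean((np.abs(S) - np.abs(S_hat)) ** 2)   # magnitude-only term
    l_complex = np.mean(np.abs(S - S_hat) ** 2)         # real+imag (phase-aware) term
    return alpha_com * l_mag + (1.0 - alpha_com) * l_complex
```

Setting `alpha_com` near 1 weights magnitude fidelity; setting it near 0 weights phase-aware complex-spectrum fidelity, which is exactly the trade-off the compensation effect describes.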
“…Loss function: To partially avoid the compensation effect between the magnitude and RI constraints in [28] and [29], we use the linear combination of magnitude and complex loss:…”
Section: Magnitude Decoder
Confidence: 99%
“…To partially avoid the compensation effect between the magnitude and RI constraints in [28] and [29], we use the linear combination of magnitude and complex loss:

$$\mathcal{L} = \frac{\mathcal{L}_{\text{mag}} + \mathcal{L}_{ri}}{2}$$

$$\mathcal{L}_{\text{mag}} = \mathbb{E}_{S_{\text{mag}},\,\hat{S}_{\text{mag}}}\!\left[ \left\| S_{\text{mag}} - \hat{S}_{\text{mag}} \right\|^2 \right]$$

$$\mathcal{L}_{ri} = \mathbb{E}_{S_r,\,\hat{S}_r}\!\left[ \left\| S_r - \hat{S}_r \right\|^2 \right] + \mathbb{E}_{S_i,\,\hat{S}_i}\!\left[ \left\| S_i - \hat{S}_i \right\|^2 \right]$$
…”
Section: Loss Function
Confidence: 99%
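The equal-weight loss in this excerpt, L = (L_mag + L_ri) / 2, can be sketched directly in NumPy, approximating the expectations by means over time-frequency bins (variable names are illustrative):

```python
import numpy as np

def magnitude_ri_loss(S, S_hat):
    """Equal-weight average of magnitude MSE and real/imaginary MSE,
    L = (L_mag + L_ri) / 2, as in the excerpt above.

    S, S_hat: complex STFT arrays (clean target and estimate); the
    expectation is approximated by a mean over bins.
    """
    l_mag = np.mean((np.abs(S) - np.abs(S_hat)) ** 2)      # L_mag
    l_ri = (np.mean((S.real - S_hat.real) ** 2)
            + np.mean((S.imag - S_hat.imag) ** 2))         # L_ri
    return (l_mag + l_ri) / 2.0

# Worked example: target 0, estimate 3+4j gives
# L_mag = (0 - 5)^2 = 25, L_ri = 9 + 16 = 25, so L = 25.
loss = magnitude_ri_loss(np.array([0.0 + 0.0j]), np.array([3.0 + 4.0j]))
```

Keeping both terms penalizes magnitude error and phase-carrying real/imaginary error simultaneously, which is how this loss partially avoids the compensation effect.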