MAMGAN: Multiscale attention metric GAN for monaural speech enhancement in the time domain
2023 | DOI: 10.1016/j.apacoust.2023.109385

Cited by 9 publications (1 citation statement)
References 53 publications
“…Another study [30] proposes a cooperative attention-based speech enhancement model that combines local and non-local attention operations in a learnable and self-adaptive manner. The study [31] proposes a multi-scale attention metric generative adversarial network that introduces an attention mechanism into the metric discriminator to avoid the mismatch between the objective function used to train speech enhancement models and the evaluation metric. Another study uses a convolutional attention transformer bottleneck in an encoder-decoder framework for speech enhancement and obtains better SE and automatic speech recognition results [32].…”
Section: Introduction
confidence: 99%