2023
DOI: 10.1109/access.2023.3290908

NSE-CATNet: Deep Neural Speech Enhancement Using Convolutional Attention Transformer Network


Cited by 7 publications (1 citation statement)
References: 64 publications
“…The study [31] proposes a multi-scale attention metric generative adversarial network to avoid the mismatch between the objective function used to train speech enhancement models and the evaluation metrics, and introduces an attention mechanism in the metric discriminator. Another study uses a convolutional attention transformer bottleneck in an encoder-decoder framework for speech enhancement and obtains better speech enhancement and automatic speech recognition results [32].…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
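
For illustration only, the following is a minimal sketch of the kind of architecture the citation statement describes: an encoder-decoder speech enhancement model with a convolutional attention transformer bottleneck. It assumes PyTorch, and all layer sizes, module names, and input shapes are hypothetical; it is not the authors' implementation of NSE-CATNet.

# Minimal sketch (not the authors' code): encoder -> convolutional attention
# transformer bottleneck -> decoder for magnitude-spectrogram enhancement.
# All dimensions and the masking formulation are illustrative assumptions.
import torch
import torch.nn as nn

class CATBottleneck(nn.Module):
    """Convolutional attention transformer block: a depthwise convolution for
    local context, multi-head self-attention for global context, then a
    feed-forward layer, each with a residual connection."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim), nn.GELU())

    def forward(self, x):  # x: (batch, frames, dim)
        x = x + self.conv(x.transpose(1, 2)).transpose(1, 2)  # local features
        x = x + self.attn(x, x, x, need_weights=False)[0]     # global features
        return x + self.ff(x)

class EncoderDecoderSE(nn.Module):
    """Maps noisy magnitude-spectrogram frames to a multiplicative mask and
    applies it to produce the enhanced magnitude estimate."""
    def __init__(self, n_freq=257, dim=128):
        super().__init__()
        self.encoder = nn.Linear(n_freq, dim)
        self.bottleneck = CATBottleneck(dim)
        self.decoder = nn.Linear(dim, n_freq)

    def forward(self, noisy_mag):  # noisy_mag: (batch, frames, n_freq)
        mask = torch.sigmoid(self.decoder(self.bottleneck(self.encoder(noisy_mag))))
        return mask * noisy_mag

# Usage example with random data standing in for STFT magnitudes:
# enhanced = EncoderDecoderSE()(torch.rand(2, 100, 257))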