2023
DOI: 10.14569/ijacsa.2023.0140181
Convolutional Transformer based Local and Global Feature Learning for Speech Enhancement

Abstract: Speech enhancement (SE) is an important method for improving speech quality and intelligibility in noisy environments, where the received speech is severely distorted by noise. An efficient speech enhancement system relies on accurately modelling the long-term dependencies of noisy speech. Deep learning has benefited greatly from the use of transformers, where long-term dependencies can be modelled more efficiently with multi-head attention (MHA) by using sequence similarity. Transformers frequently outperform recurrent…
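The abstract's point that MHA models long-term dependencies through sequence similarity can be illustrated with a minimal self-attention sketch. The snippet below is an illustration, not the paper's implementation: it uses PyTorch's built-in nn.MultiheadAttention, and the sequence length, batch size, embedding size, and head count are arbitrary placeholder values.

```python
import torch
import torch.nn as nn

# Placeholder sizes (assumptions, not taken from the paper):
# 100 frames of a noisy utterance, batch of 1, 64-dim features, 4 heads.
seq_len, batch, d_model, n_heads = 100, 1, 64, 4

mha = nn.MultiheadAttention(embed_dim=d_model, num_heads=n_heads)

# Stand-in for frame-level features of a noisy speech sequence,
# shaped (seq_len, batch, d_model) per the default batch_first=False.
x = torch.randn(seq_len, batch, d_model)

# Self-attention: query, key, and value are all the same sequence, so
# each frame attends to every other frame via dot-product similarity.
out, attn_weights = mha(x, x, x)

print(out.shape)           # torch.Size([100, 1, 64])
print(attn_weights.shape)  # torch.Size([1, 100, 100])
```

Because every frame attends to every other frame, the 100x100 attention map captures the sequence-wide similarities, which is how MHA represents long-term dependencies without the step-by-step recursion of a recurrent network.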

Cited by 3 publications (1 citation statement)
References 35 publications
“…A study [28] proposes a cooperative attention-based SE method by combining local and global attention in a self-adaptive way. A convolutional Transformer neural network is proposed to learn the local and global features [29]. A low-complexity Swin Transformer is proposed in [30] for SE.…”
Section: Introduction
Confidence: 99%