2022
DOI: 10.3390/app121910154

Global–Local Self-Attention Based Transformer for Speaker Verification

Abstract: Transformer models are now widely used for speech processing tasks due to their powerful sequence modeling capabilities. Previous work found an efficient way to model speaker embeddings by combining transformers with convolutional networks. However, traditional global self-attention mechanisms lack the ability to capture local information. To alleviate these problems, we propose a novel global–local self-attention mechanism. Instead of using local or global multi-head attentio…
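The abstract is cut off before the mechanism is fully described, so only the broad idea is recoverable here: combining global and local attention rather than relying on either alone. As a rough illustration only, below is a minimal PyTorch sketch of one plausible global–local multi-head self-attention layer. The class name, the window size, and the branch-combination strategy (a simple sum) are all assumptions for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch of a global-local multi-head self-attention layer.
# Mixing a full-sequence branch with a windowed local branch by summation
# is an assumption; the paper's exact design is not recoverable from the
# truncated abstract above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalLocalSelfAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8, window: int = 15):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.window = window  # local attention span in frames (assumed)
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim), e.g. frame-level speaker features
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape each to (batch, heads, time, head_dim)
        shape = (b, t, self.num_heads, self.head_dim)
        q, k, v = (z.view(shape).transpose(1, 2) for z in (q, k, v))

        scores = q @ k.transpose(-2, -1) / self.head_dim**0.5

        # global branch: unrestricted attention over all frames
        global_out = F.softmax(scores, dim=-1) @ v

        # local branch: mask out positions farther than `window` frames
        idx = torch.arange(t, device=x.device)
        local_mask = (idx[None, :] - idx[:, None]).abs() <= self.window
        local_scores = scores.masked_fill(~local_mask, float("-inf"))
        local_out = F.softmax(local_scores, dim=-1) @ v

        # combine branches (simple sum here; the paper may differ)
        out = (global_out + local_out).transpose(1, 2).reshape(b, t, d)
        return self.proj(out)
```

A layer like this would slot into a transformer encoder block in place of standard multi-head attention, letting each frame attend both to the whole utterance and to its immediate neighborhood before pooling into a speaker embedding.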

Cited by 4 publications (1 citation statement)
References 25 publications
“…Since its introduction by Vaswani et al. [16] in 2017, the transformer has achieved great success in the field of natural language processing. It has since spread to other fields such as image processing [17][18][19] and speech processing [20,21]. Currently, the popular ChatGPT [22,23] language model is also built on the transformer architecture.…”
Section: Transformer
Confidence: 99%