2023
DOI: 10.36227/techrxiv.22824908.v1
Preprint

BaSFormer: A Balanced Sparsity Regularized Attention Network for Transformer

Abstract: Attention networks often make decisions relying solely on a few tokens, even when those tokens are not truly indicative of the meaning or intention of the full context. This can lead to over-fitting in Transformers and hinder their ability to generalize. Attention regularization and sparsity-based methods have been used to overcome this issue. However, these methods cannot guarantee that all tokens have sufficient receptive fields for global information inference. Thus, the impac…
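
The abstract refers to attention regularization as one existing remedy for attention that concentrates on only a few tokens. As a point of reference, here is a minimal, generic sketch of such a regularizer, not the BaSFormer method (which the truncated abstract does not detail): an entropy penalty on the softmax attention weights that discourages each query from relying on only a handful of tokens. The function name and the regularization weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def attention_with_entropy_regularizer(q, k, v, reg_weight=0.01, eps=1e-9):
    # Standard scaled dot-product attention.
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5      # (..., L_q, L_k)
    attn = F.softmax(scores, dim=-1)

    # Entropy of each query's attention distribution; low entropy means the
    # query attends to only a few tokens. Penalizing negative entropy nudges
    # the model toward broader receptive fields (a generic regularizer, not
    # the balanced-sparsity scheme proposed in the paper).
    entropy = -(attn * (attn + eps).log()).sum(dim=-1)  # (..., L_q)
    reg_loss = -reg_weight * entropy.mean()

    output = attn @ v
    return output, reg_loss

# Usage: add reg_loss to the task loss during training.
q = k = v = torch.randn(2, 4, 16, 32)   # (batch, heads, seq_len, head_dim)
out, reg = attention_with_entropy_regularizer(q, k, v)
loss = out.pow(2).mean() + reg          # placeholder task loss for illustration
loss.backward()
```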
