2022
DOI: 10.3390/app122312055

Compressing BERT for Binary Text Classification via Adaptive Truncation before Fine-Tuning

Abstract: Large-scale pre-trained language models such as BERT have brought substantial performance gains to text classification. However, their large size can make fine-tuning and inference prohibitively slow. To alleviate this, various compression methods have been proposed; most of these methods, however, consider only the reduction of inference time, ignoring the significant increases in training time they incur, and are thus even more resource-consuming. In this article, we focus on lottery ticket extraction for the BERT a…
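The title describes compressing BERT by truncating it before fine-tuning rather than after. As a rough illustration of the general idea only (not the paper's adaptive truncation criterion, which the abstract does not detail), the sketch below keeps the first k encoder layers of a pre-trained BERT ahead of fine-tuning on a binary classification task. It assumes the HuggingFace transformers library; the checkpoint name and the fixed value of k are placeholder choices.

    # Minimal sketch: truncate BERT's encoder stack before fine-tuning.
    # Assumes HuggingFace `transformers`; the paper's adaptive choice of
    # depth is not reproduced here -- `k` is simply fixed by hand.
    import torch
    from transformers import BertForSequenceClassification, BertTokenizer

    model_name = "bert-base-uncased"   # placeholder checkpoint
    k = 6                              # keep the first 6 of 12 encoder layers

    tokenizer = BertTokenizer.from_pretrained(model_name)
    model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)

    # Keep only the bottom k transformer layers, then record the new depth
    # in the config so saving and reloading stay consistent.
    model.bert.encoder.layer = model.bert.encoder.layer[:k]
    model.config.num_hidden_layers = k

    # The truncated model is used exactly like the full one; fine-tuning
    # would proceed from here with a standard training loop.
    inputs = tokenizer("an example sentence", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits   # shape (1, 2): binary classification
    print(logits)

Because the top layers are discarded before any task-specific training, both fine-tuning and inference operate on the smaller network, which is consistent with the abstract's concern about compression methods that speed up inference but lengthen training.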


Cited by 0 publications
References 33 publications
