2021
DOI: 10.48550/arxiv.2104.07951
Preprint

Optimal Size-Performance Tradeoffs: Weighing PoS Tagger Models

Abstract: Improvements in machine learning-based NLP performance are often presented with bigger models and more complex code. This presents a trade-off: better scores come at the cost of larger tools, and bigger models tend to require more resources during training and inference. We present multiple methods for measuring the size of a model and for comparing this with the model's performance. In a case study over part-of-speech tagging, we then apply these techniques to taggers for eight languages and present a novel analysis id…
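The kind of size-performance comparison the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's actual methodology: `model_size_bytes`, `size_performance_ratio`, and the toy unigram tagger are all assumptions introduced for the example.

```python
import pickle

def model_size_bytes(model) -> int:
    # One simple proxy for model size: bytes of the serialized object.
    # (The paper compares several size measures; this is one illustrative choice.)
    return len(pickle.dumps(model))

def size_performance_ratio(accuracy: float, size_bytes: int) -> float:
    # Hypothetical comparison metric: accuracy points per megabyte.
    # Not the paper's formulation, just one way to weigh size against score.
    return accuracy / (size_bytes / 1e6)

# Toy "tagger": a unigram tag lookup table standing in for a real PoS model.
toy_tagger = {"the": "DET", "dog": "NOUN", "runs": "VERB"}

size = model_size_bytes(toy_tagger)
ratio = size_performance_ratio(accuracy=92.5, size_bytes=size)
print(size, round(ratio, 1))
```

Under a metric like this, a slightly less accurate but much smaller tagger can dominate a larger one, which is the trade-off the paper's case study examines.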

Cited by 0 publications
References 3 publications