2021
DOI: 10.31219/osf.io/azfu2
Preprint

Optimal Size-Performance Tradeoffs: Weighing PoS Tagger Models

Abstract: Improvements in machine learning-based NLP performance are often presented with bigger models and more complex code. This presents a trade-off: better scores come at the cost of larger tools, and bigger models tend to require more resources during training and inference. We present multiple methods for measuring the size of a model and for comparing this with the model's performance. In a case study over part-of-speech tagging, we then apply these techniques to taggers for eight languages and present a novel analysis id…
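The size-versus-performance comparison the abstract describes can be sketched minimally. The dictionary "taggers", the pickled-byte size measure, and the toy evaluation sample below are all illustrative assumptions for this sketch, not the paper's actual models or metrics.

```python
import pickle

# Hypothetical toy "taggers": lookup tables of increasing size.
# These stand in for real PoS tagger models purely for illustration.
small_model = {"the": "DET", "dog": "NOUN"}
large_model = {"the": "DET", "dog": "NOUN", "runs": "VERB", "fast": "ADV"}

def model_size_bytes(model):
    """One possible size measure: length of the serialized model."""
    return len(pickle.dumps(model))

def accuracy(model, sample):
    """Fraction of tokens tagged correctly; unknown words default to NOUN."""
    correct = sum(model.get(tok, "NOUN") == tag for tok, tag in sample)
    return correct / len(sample)

# Tiny held-out sample of (token, gold tag) pairs.
sample = [("the", "DET"), ("dog", "NOUN"), ("runs", "VERB")]

for name, model in [("small", small_model), ("large", large_model)]:
    print(name, model_size_bytes(model), round(accuracy(model, sample), 2))
```

Plotting size against accuracy for a family of such models is one way to visualize the trade-off: the larger model scores higher here but costs more bytes.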

Cited by 1 publication
References 23 publications