Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021)
DOI: 10.18653/v1/2021.iwpt-1.12

A Modest Pareto Optimisation Analysis of Dependency Parsers in 2021

Abstract: We evaluate three leading dependency parser systems from different paradigms on a small yet diverse subset of languages in terms of their accuracy-efficiency Pareto front. As we are interested in efficiency, we evaluate core parsers without pretrained language models (as these are typically huge networks and would constitute most of the compute time) or other augmentations that can be transversally applied to any of them. Biaffine parsing emerges as a well-balanced default choice, with sequence-labelling parsi…
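The abstract's accuracy-efficiency Pareto front can be illustrated with a short sketch. The parser names and (accuracy, speed) numbers below are invented for illustration only; they are not the paper's measurements.

```python
# Hypothetical (LAS %, tokens/sec) points -- made-up numbers, not from the paper.
parsers = {
    "biaffine": (93.1, 1200),
    "transition": (92.4, 2600),
    "seq-labelling": (91.8, 4100),
    "slow-variant": (91.0, 900),
}

def pareto_front(points):
    """Return the names of points not dominated in both accuracy and speed."""
    front = []
    for name, (acc, spd) in points.items():
        dominated = any(
            a >= acc and s >= spd and (a > acc or s > spd)
            for other, (a, s) in points.items()
            if other != name
        )
        if not dominated:
            front.append(name)
    return front

# "slow-variant" is dominated (less accurate AND slower than others),
# so it falls off the front; the other three trade accuracy for speed.
print(sorted(pareto_front(parsers)))  # ['biaffine', 'seq-labelling', 'transition']
```

This is the standard dominance criterion: a system stays on the front unless some other system is at least as good on both axes and strictly better on one.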

Cited by 3 publications (3 citation statements)

References 38 publications
“…Our incremental-decoder-only models with LLMs as encoders are competitive against the BiLSTM-based version of the baseline (BiLSTM encoder, biaffine decoder), surpassing it on 7 out of 12 languages. However, they are a few points behind with respect to a version of the biaffine parser using RoBERTa encodings (which can be taken as a state-of-the-art system), consistent with existing comparisons of sequence-labeling parsers and biaffine parsers (Anderson and Gómez-Rodríguez, 2021). Put together, this seems to suggest that the challenge of incrementality falls mostly on the encoding side.…”
Section: Methods (supporting)
confidence: 77%
“…solving the quadratic problem: $\min_{w \in \Delta_K} w^\top U w$. To this end, using some well-known equalities in the RKHS, we arrive at the following formula:…”
Section: Our Theoretical Development (mentioning)
confidence: 99%
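The excerpt above cites the paper while minimizing a quadratic form over the probability simplex $\Delta_K$. A minimal projected-gradient sketch of that problem (my own illustration, not the citing paper's algorithm):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex {w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]                      # sort descending
    css = np.cumsum(u)
    # Largest index rho with u[rho] * (rho+1) > css[rho] - 1 (standard criterion).
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def min_quadratic_on_simplex(U, steps=2000, lr=0.1):
    """Projected gradient descent for min_{w in Delta_K} w^T U w."""
    K = U.shape[0]
    w = np.full(K, 1.0 / K)                   # start at the simplex centre
    for _ in range(steps):
        grad = (U + U.T) @ w                  # gradient of w^T U w
        w = project_simplex(w - lr * grad)
    return w

# Toy diagonal PSD matrix: the minimizer is interior, with weights
# proportional to 1/u_i, i.e. w* = (2/11, 6/11, 3/11) here.
U = np.diag([3.0, 1.0, 2.0])
w = min_quadratic_on_simplex(U)
```

For a diagonal $U$, the KKT conditions give $w_i^* \propto 1/u_{ii}$ whenever all weights stay positive, which makes the toy case easy to verify by hand.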
“…On the other side, multi-objective optimization (MOO) [8] aims to optimize a set of objective functions and manifests itself in many real-world application problems, such as multi-task learning (MTL) [19,27], natural language processing [1], and reinforcement learning [10,25,24]. Leveraging the above insights, it is natural to ask: "Can we derive a probabilistic version of multi-objective … • Demonstrate our algorithm is readily applicable in the context of multi-task learning.…”
Section: Introduction (mentioning)
confidence: 99%