Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-3009
EASSE: Easier Automatic Sentence Simplification Evaluation

Abstract: We introduce EASSE, a Python package aiming to facilitate and standardise automatic evaluation and comparison of Sentence Simplification (SS) systems. EASSE provides a single access point to a broad range of evaluation resources: standard automatic metrics for assessing SS outputs (e.g. SARI), word-level accuracy scores for certain simplification transformations, reference-independent quality estimation features (e.g. compression ratio), and standard test data for SS evaluation (e.g. TurkCorpus). Finally, EASSE…
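To make the "reference-independent quality estimation" idea concrete, the sketch below computes a character-level compression ratio, one of the features the abstract mentions. This is an illustrative implementation under our own assumptions, not EASSE's actual code; the function name and the character-level definition are hypothetical choices (the ratio could equally be token-based).

```python
def compression_ratio(original: str, simplified: str) -> float:
    """Character-level compression ratio of a simplification.

    Values below 1.0 mean the simplified sentence is shorter than the
    original. A rough sketch of the kind of reference-independent
    quality-estimation feature EASSE exposes; not EASSE's own code.
    """
    if not original:
        raise ValueError("original sentence must be non-empty")
    return len(simplified) / len(original)


# Example: a shorter paraphrase yields a ratio below 1.0.
print(compression_ratio("The feline reclined upon the rug.",
                        "The cat sat on the rug."))
```

Because the feature needs no reference simplifications, it can be computed for any system output, which is what makes it useful alongside reference-based metrics such as SARI.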

Cited by 63 publications (54 citation statements). References 14 publications.
“…Wubben et al. (2012); Wang et al. (2016). What is significant for TS are the results for automatic measures in Xu et al. (2016), followed by Alva-Manchego et al. (2019). We draw our inspiration from the authors, along with the results of Xu et al. (2012), in our measure propositions.…”
Section: Style Transfer Methods
confidence: 79%
“…Although some studies consider human judgments on grammaticality, meaning preservation and simplicity the most reliable method for evaluating the sentence simplification task, it is a common practice to use automatic metrics [Alva-Manchego et al 2019]. Following the WikiSplit work, we adopted the BLEU (Bilingual Evaluation Understudy) method [Papineni et al 2002] to validate the results.…”
Section: Results
confidence: 99%
“…In our experiments, we used the implementations of these metrics available in the EASSE package for automatic sentence simplification evaluation (Alva-Manchego et al., 2019). 5 We computed all the scores at sentence level as in the experiment by Xu et al. (2016), where they compared sentence-level correlations of FKGL, BLEU and SARI with human ratings.…”
Section: Methods
confidence: 99%
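The last citation statement compares sentence-level FKGL, BLEU and SARI scores. As a minimal illustration of one of these, the sketch below computes the Flesch-Kincaid Grade Level using its published formula; the syllable counter is a crude vowel-group heuristic introduced here for illustration only, and none of this is EASSE's implementation.

```python
import re


def count_syllables(word: str) -> int:
    # Crude heuristic: count runs of consecutive vowels, at least 1 per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def fkgl(sentences: list[str]) -> float:
    """Flesch-Kincaid Grade Level (sketch).

    FKGL = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

    A simplified sketch for illustration; real implementations (including
    EASSE's) use more careful tokenisation and syllabification.
    """
    words = [w for s in sentences for w in re.findall(r"[A-Za-z]+", s)]
    if not words:
        raise ValueError("input must contain at least one word")
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)


# Short, monosyllabic sentences can score below grade 0.
print(fkgl(["The cat sat on the mat."]))
```

Scores can be computed per sentence (a one-element list) or over a whole corpus, which is what allows the sentence-level correlation analysis the quoted study describes.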