Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations 2022
DOI: 10.18653/v1/2022.acl-demo.14
TS-ANNO: An Annotation Tool to Build, Annotate and Evaluate Text Simplification Corpora

Cited by 5 publications (7 citation statements); references 0 publications.
“…Yamaguchi et al. (2023) annotate simplifications of earlier models such as DRESS (Zhang and Lapata, 2017) and SUC (Sun et al., 2020) using a taxonomy of 62 error categories, but do not analyze the SOTA, MUSS, or LLMs. Stodden and Kallmeyer (2022) propose an interactive linguistic inspection interface, but this interface is not designed for human evaluation of model outputs and does not provide ratings for measuring performance.…”
Section: Related Work
confidence: 99%
“…The relativizer 'that' creates no syntactic or conceptual simplicity, but adds clarity as to the identity of the subject. Trivial changes have previously been described with finer granularity, including subcategories like abbreviation, filler words, compound segmentation, and anaphora (Stodden and Kallmeyer, 2022), or even changes in number/date formatting (Cardon et al., 2022), but we exclude these groups due to their sparsity and our focus on evaluating performance.…”
Section: A13 Lexical Simplification
confidence: 99%
“…These results have been further improved with a sentence-based approach (Ebling et al., 2022). Most recently, the first detailed surveys about German text simplification have been released (Anschütz et al., 2023; Stodden et al., 2023; Schomacker et al., 2023).…”
Section: Related Work
confidence: 99%