Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021
DOI: 10.18653/v1/2021.acl-long.88
Explainable Prediction of Text Complexity: The Missing Preliminaries for Text Simplification

Abstract: Text simplification reduces the language complexity of professional content for accessibility purposes. End-to-end neural network models have been widely adopted to directly generate the simplified version of input text, usually functioning as a black box. We show that text simplification can be decomposed into a compact pipeline of tasks to ensure the transparency and explainability of the process. The first two steps in this pipeline are often neglected: 1) to predict whether a given piece of text needs to be…
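The abstract's first pipeline step — deciding whether a piece of text needs simplification at all — can be illustrated with a minimal sketch. This is not the paper's model: the surface features and thresholds below are illustrative assumptions only, standing in for a trained binary classifier.

```python
# Illustrative stand-in (NOT the paper's method) for the pipeline's first
# step: a binary decision on whether a sentence needs simplification.
# Features and thresholds are assumed for demonstration purposes.

def complexity_features(sentence: str) -> dict:
    """Compute simple surface features often used as complexity proxies."""
    words = sentence.split()
    return {
        "n_words": len(words),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
    }

def needs_simplification(sentence: str,
                         max_words: int = 20,
                         max_avg_word_len: float = 6.0) -> bool:
    """Flag the sentence if it exceeds either (assumed) complexity threshold."""
    f = complexity_features(sentence)
    return f["n_words"] > max_words or f["avg_word_len"] > max_avg_word_len

print(needs_simplification("The cat sat on the mat."))  # → False
```

In the decomposed pipeline the paper describes, a learned classifier would replace these hand-set thresholds, but the interface — text in, simplify/don't-simplify decision out — is the same.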

Cited by 13 publications (16 citation statements)
References 37 publications
“…Several existing works have attempted to use a classifier to determine which rewrite operation should be performed on an input at the sentence level. Applying a sentence-level binary classifier as an initial step to predict whether simplification should be performed has been found to yield improved SARI results, reducing conservatism and spurious transformations (Garbacea et al., 2021).…”
Section: Alva
confidence: 99%
“…Accuracy on the silver test set (98%) is much higher than in previous works: Scarton and Specia (2018), among others, achieve mean accuracies of 51% and 70% for a similar 4-class task. Garbacea et al. (2021), who train only a binary (simp, no-simp) classifier, achieve 81% accuracy.…”
Section: Classification Model
confidence: 99%
“…Previous studies criticize existing systems for being opaque, suboptimal, and semantically compromising (Garbacea et al., 2021; Maddela et al., 2021; Stajner, 2021). …and DMLMTL (Guo et al., 2018), taken from Garbacea et al. (2021).…”
Section: Introduction
confidence: 99%
“…These are semantically close notions that are sometimes used interchangeably and may generate ambiguity: for example, Naderi et al. (2019) use the phrase plain language to refer to language used for people with disabilities and differentiate it from easy language, which they use for language for people with generic reading difficulties, contrary to Baumert (2016) and Maaß (2020). Moreover, even the current conceptualization of text simplification does not satisfy some specialists (Garbacea et al., 2021).…”
Section: Introduction
confidence: 99%