Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/P19-1198

Unsupervised Neural Text Simplification

Abstract: The paper presents a first attempt towards unsupervised neural text simplification that relies only on unlabeled text corpora. The core framework is composed of a shared encoder and a pair of attentional decoders, crucially assisted by discrimination-based losses and denoising. The framework is trained using unlabeled text collected from an English Wikipedia dump. Our analysis (both quantitative and qualitative, involving human evaluators) on public test data shows that the proposed model can perform text simplification…
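
The framework description in the abstract maps onto a compact sketch. The PyTorch fragment below is a minimal, illustrative rendering of a shared encoder feeding two attentional decoders (one per style); all module names, dimensions, and the loss wiring are assumptions made here for exposition, not the authors' released implementation.

```python
# Minimal sketch of the shared-encoder / dual attentional-decoder idea from
# the abstract. Module names, sizes, and loss wiring are illustrative
# assumptions, not the authors' released implementation.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)

    def forward(self, tokens):                     # tokens: (batch, seq)
        states, _ = self.rnn(self.embed(tokens))   # (batch, seq, 2*hid_dim)
        return states

class AttentionalDecoder(nn.Module):
    """One decoder per style (simple vs. complex), both reading the shared encoder."""
    def __init__(self, vocab_size, emb_dim=256, hid_dim=512, enc_dim=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.enc_proj = nn.Linear(enc_dim, hid_dim)
        self.attn = nn.MultiheadAttention(hid_dim, num_heads=1, batch_first=True)
        self.out = nn.Linear(2 * hid_dim, vocab_size)

    def forward(self, prev_tokens, enc_states):
        dec, _ = self.rnn(self.embed(prev_tokens))      # (batch, tgt, hid)
        mem = self.enc_proj(enc_states)                 # (batch, src, hid)
        ctx, _ = self.attn(dec, mem, mem)               # attend over the source
        return self.out(torch.cat([dec, ctx], dim=-1))  # (batch, tgt, vocab)

# Denoising objective (sketch): corrupt a sentence (e.g. word drop/shuffle),
# encode it, and train the decoder of the *same* style to reconstruct the
# clean original; a style discriminator over encoder states would supply the
# discrimination-based loss. Both the noise function and the discriminator
# are omitted here, so the same tokens serve as input and target for brevity.
encoder = SharedEncoder(vocab_size=10_000)
dec_simple = AttentionalDecoder(vocab_size=10_000)
src = torch.randint(0, 10_000, (2, 7))
logits = dec_simple(src, encoder(src))
loss = nn.CrossEntropyLoss()(logits.view(-1, 10_000), src.view(-1))
```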

Cited by 62 publications (72 citation statements)
References 37 publications (44 reference statements)
“…While unsupervised approaches outperform supervised approaches in terms of coverage, they have the disadvantage of only performing one-to-one substitutions and cannot deal with phrases. They also tend to change the meaning of the sentence and have problems dealing with ambiguous words [30] [31] [32]. However, recently, unsupervised approaches have been improved in this regard by allowing more detailed context information to be obtained [33].…”
Section: NLP Approaches to Lexical Simplification
confidence: 99%
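
The one-to-one limitation quoted above is easy to see in a toy embedding-based substituter: each word is replaced in isolation, so phrases and sentence context never enter the decision. Everything below (the vectors, frequency table, and thresholds) is fabricated for illustration and is not any cited system.

```python
# Toy embedding-based one-to-one lexical substitution. Vectors and the
# frequency table are fabricated; the point is the mechanism, not the data.
import numpy as np

embeddings = {
    "utilize": np.array([0.90, 0.10, 0.30]),
    "use":     np.array([0.88, 0.12, 0.28]),
    "employ":  np.array([0.85, 0.20, 0.35]),
    "banana":  np.array([0.10, 0.90, 0.50]),
}
word_freq = {"utilize": 3_000, "use": 900_000, "employ": 40_000, "banana": 60_000}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def simplify_word(word, min_freq_gain=2.0, min_sim=0.95):
    """Return the most similar word that is substantially more frequent."""
    candidates = [
        (cosine(embeddings[word], vec), cand)
        for cand, vec in embeddings.items()
        if cand != word
        and word_freq[cand] > min_freq_gain * word_freq[word]
        and cosine(embeddings[word], vec) >= min_sim
    ]
    return max(candidates)[1] if candidates else word

# One-to-one: every token is handled in isolation, with no phrase handling
# and no disambiguation from the surrounding sentence.
sentence = "we utilize a banana".split()
print(" ".join(simplify_word(w) if w in embeddings else w for w in sentence))
# -> "we use a banana"
```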
“…Zhang et al. (2017) perform a purely lexical simplification of individual words, which must be included in the output set [76]. Most TS solutions use subtypes of recurrent neural networks [5,20,31,39,57,60,61,68,75,76]. Recurrent neural networks (RNNs) are prevalent in language processing because each output depends on the previous inputs and, in bidirectional variants, on the subsequent inputs as well.…”
Section: The Neural Sequence-to-Sequence Approach
confidence: 99%
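
The context-dependence point in the statement above can be checked directly: a unidirectional GRU's output at a position sees only earlier inputs, while a bidirectional one also sees later inputs. The snippet below is a toy demonstration with arbitrary sizes, not code from any cited work.

```python
# Unidirectional vs. bidirectional GRU: perturb a *future* input position and
# observe which outputs at position 0 change. Sizes are arbitrary.
import torch
import torch.nn as nn

x = torch.randn(1, 6, 32)                      # (batch, seq_len, features)
uni = nn.GRU(32, 64, batch_first=True)
bi = nn.GRU(32, 64, batch_first=True, bidirectional=True)

uni_out, _ = uni(x)                            # (1, 6, 64): left context only
bi_out, _ = bi(x)                              # (1, 6, 128): left + right context

x2 = x.clone()
x2[0, 5] += 1.0                                # change the last time step only
print(torch.allclose(uni(x2)[0][0, 0], uni_out[0, 0]))  # True: t=0 unaffected
print(torch.allclose(bi(x2)[0][0, 0], bi_out[0, 0]))    # False: t=0 affected
```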
“…4.4.4 Unsupervised Architectures. Surya et al. (2019) proposed an unsupervised approach for developing a simplification system. Their motivation was to design an architecture that could be exploited to train sentence simplification (SS) models for languages or domains that do not have large resources of parallel original-simplified instances.…”
Section: Figure
confidence: 99%
“…Model architecture for UNTS. Extracted from Surya et al. (2019). 4.4.5 Simplification as Sequence Labeling.…”
Section: Figure
confidence: 99%