Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1435
Parallel Iterative Edit Models for Local Sequence Transduction

Abstract: We present a Parallel Iterative Edit (PIE) model for the problem of local sequence transduction arising in tasks like grammatical error correction (GEC). Recent approaches are based on the popular encoder-decoder (ED) model for sequence-to-sequence learning. The ED model auto-regressively captures full dependency among output tokens but is slow due to sequential decoding. The PIE model does parallel decoding, giving up the advantage of modelling full dependency in the output, yet it achieves accuracy competitive…
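The core idea in the abstract — predict token-level edits in parallel, then re-run the model on its own output until the sequence stops changing — can be sketched as below. This is a simplified illustration, not the paper's implementation: the edit labels (`KEEP`/`DELETE`/`REPLACE_x`/`APPEND_x`), the `predict` callback, and the round limit are all assumptions; the actual PIE model uses a richer edit space including transformation edits.

```python
# Sketch of parallel iterative edit decoding (hypothetical edit labels,
# not the paper's exact edit space).

def apply_edits(tokens, labels):
    """Apply one parallel round of per-token edit labels."""
    out = []
    for tok, lab in zip(tokens, labels):
        if lab == "KEEP":
            out.append(tok)
        elif lab == "DELETE":
            continue  # drop the token
        elif lab.startswith("REPLACE_"):
            out.append(lab[len("REPLACE_"):])
        elif lab.startswith("APPEND_"):
            out.append(tok)
            out.append(lab[len("APPEND_"):])
    return out

def iterative_refine(tokens, predict, max_rounds=4):
    """Re-run a black-box labeler until the output converges or a round limit."""
    for _ in range(max_rounds):
        new_tokens = apply_edits(tokens, predict(tokens))
        if new_tokens == tokens:  # fixed point reached
            break
        tokens = new_tokens
    return tokens
```

Because every label is predicted independently of the other output positions, one round is a single non-autoregressive pass; the iteration partially recovers the output dependencies the ED model captures sequentially.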

Cited by 109 publications (133 citation statements)
References 28 publications
“…LaserTagger is a general approach that has been shown to perform well on a number of text editing tasks, but it has two limitations: it does not allow for arbitrary reordering of the input tokens, and insertions are restricted to a fixed phrase vocabulary that is derived from the training data. Similarly, EditNTS (Dong et al., 2019) and PIE (Awasthi et al., 2019) are two other text-editing models developed specifically for simplification and grammatical error correction, respectively.…”
Section: Related Work
confidence: 99%
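The statement above mentions that LaserTagger restricts insertions to a fixed phrase vocabulary derived from the training data. A minimal sketch of deriving such a vocabulary from aligned (source, target) pairs is shown below; using `difflib` for the alignment is an assumption for illustration, and `build_phrase_vocab` is a hypothetical helper, not LaserTagger's actual preprocessing code.

```python
# Hedged sketch: collect the most frequent inserted/replaced target phrases
# from parallel training pairs, in the spirit of a fixed insertion vocabulary.
from collections import Counter
import difflib

def added_phrases(source_tokens, target_tokens):
    """Yield target-side phrases that an edit model would need to insert."""
    sm = difflib.SequenceMatcher(a=source_tokens, b=target_tokens, autojunk=False)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op in ("insert", "replace"):
            yield " ".join(target_tokens[j1:j2])

def build_phrase_vocab(pairs, size=500):
    """Return the `size` most common added phrases across all training pairs."""
    counts = Counter()
    for src, tgt in pairs:
        counts.update(added_phrases(src.split(), tgt.split()))
    return [phrase for phrase, _ in counts.most_common(size)]
```

Any insertion required at test time that is not in this vocabulary cannot be produced, which is exactly the limitation the citation statement points out.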
“…Choe improved model performance by dividing the GEC task into a generation mechanism, which produces new words, and a copy mechanism, which copies words from the input sentence, and training them separately [6]. Awasthi proposed parallel iterative edits [1] and applied BERT [10]. Chollampatt applied a CNN model and quality estimation to the GEC task [8].…”
Section: Related Work (NMT-based GEC)
confidence: 99%
“…The difference from GLEU is that it takes the source sentence into account, making it a performance-evaluation metric specialized for correction systems. The majority of current research uses it as the official metric of GEC [1,6,7,8,13,16,17,27].…”
Section: Metrics for GEC
confidence: 99%