Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.414
Text Editing by Command

Cited by 17 publications (25 citation statements) · References 27 publications
“…To simplify this problem, we employ the n-th order Markov assumption, assuming that the probability of the next document is conditioned only on the previous n documents, $p(x_i \mid x_{i-n}^{i-1})$. This probability could be modeled directly, and in fact in the case of n = 1 this becomes analogous to the single-step editing problem tackled by previous work (Yin et al., 2019a; Malmi et al., 2019; Reid and Zhong, 2021; Faltings et al., 2021). To our knowledge, no previous work has modeled natural editing processes with n > 1.…”

Section: Modeling Operations and …
Confidence: 87%
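A sketch of the factorization this quotation implies, in the excerpt's own notation (where $x_{i-n}^{i-1}$ denotes the previous $n$ document versions; the product form is an assumption spelled out here for clarity, not stated verbatim in the source):

```latex
% n-th order Markov factorization of an editing process
% over a sequence of document versions x_1, ..., x_T:
p(x_1, \dots, x_T) \;\approx\; \prod_{i=1}^{T} p\!\left(x_i \,\middle|\, x_{i-n}^{i-1}\right)
% For n = 1 this reduces to p(x_i | x_{i-1}), i.e. the
% single-step editing setting tackled by the prior work cited above.
```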
“…When extracting each edit we keep the edit summary (akin to a commit message) supplied by the editor at the time of editing. We then curate these comments and develop a dataset for use on downstream tasks, for both edit-summary generation (Loyola et al., 2017) and edit-summary-conditioned text editing (Faltings et al., 2021).…”

Section: WikiRevisions
Confidence: 99%
“…Experimental results on four tasks demonstrate the effectiveness of LEMON: the execution-guided pre-training strategy brings significant improvements on all of them, and LEMON achieves state-of-the-art performance on three of them. For future work, we hope to extend our approach to more complex environments and tasks such as image editing (Fu et al., 2020) and text editing (Faltings et al., 2021).…”

Section: Discussion
Confidence: 99%