Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) 2020
DOI: 10.18653/v1/2020.emnlp-main.675

Towards Modeling Revision Requirements in wikiHow Instructions

Abstract: wikiHow is a resource of how-to guides that describe the steps necessary to accomplish a goal. Guides in this resource are regularly edited by a community of users, who try to improve instructions in terms of style, clarity and correctness. In this work, we test whether the need for such edits can be predicted automatically. For this task, we extend an existing resource of textual edits with a complementary set of approx. 4 million sentences that remain unedited over time and report on the outcome of two revis…

Cited by 14 publications (27 citation statements)
References 25 publications (23 reference statements)
“…Afrin and Litman (2018) introduced a Random Forest (RF) classification model for revisions in argumentative essays from ArgRewrite (Zhang et al., 2017), examining whether improvement by non-expert writers can be predicted, i.e., whether a revised sentence is better than the original. Anthonio et al. (2020) worked with edits in instructional texts and applied a supervised learning approach to distinguish older and newer versions of a sentence in wikiHow and Wikipedia. Recent work by Bhat et al. (2020) presents an automatic classification of revision requirements in wikiHow; their BERT model achieves the highest F1-score, 68.42%, for predicting revision requirements, outperforming Naive Bayes and BiLSTM models by 4.39 and 7.67 percentage points, respectively.…”
Section: Related Work
confidence: 99%
“…Anthonio et al. (2020) worked with edits in instructional texts and applied a supervised learning approach to distinguish older and newer versions of a sentence in wikiHow and Wikipedia. Recent work by Bhat et al. (2020) presents an automatic classification of revision requirements in wikiHow; their BERT model achieves the highest F1-score, 68.42%, for predicting revision requirements, outperforming Naive Bayes and BiLSTM models by 4.39 and 7.67 percentage points, respectively. We consider the BERT model from Bhat et al. (2020) as a strong baseline for our experiments.…”
Section: Related Work
confidence: 99%
“…The 2021 shared task on implicit and underspecified language is the first installment of predicting revision requirements in collaboratively edited instructions (Bhat et al., 2020), based on the wikiHowToImprove dataset (Anthonio et al., 2020). The dataset consists of sentences and their revisions, if any.…”
Section: Introduction
confidence: 99%
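The citing works above frame revision-requirement prediction as binary sentence classification: given a wikiHow sentence, predict whether it will later be revised. As a minimal, hypothetical sketch (not the authors' implementation), the Naive Bayes baseline mentioned there could be approximated with a scikit-learn bag-of-words pipeline; the sentences and labels below are toy examples, not drawn from wikiHowToImprove.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data (illustrative only):
# label 1 = sentence was later revised, 0 = sentence remained unedited.
sentences = [
    "Do the thing good.",
    "Press the power button to turn on the device.",
    "Its important to whisk egg good.",
    "Whisk the eggs until the mixture is smooth.",
]
labels = [1, 0, 1, 0]

# Bag-of-words features fed into a multinomial Naive Bayes classifier.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(sentences, labels)

# Predict whether an unseen sentence needs revision (0 or 1).
pred = clf.predict(["Press the button good."])
```

In practice such a baseline would be trained on the full dataset and evaluated by F1-score; the BERT model reported by Bhat et al. (2020) replaces the bag-of-words features with contextual representations, which is where the reported 4.39-point F1 gain over Naive Bayes comes from.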