Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task 2014
DOI: 10.3115/v1/w14-1701

The CoNLL-2014 Shared Task on Grammatical Error Correction

Abstract: The CoNLL-2014 shared task was devoted to grammatical error correction of all error types. In this paper, we give the task definition, present the data sets, and describe the evaluation metric and scorer used in the shared task. We also give an overview of the various approaches adopted by the participating teams, and present the evaluation results. Compared to the CoNLL-2013 shared task, we have introduced the following changes in CoNLL-2014: (1) A participating system is expected to detect and correct grammatical errors […]
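The abstract refers to the evaluation metric and scorer used in the shared task; CoNLL-2014 scored systems with the MaxMatch (M2) scorer, reporting F0.5, which weights precision twice as heavily as recall. A minimal sketch of the F-beta combination is shown below for illustration only; the official scorer also computes an optimal alignment between system edits and gold-standard edits before counting matches, which this sketch does not attempt.

    def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
        """Weighted harmonic mean of precision and recall.
        beta < 1 favors precision; CoNLL-2014 used beta = 0.5."""
        if precision == 0.0 and recall == 0.0:
            return 0.0
        b2 = beta * beta
        return (1 + b2) * precision * recall / (b2 * precision + recall)

    # Example: a system with high precision but modest recall.
    print(f_beta(0.50, 0.20))  # ~0.3846 -- F0.5 rewards precision over recall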

Cited by 395 publications (222 citation statements). References: 14 publications.
“…The two most recent CoNLL shared tasks were devoted to grammatical error correction for non-native writers (Ng et al., 2013; Ng et al., 2014).…”
Section: Introduction
confidence: 99%
“…It is worth noting that by comparison, punctuation errors only constituted 4% of the English data in the CoNLL-2013 Shared Task on English Grammatical Error Correction (Ng et al., 2013) and were not evaluated or handled by any participant. In HASP, we focus on 6 punctuation marks: comma, colon, semi-colon, exclamation mark, question mark and period.…”
Section: Punctuation Errors
confidence: 99%
“…We observe the relatively low recall obtained by the models. Error correction models tend to have low recall (see, for example, the recent shared tasks on ESL error correction (Dale and Kilgarriff, 2011; Dale et al., 2012; Ng et al., 2013)). The key reason for the low recall is the error sparsity: over 95% of verbs are correct, as shown in Table 9.…”
Section: Identification vs. Correction
confidence: 99%
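The error-sparsity point in the excerpt above can be made concrete with a back-of-the-envelope calculation; the numbers below are assumed purely for illustration and are not taken from the cited work.

    # Assumed figures: 5% of verbs are erroneous, the system flags 2% of all
    # verbs, and 60% of its flags are genuine errors.
    total_verbs = 10_000
    erroneous = int(0.05 * total_verbs)        # 500 true errors
    flagged = int(0.02 * total_verbs)          # 200 proposed corrections
    true_positives = int(0.60 * flagged)       # 120 correct flags

    precision = true_positives / flagged       # 0.60
    recall = true_positives / erroneous        # 0.24
    print(f"precision = {precision:.2f}, recall = {recall:.2f}")
    # Even a reasonably precise system recovers only a small share of the
    # rare errors, which is why recall stays low under heavy sparsity.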
“…We also evaluate several methods for selecting verb candidates and show the significance of this step for improving verb error correction performance, while earlier studies do not discuss this aspect of the problem. In the CoNLL shared task (Ng et al., 2013) that included verb errors in agreement and form, the participating teams did not provide details on how specific challenges were handled, but the University of Illinois system obtained the highest score on the verb sub-task, even though all teams used similar resources (Ng et al., 2013).…”
Section: Related Work
confidence: 99%