2018
DOI: 10.1007/978-3-319-99495-6_29
Youdao’s Winning Solution to the NLPCC-2018 Task 2 Challenge: A Neural Machine Translation Approach to Chinese Grammatical Error Correction

Cited by 32 publications (28 citation statements)
References 10 publications
“…Systems for Chinese GEC also rely on sequence-to-sequence models. The NLPCC 2018 shared task winner uses five different models in tandem, and chooses the best output with a 5-gram language model (Fu et al., 2018). Ren et al. (2018) use an ensemble of convolutional sequence-to-sequence models with pre-trained word embeddings.…”
Section: NMT-based Methods
confidence: 99%
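The reranking step mentioned in the statement above — choosing the best of several candidate corrections with an n-gram language model — can be sketched in a few lines. This is an illustrative toy model with add-one smoothing, not Youdao's actual system; the function names, smoothing choice, and vocabulary size are assumptions.

```python
from collections import defaultdict
import math

def train_ngram(corpus, n=5):
    """Count n-grams and their (n-1)-gram contexts from tokenized sentences."""
    counts, context = defaultdict(int), defaultdict(int)
    for sent in corpus:
        tokens = ["<s>"] * (n - 1) + sent + ["</s>"]
        for i in range(len(tokens) - n + 1):
            gram = tuple(tokens[i:i + n])
            counts[gram] += 1
            context[gram[:-1]] += 1
    return counts, context

def log_prob(sent, counts, context, n=5, vocab=1000):
    """Add-one smoothed log-probability of a sentence under the n-gram model."""
    tokens = ["<s>"] * (n - 1) + sent + ["</s>"]
    lp = 0.0
    for i in range(len(tokens) - n + 1):
        gram = tuple(tokens[i:i + n])
        lp += math.log((counts[gram] + 1) / (context[gram[:-1]] + vocab))
    return lp

def rerank(candidates, counts, context, n=5):
    """Pick the candidate correction the language model scores highest."""
    return max(candidates, key=lambda c: log_prob(c, counts, context, n))
```

In the full system the candidates would come from the five translation models and the LM would be a large 5-gram model trained on clean text; here any tokenized corpus and order `n` will do.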
“…Grammar error detection and correction has been done for various languages [14], including Chinese [15], Greek [16], and Swedish [17]. Different approaches have been exploited for automatically detecting and correcting text.…”
Section: Grammar Detection and Correction Approaches
confidence: 99%
“…GEC models include rule-based, NMT, and SMT models. Fu et al. [31] treated the CGEC task as a translation problem, translating the incorrect sentence into the correct one. Fu et al. [32] built the detection model with a bidirectional Long Short-Term Memory network topped by a conditional random field layer (BiLSTM-CRF), and the correction model from ePMI values and a seq2seq model.…”
Section: Related Work
confidence: 99%
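The BiLSTM-CRF detector mentioned in the statement above decodes the best error-tag sequence with the standard Viterbi algorithm over emission and transition scores. A minimal sketch of that decoding step follows; the BiLSTM that would produce the emission scores is omitted, and the tag set, score values, and function name are assumptions for illustration.

```python
def viterbi(emissions, transitions):
    """Best tag sequence for a linear-chain CRF, given per-position
    emission scores (list of {tag: score}) and tag-to-tag transition
    scores ({(prev_tag, cur_tag): score}), all in log-space."""
    tags = list(emissions[0])
    # best score reaching each tag at the current position
    score = {t: emissions[0][t] for t in tags}
    back = []  # backpointers, one dict per position after the first
    for em in emissions[1:]:
        new_score, ptr = {}, {}
        for cur in tags:
            prev = max(tags, key=lambda p: score[p] + transitions[(p, cur)])
            new_score[cur] = score[prev] + transitions[(prev, cur)] + em[cur]
            ptr[cur] = prev
        score, back = new_score, back + [ptr]
    # backtrack from the best final tag
    best = max(tags, key=score.get)
    path = [best]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

In the detection setting the tags might mark each token as correct or erroneous; the correction model would then act only on the flagged spans.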