Proceedings of the 6th Workshop on Asian Translation 2019
DOI: 10.18653/v1/d19-5201

Overview of the 6th Workshop on Asian Translation

Abstract: This paper presents the results of the shared tasks from the 6th Workshop on Asian Translation (WAT2019), including Ja↔En and Ja↔Zh scientific paper translation subtasks; Ja↔En, Ja↔Zh, and Ja↔Ko patent translation subtasks; Hi↔En, My↔En, Km↔En, and Ta↔En mixed-domain subtasks; a Ru↔Ja news commentary translation task; and an En→Hi multi-modal translation task. For WAT2019, 25 teams participated in the shared tasks. We also received 10 research paper submissions, of which 7 were accepted. About 400 translation results …

Cited by 67 publications (77 citation statements) | References 13 publications
“…The averaging technique and attention-based unknown word replacement (Jean et al., 2015; Hashimoto et al., 2016) [prose interrupted by a spilled results table of previous best scores, BLEU / RIBES: Cromieres et al. (2016) 38.20 / 82.39; Neubig et al. (2015) 38.17 / 81.38; Eriguchi et al. (2016a) 36.95 / 82.45; Neubig and Duh (2014) 36.58 / 79.65; Zhu (2015) 36.21 / 80.91; Lee et al. (2015) 35.75 / 81.15] Again, we see that the translation scores of our model can be further improved by pre-training the model. Table 5 shows our results on the test data; the previous best results summarized in Nakazawa et al. (2016a) and on the WAT website are also shown. Our proposed models, LGP-NMT and LGP-NMT+, outperform not only SEQ but also all of the previous best results.…”
Section: Medium Training Dataset
confidence: 98%
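The snippet above mentions "the averaging technique", i.e. averaging the parameters of several training checkpoints before decoding, a common trick in NMT toolkits. Below is a minimal, dependency-free sketch of that idea; the function name and the dict-of-lists parameter format are illustrative assumptions, not the cited authors' actual implementation (real systems average tensors):

```python
# Checkpoint averaging sketch: given several snapshots of the same model's
# parameters, average them element-wise. Plain Python lists stand in for
# the parameter tensors a real NMT toolkit would use.
def average_checkpoints(checkpoints):
    """checkpoints: list of dicts mapping parameter name -> list of floats."""
    n = len(checkpoints)
    return {
        name: [sum(ckpt[name][i] for ckpt in checkpoints) / n
               for i in range(len(checkpoints[0][name]))]
        for name in checkpoints[0]
    }

ckpts = [{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}]
avg = average_checkpoints(ckpts)
print(avg)  # {'w': [2.0, 3.0]}
```

Averaging the last few checkpoints tends to smooth out optimizer noise and often gives a small but consistent BLEU gain over any single checkpoint.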
“…A common approach for dealing with the open vocabulary issue is to break up rare words into subword units (Schuster and Nakajima, 2012; Chitnis and DeNero, 2015; Sennrich et al., 2016). [Table 1 caption: multiple subword sequences encoding the same sentence "Hello World"] Byte-Pair-Encoding (BPE) (Sennrich et al., 2016) is a de facto standard subword segmentation algorithm applied to many NMT systems, achieving top translation quality in several shared tasks (Denkowski and Neubig, 2017; Nakazawa et al., 2017). BPE segmentation gives a good balance between vocabulary size and decoding efficiency, and also sidesteps the need for a special treatment of unknown words.…”
Section: Introduction
confidence: 99%
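BPE, as described by Sennrich et al. (2016), learns subword units by repeatedly merging the most frequent adjacent symbol pair in a character-split vocabulary. A minimal sketch of the learning loop (the toy vocabulary and number of merges are illustrative, not from the cited papers):

```python
import collections
import re

def get_stats(vocab):
    """Count adjacent-symbol pair frequencies over a {word: count} vocab,
    where each word is a space-separated symbol sequence."""
    pairs = collections.Counter()
    for word, freq in vocab.items():
        syms = word.split()
        for pair in zip(syms, syms[1:]):
            pairs[pair] += freq
    return pairs

def merge_vocab(pair, vocab):
    """Replace every standalone occurrence of the pair with its merged symbol."""
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    return {pattern.sub("".join(pair), w): f for w, f in vocab.items()}

# Words are character sequences with an end-of-word marker </w>.
vocab = {"l o w </w>": 5, "l o w e r </w>": 2,
         "n e w e s t </w>": 6, "w i d e s t </w>": 3}
merges = []
for _ in range(3):  # learn three merge operations
    stats = get_stats(vocab)
    best = max(stats, key=stats.get)  # most frequent adjacent pair
    vocab = merge_vocab(best, vocab)
    merges.append(best)
print(merges)  # e.g. ('e', 's') is merged first on this toy vocabulary
```

At segmentation time the learned merges are replayed in order on each new word, so the effective vocabulary size is controlled directly by the number of merge operations.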
“…We evaluate our system on the ASPEC-JC Japanese-Chinese corpus, which was shared for the WAT2016 Japanese-to-Chinese translation subtask. This corpus was constructed by manually translating Japanese scientific papers into Chinese [11], [12]. The Japanese scientific papers are either the property of the Japan Science and Technology Agency (JST) or stored in Japan's Largest Electronic Journal Platform for Academic Societies (J-STAGE).…”
Section: Evaluation and Results
confidence: 99%
“…In this paper, we follow the system published for WAT2016 [11] as the baseline, and use [2] as the base NMT system, which follows an encoder-decoder architecture with word-level attention. In our case, we take advantage of Chinese character information in character-level NMT.…”
Section: Introduction
confidence: 99%
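The attention step referenced in the snippet above computes, at each decoding step, a distribution over the encoder states and a weighted "context" vector from them. A minimal dot-product-attention sketch (the function name and toy vectors are illustrative assumptions; the cited system [2] uses trained model states, not hand-written lists):

```python
import math

def attention(decoder_state, encoder_states):
    """Dot-product attention: score each encoder state against the current
    decoder state, softmax the scores, and return the attention weights
    together with the weighted-sum context vector."""
    scores = [sum(d * e for d, e in zip(decoder_state, enc))
              for enc in encoder_states]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    z = sum(exps)
    weights = [x / z for x in exps]
    # Context vector: attention-weighted sum of the encoder states.
    context = [sum(w * enc[i] for w, enc in zip(weights, encoder_states))
               for i in range(len(encoder_states[0]))]
    return weights, context

w, c = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

In word-level NMT each encoder state corresponds to a source word; in the character-level variant discussed in the snippet, the same mechanism attends over character (or character-informed) states instead.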