2018
DOI: 10.1109/access.2018.2851281
Effect of Word Sense Disambiguation on Neural Machine Translation: A Case Study in Korean

Cited by 25 publications (17 citation statements)
References 22 publications
“…Nguyen et al. [21] built a lexical network to resolve the disambiguation of Korean words that share the same pronunciation but have different meanings. This will be helpful for improving deep learning-based models by inferring the exact meaning of a word from the lexical network.…”
Section: Text Mining for Korean Language
confidence: 99%
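The idea in this excerpt, resolving a Korean surface form that maps to several unrelated senses by consulting a lexical network, can be illustrated with a toy sketch. The miniature network, the example word 배, and the disambiguate helper below are hypothetical illustrations and not the method of Nguyen et al. [21]; a real system would draw its sense inventory from the full lexical network and attach the chosen sense to the token before translation.

```python
# Toy sketch (hypothetical): pick the sense of a Korean homograph whose
# related words overlap most with the surrounding context.

# Hypothetical lexical network: surface form -> {sense: related context words}
LEXICAL_NETWORK = {
    "배": {                      # "bae" has several unrelated senses
        "pear":  {"과일", "사과", "달다"},
        "ship":  {"항구", "바다", "타다"},
        "belly": {"아프다", "고프다", "몸"},
    },
}

def disambiguate(word, context_tokens):
    """Return the sense whose related words overlap most with the context."""
    senses = LEXICAL_NETWORK.get(word)
    if not senses:
        return None
    overlap = {sense: len(related & set(context_tokens))
               for sense, related in senses.items()}
    return max(overlap, key=overlap.get)

print(disambiguate("배", ["바다", "에서", "타다"]))  # -> "ship"
```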
“…Even advanced methods, such as statistical-based [38], deep learning-based with recurrent neural networks [39], and embedded word space [40] methods, still encounter the missing data problem because of limited training corpora. Knowledge-based approaches can overcome this problem, but they require an accurate and large lexical network [41], [42]. Korean WordNet KorLex [21], which was constructed by translating English WordNet to Korean, is either used as a knowledge base [43] or combined with the Korean monolingual dictionary [44].…”
Section: Sub-word Conditional Probability
confidence: 99%
“…As mentioned before, the global image features (v_im) are defined as the 4096-dimensional vector from the fc7 layer of VGG19. Then, the cell reads the global visual features corresponding to the current caption and incorporates them with the previously encoded source caption per (9).…”
Section: Image Encoding
confidence: 99%
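For orientation only, here is a minimal sketch of the image-encoding step the excerpt describes: the 4096-dimensional fc7 activation of VGG19 serves as the global image feature v_im and is then combined with the previously encoded caption. The fusion shown (a learned projection plus addition) is an assumption standing in for the cited paper's equation (9), which is not reproduced in the excerpt; hidden_size and the variable names are likewise illustrative.

```python
# Minimal sketch (assumptions flagged above): fc7 features from VGG19 fused
# with an encoded caption state.

import torch
import torch.nn as nn
from torchvision import models

# VGG19 truncated right after fc7 (second 4096-unit FC layer + ReLU).
# weights=None avoids a download here; load pretrained weights in practice
# (torchvision >= 0.13; older versions use pretrained=True).
vgg = models.vgg19(weights=None)
vgg.classifier = nn.Sequential(*list(vgg.classifier.children())[:5])
vgg.eval()

hidden_size = 512                           # hypothetical caption-encoder size
img_proj = nn.Linear(4096, hidden_size)     # project v_im into the caption space

with torch.no_grad():
    image = torch.randn(1, 3, 224, 224)     # placeholder for a preprocessed image
    v_im = vgg(image)                       # global image feature, shape (1, 4096)

caption_state = torch.randn(1, hidden_size)          # previously encoded caption
fused = torch.tanh(caption_state + img_proj(v_im))   # stand-in for eq. (9)
print(v_im.shape, fused.shape)              # torch.Size([1, 4096]) torch.Size([1, 512])
```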