2017
DOI: 10.1016/j.eswa.2017.02.021
Meaning preservation in Example-based Machine Translation with structural semantics

Cited by 15 publications (7 citation statements)
References 14 publications
“…Because the encoder of the NMT model performs poorly, Tan, Z. et al. proposed a lattice-to-sequence attentional NMT model that generalizes the CNN encoder to a lattice topology; the proposed encoder reduces the negative impact of errors during model operation and outputs more expressive and flexible translated sentences [17]. Chua, C. C. et al. introduced a structural-semantics approach to the EBMT model, optimizing EBMT at the meaning level; the structural semantics ensures the consistency and completeness of sentences during input, thereby improving translation accuracy [18]. Zhao, Y. et al., in order to strengthen the role of semantic information in visual feature capture, proposed an MNMT approach with semantic image regions, used CNNs to improve model performance, and constructed a training dataset of 30k sentences; in simulation experiments the proposed algorithm extracted semantic information matching the translated text from the visual features and improved translation performance [19].…”
Section: Literature Review
confidence: 99%
“…Neural machine translation is the mainstream approach [7], but it can learn only from bilingual parallel corpora and ignores linguistic knowledge, which limits translation quality [8]. Natural language contains considerable fuzziness, near-synonymy, and polysemy [9]. Semantic recognition is therefore of great value to NMT.…”
Section: Neural Machine Translation Combined With Semantic Roles 2.1 Semantic Role Labeling
confidence: 99%
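The excerpt above argues that semantic role labels can supply linguistic knowledge that a plain parallel corpus lacks. One common way to feed such labels to an NMT system is simply to interleave each source token with its role tag before encoding. The sketch below is a hypothetical illustration of that preprocessing step, not the method of any cited paper; the role inventory (ARG0, V, ARG1, O) follows PropBank-style conventions, and the function name is an assumption.

```python
def augment_with_roles(tokens, roles):
    """Interleave each source token with its semantic-role tag so that a
    sequence encoder can condition on both (hypothetical preprocessing).

    tokens: list of source-language words
    roles:  list of PropBank-style role tags, one per token
    """
    if len(tokens) != len(roles):
        raise ValueError("tokens and roles must align one-to-one")
    # Emit token followed by its bracketed role tag, e.g. "cat <ARG0>".
    return [item for tok, role in zip(tokens, roles)
            for item in (tok, f"<{role}>")]


# Example: the labelled sentence "the cat ate the fish"
tokens = ["the", "cat", "ate", "the", "fish"]
roles = ["O", "ARG0", "V", "O", "ARG1"]
print(augment_with_roles(tokens, roles))
```

The augmented sequence is then tokenized and encoded like ordinary source text, so no architectural change to the NMT model is required; richer alternatives embed the role tags as a separate input factor.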
“…Wulandari and Harida (2021) state that when English sentences are written according to their grammatical structure, the meaning is accurate, easily understood, and acceptable. Tse (2014) likewise showed that the English writing skill of male secondary-school students needs further reinforcement and development (Chua, Lim, Soon, Tang, & Ranaivo-Malançon, 2017).…”
Section: Introduction
confidence: 95%