Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL '05), 2005
DOI: 10.3115/1219840.1219907

Machine translation using probabilistic synchronous dependency insertion grammars

Abstract: Syntax-based statistical machine translation (MT) aims at applying statistical models to structured data. In this paper, we present a syntax-based statistical machine translation system based on a probabilistic synchronous dependency insertion grammar. Synchronous dependency insertion grammars are a version of synchronous grammars defined on dependency trees. We first introduce our approach to inducing such a grammar from parallel corpora. Second, we describe the graphical model for the machine translation task…

Cited by 107 publications (99 citation statements, 2009–2018); references 20 publications.
“…Besides the work of Vaswani et al. (2011) discussed in Section 1, there are several other works using a rule bigram or trigram model in machine translation: Ding and Palmer (2005) use an n-gram rule Markov model in the dependency treelet model, and Liu and Gildea (2008) apply the same method in a tree-to-string model. Our work is different from theirs in that we lift the Markov assumption and use a recurrent neural network to capture much longer contextual information to help probability prediction.…”
Section: Related Work
confidence: 99%
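To make the quoted contrast concrete, here is a minimal sketch of a rule bigram Markov model over translation-rule sequences. Everything in it (the class name, the rule-ID representation, the add-alpha smoothing) is an illustrative assumption, not code from Ding and Palmer (2005) or Liu and Gildea (2008).

```python
import math
from collections import defaultdict

class RuleBigramModel:
    """Sketch of a rule bigram Markov model: a derivation is scored as
    the product of P(rule | previous rule) over the sequence of rule
    applications. Names and smoothing are illustrative assumptions."""

    def __init__(self, vocab_size=10000, alpha=0.1):
        self.bigrams = defaultdict(lambda: defaultdict(int))
        self.context_totals = defaultdict(int)
        self.vocab_size = vocab_size   # assumed number of distinct rules
        self.alpha = alpha             # add-alpha smoothing constant

    def train(self, derivations):
        # Each derivation is a sequence of rule IDs in application order,
        # e.g. from a top-down traversal of the translation tree.
        for rules in derivations:
            prev = "<s>"
            for rule in rules:
                self.bigrams[prev][rule] += 1
                self.context_totals[prev] += 1
                prev = rule

    def log_prob(self, rules):
        # Sum log P(rule | prev) with add-alpha smoothing.
        prev, total = "<s>", 0.0
        for rule in rules:
            num = self.bigrams[prev][rule] + self.alpha
            den = self.context_totals[prev] + self.alpha * self.vocab_size
            total += math.log(num / den)
            prev = rule
        return total
```

An RNN-based model of the kind the quoted work describes replaces the single-rule context `prev` with a hidden state summarizing the entire rule history; that is precisely the lifted Markov assumption.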
“…Although the sentence structure of the source language has been taken into consideration, most SMT systems make use of syntax information in the decoding stage (Lin, 2004; Ding and Palmer, 2005; Quirk et al., 2005; Liu et al., 2006; Huang et al., 2006). Wang et al. (2007) were the first to incorporate a Chinese syntactic reordering method into the preprocessing stage of a statistical MT system, and they achieve a significant improvement in reordering accuracy.…”
Section: Related Work
confidence: 99%
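The distinction the quote draws, syntax consulted during decoding versus reordering applied as preprocessing, can be illustrated with a toy source-side tree transform. The tree format and the single reordering rule below are invented for the example; Wang et al. (2007) define their actual rules over Chinese parse structures.

```python
# Toy illustration of preprocessing-stage syntactic reordering: the source
# tree is rewritten by rule before any (unmodified) decoder sees it.

def reorder(node):
    """Recursively reorder a tree of (label, children) tuples, where a
    leaf's children slot holds the word string."""
    label, children = node
    if isinstance(children, str):          # leaf: nothing to reorder
        return node
    children = [reorder(c) for c in children]
    # Invented rule: move a trailing PP inside a VP to the front of the
    # VP, mimicking a pre-reordering transform toward target word order.
    if label == "VP" and children and children[-1][0] == "PP":
        children = [children[-1]] + children[:-1]
    return (label, children)

def yield_words(node):
    label, children = node
    if isinstance(children, str):
        return [children]
    return [w for c in children for w in yield_words(c)]

tree = ("VP", [("V", "ate"), ("NP", [("N", "dinner")]),
               ("PP", [("P", "at"), ("N", "home")])])
print(" ".join(yield_words(reorder(tree))))   # -> "at home ate dinner"
```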
“…At any visited node x, if the inward edge has the complement function (c), then a Type-I tree is built in which the visited node is the root and the parts-of-speech of its children that have the complement function are the Type-I tree's children (lines 3–6, 25). If an adjunct relation (a) on an outward edge is found, there are two possible cases: if the child y of the visited node x does not itself have children, a Type-II tree is built immediately, using the part-of-speech of the visited node x as the root and its child node y as the child (lines 8–11). Otherwise, if the child y of the visited node has children, the algorithm proceeds as when constructing a Type-I tree, in order to produce a subtree (lines 14–19).…”
Section: Extracting Elementary Trees From the Treebank
confidence: 99%
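The quoted procedure is compact enough to sketch. Below is a hedged Python rendering of the two cases; the Node class, the flat (kind, root, children) output, and the guard against complement-less Type-I trees are my assumptions rather than the cited paper's implementation (whose listing the quote's line numbers refer to).

```python
# Hedged sketch of the quoted elementary-tree extraction. The node
# representation and output format are assumptions for illustration.

class Node:
    def __init__(self, pos):
        self.pos = pos        # part-of-speech tag of this node
        self.edges = []       # outward edges: (relation, child) pairs,
                              # relation "c" = complement, "a" = adjunct

    def add(self, rel, child):
        self.edges.append((rel, child))
        return child

def extract(node, inward_rel=None, trees=None):
    """Visit each node, emitting (kind, root_pos, child_pos_list) tuples
    that approximate the Type-I / Type-II elementary trees."""
    if trees is None:
        trees = []
    # Case 1: inward edge is a complement (c) -> Type-I tree rooted at
    # the visited node, children are the POS of its complement children.
    if inward_rel == "c":
        comps = [c.pos for rel, c in node.edges if rel == "c"]
        if comps:  # skip the degenerate no-complement case (an assumption)
            trees.append(("Type-I", node.pos, comps))
    # Case 2: an adjunct relation (a) on an outward edge.
    for rel, child in node.edges:
        if rel == "a":
            if not child.edges:
                # Childless adjunct child -> Type-II tree: visited node's
                # POS as root, the child's POS as its only child.
                trees.append(("Type-II", node.pos, [child.pos]))
            else:
                # Adjunct child with children -> Type-I-style subtree
                # rooted at the adjunct child.
                comps = [c.pos for r, c in child.edges if r == "c"]
                trees.append(("Type-I", child.pos, comps))
        extract(child, rel, trees)
    return trees

# Tiny demo on an invented three-node fragment.
root = Node("VB")
root.add("c", Node("NN"))
root.add("a", Node("RB"))
print(extract(root, "c"))
# -> [('Type-I', 'VB', ['NN']), ('Type-II', 'VB', ['RB'])]
```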
“…Projective and non-projective dependency structures have recently become quite popular in various NLP areas, such as MT [8], Information Extraction [9], Text Summarization [10], and Ontology [11].…”
Section: Introduction
confidence: 99%