2006
DOI: 10.1162/coli.2006.32.4.527

N-gram-based Machine Translation

Abstract: This article describes in detail an n-gram approach to statistical machine translation. This approach consists of a log-linear combination of a translation model based on n-grams of bilingual units, which are referred to as tuples, along with four specific feature functions. Translation performance, which happens to be in the state of the art, is demonstrated with Spanish-to-English and English-to-Spanish translations of the European Parliament Plenary Sessions (EPPS).
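The log-linear formulation summarized in the abstract can be sketched explicitly. The equations below are a minimal reconstruction under common SMT notation (feature functions h_m with weights lambda_m, and a segmentation of the sentence pair into tuples (s,t)_1, ..., (s,t)_K); they paraphrase the approach rather than reproduce the paper's exact formulas.

```latex
% Log-linear decision rule: choose the target sentence t that maximizes
% a weighted sum of feature functions (tuple n-gram translation model,
% target language model, word bonus, lexicon models, ...).
\hat{t} = \operatorname*{arg\,max}_{t} \; \sum_{m=1}^{M} \lambda_m \, h_m(s, t)

% Bilingual n-gram translation model over the tuple sequence of (s, t):
p(s, t) = \prod_{k=1}^{K} p\bigl( (s,t)_k \mid (s,t)_{k-n+1}, \ldots, (s,t)_{k-1} \bigr)
```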

Cited by 140 publications (82 citation statements). References 16 publications.

Citation statements, ordered by relevance:
“…Other effective syntactic approaches in recent years have included a bilingual language model (Mariño, Banchs, Crego, de Gispert, Lambert, Fonollosa, and Costa-jussà 2006; Niehues, Herrmann, Vogel, and Waibel 2011) enhanced with some dependency information (Garmash and Monz 2014), specifically the POS tags of parent/grandparent and closest left/right siblings, and modeling a generative dependency structure on top of a classic n-gram language model (Ding and Yamamoto 2014).…”
Section: Related Work (mentioning)
Confidence: 99%
“…N implements the bilingual n-gram approach to SMT as described in (Mariño et al, 2006;Crego and Mariño, 2007), which can be seen as an alternative to the standard phrase-based approach (Zens et al, 2002). N main features include the use of multiple n-gram language models estimated over bilingual units, source words and/or target words or any factor decomposition, lexicalized reordering, several tuple (unigram) models, etc.. As for nearly all current statistical approaches to machine translation, these models are embedded in a linear model combination.…”
Section: Introductionmentioning
confidence: 99%
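To make the quoted description more concrete, below is a minimal sketch of the central idea: treating each sentence pair as a sequence of bilingual units (tuples) and scoring it with an ordinary n-gram model over those units. The class name `TupleNgramModel`, the add-one smoothing, and the toy data are illustrative assumptions, not part of the cited toolkit.

```python
import math
from collections import defaultdict

class TupleNgramModel:
    """Toy n-gram model over bilingual units ("tuples"): each unit is a
    (source phrase, target phrase) pair, and the model conditions each
    unit on the n-1 preceding units, exactly as a word n-gram LM does.
    Illustrative sketch only; real systems estimate smoothed models
    from large word-aligned corpora."""

    def __init__(self, n=2):
        self.n = n
        self.ngram_counts = defaultdict(int)
        self.context_counts = defaultdict(int)

    def train(self, tuple_sentences):
        # tuple_sentences: list of sentences, each a list of (source, target) units
        for sent in tuple_sentences:
            padded = [("<s>", "<s>")] * (self.n - 1) + sent
            for i in range(self.n - 1, len(padded)):
                context = tuple(padded[i - self.n + 1:i])
                self.ngram_counts[(context, padded[i])] += 1
                self.context_counts[context] += 1

    def logprob(self, sent):
        # Sum of log P(tuple_k | previous n-1 tuples), with add-one smoothing.
        vocab = len({unit for (_, unit) in self.ngram_counts}) + 1
        padded = [("<s>", "<s>")] * (self.n - 1) + sent
        total = 0.0
        for i in range(self.n - 1, len(padded)):
            context = tuple(padded[i - self.n + 1:i])
            numerator = self.ngram_counts[(context, padded[i])] + 1
            denominator = self.context_counts[context] + vocab
            total += math.log(numerator / denominator)
        return total

# Usage: one training sentence pair segmented into tuples, then rescored.
corpus = [[("traduccion", "translation"), ("perfecta", "perfect")]]
model = TupleNgramModel(n=2)
model.train(corpus)
print(model.logprob([("traduccion", "translation"), ("perfecta", "perfect")]))
```

In a full system this score would be just one feature in the linear model combination the excerpt mentions, alongside target-language-model, lexicon, and reordering features.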
“…GREAT is based on a bilingual language modelling approach for SMT, which is so far implemented for n-gram models based on the framework of stochastic finite-state transducers (SFSTs). The software offers room for other language models that future developers may want to incorporate, whether they are finite-state representable models (Mariño et al., 2006; Pérez et al., …”
Section: Introduction (mentioning)
Confidence: 99%
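For the finite-state framing mentioned in the excerpt, the sketch below encodes a toy bilingual bigram model as a weighted automaton: states are tuple histories, arcs emit the next tuple, and arc weights are negative log probabilities, so the cost of the single accepting path is the negative log probability of the tuple sequence. All states, arcs, and probabilities here are hypothetical.

```python
import math

# Toy weighted automaton for a bilingual bigram model: the key is the
# current state (the previous tuple, or "<s>" at the start), the value
# maps the next label to its negative log probability.
ARCS = {
    "<s>": {("traduccion", "translation"): -math.log(0.6),
            ("perfecta", "perfect"): -math.log(0.4)},
    ("traduccion", "translation"): {("perfecta", "perfect"): -math.log(0.9)},
    ("perfecta", "perfect"): {"</s>": -math.log(0.8)},
}

def path_cost(tuples):
    """Total weight of the single path accepting this tuple sequence;
    returns infinity if some transition is missing from the automaton."""
    state, cost = "<s>", 0.0
    for label in list(tuples) + ["</s>"]:
        weight = ARCS.get(state, {}).get(label)
        if weight is None:
            return math.inf
        cost += weight
        state = label
    return cost

print(path_cost([("traduccion", "translation"), ("perfecta", "perfect")]))
```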