Proceedings of the Second Workshop on Discourse in Machine Translation 2015
DOI: 10.18653/v1/w15-2513
Pronoun Translation and Prediction with or without Coreference Links

Abstract: The Idiap NLP Group has participated in both DiscoMT 2015 sub-tasks: pronoun-focused translation and pronoun prediction. The system for the first sub-task combines two knowledge sources: grammatical constraints from the hypothesized coreference links, and candidate translations from an SMT decoder. The system for the second sub-task avoids hypothesizing a coreference link, and uses instead a large set of source-side and target-side features from the noun phrases surrounding the pronoun to train a pronoun predic…
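The abstract only outlines the first system at a high level. As an illustration of the general idea (not the authors' actual implementation), the minimal Python sketch below shows how gender/number constraints derived from a hypothesized coreference antecedent could be used to re-rank pronoun candidates proposed by an SMT decoder. The agreement table, function names, and scores are hypothetical.

```python
# Hypothetical sketch: re-rank SMT pronoun candidates using gender/number
# constraints derived from a hypothesized coreference antecedent.
# All names, values, and data structures are illustrative, not from the paper.

ANTECEDENT_AGREEMENT = {
    # (gender, number) of the antecedent -> admissible French subject pronouns
    ("masc", "sg"): {"il"},
    ("fem", "sg"):  {"elle"},
    ("masc", "pl"): {"ils"},
    ("fem", "pl"):  {"elles"},
}

def rerank_pronoun(candidates, antecedent):
    """candidates: list of (pronoun, decoder_score) pairs from the SMT n-best list.
    antecedent: (gender, number) tuple from the coreference link, or None.
    Returns the candidates with agreement-violating pronouns demoted."""
    if antecedent is None:
        # No coreference link hypothesized: keep the decoder's own ranking.
        return sorted(candidates, key=lambda c: c[1], reverse=True)
    admissible = ANTECEDENT_AGREEMENT.get(antecedent, set())

    def score(cand):
        pronoun, decoder_score = cand
        # Penalize candidates that disagree with the antecedent.
        penalty = 0.0 if (not admissible or pronoun in admissible) else -10.0
        return decoder_score + penalty

    return sorted(candidates, key=score, reverse=True)

# Example: the decoder prefers "il", but the antecedent is feminine singular,
# so "elle" is promoted to the top of the list.
print(rerank_pronoun([("il", -1.2), ("elle", -1.5), ("ce", -2.0)], ("fem", "sg")))
```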

Cited by 10 publications (9 citation statements)
References 13 publications
“…The IDIAP (Luong et al., 2015) and the AUTO-POSTEDIT (Guillou, 2015) submissions were phrase-based, built using the same training and tuning resources and methods as the official baseline. Both adopted a two-pass approach involving an automatic post-editing step to correct the pronoun translations output by the baseline system, and both of them relied on the Stanford anaphora resolution software (Lee et al., 2011).…”
Section: Submitted Systems
Mentioning confidence: 99%
“…• PE: our post-editing system for the translations of it and they generated by a baseline SMT system (Luong et al., 2015), which was the highest-scoring system at the DiscoMT 2015 shared task on pronoun-focused translation. It was trained on the DiscoMT 2015 data and tuned on the IWSLT 2010 development data.…”
Section: Results Using Automatic Metrics
Mentioning confidence: 99%
“…The improvement of pronoun translation was only marginal with respect to a baseline SMT system in the 2015 shared task, while the 2016 shared task was only aiming at pronoun prediction given source texts and lemmatized reference translations (Guillou et al., 2016). Some of the best systems developed for these tasks avoided, in fact, the direct use of anaphora resolution (with the exception of Luong et al. (2015)). For example, Callin et al. (2015) designed a classifier based on a feed-forward neural network, which considered as features the preceding nouns and determiners along with their part-of-speech tags.…”
Section: Coreference-aware Machine Translation
Mentioning confidence: 99%
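Several of the citing works above describe pronoun prediction without explicit anaphora resolution, e.g. a feed-forward classifier over features of the preceding nouns and determiners and their part-of-speech tags (Callin et al., 2015). As a rough, hypothetical illustration of that kind of feature-based predictor (not the cited authors' code), the scikit-learn sketch below uses invented feature names and toy examples.

```python
# Hypothetical sketch of a feature-based pronoun predictor in the spirit of
# the classifiers described above: a small feed-forward network over features
# of the preceding noun/determiner and their POS tags. The feature names and
# toy data are illustrative only.
from sklearn.feature_extraction import DictVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def extract_features(context):
    """context: dict describing the preceding noun/determiner and their POS tags."""
    return {
        "prev_noun": context.get("prev_noun", "<none>"),
        "prev_noun_pos": context.get("prev_noun_pos", "<none>"),
        "prev_det": context.get("prev_det", "<none>"),
        "prev_det_pos": context.get("prev_det_pos", "<none>"),
    }

# Toy training examples: contexts paired with the target-language pronoun.
train_contexts = [
    {"prev_noun": "voiture", "prev_noun_pos": "NOM", "prev_det": "la", "prev_det_pos": "DET"},
    {"prev_noun": "livre", "prev_noun_pos": "NOM", "prev_det": "le", "prev_det_pos": "DET"},
]
train_labels = ["elle", "il"]

model = make_pipeline(
    DictVectorizer(),  # one-hot encode the sparse categorical features
    MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0),
)
model.fit([extract_features(c) for c in train_contexts], train_labels)

# Predict the pronoun for a new context with a feminine antecedent noun.
test = {"prev_noun": "maison", "prev_noun_pos": "NOM", "prev_det": "la", "prev_det_pos": "DET"}
print(model.predict([extract_features(test)]))
```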