Indices of cognitive effort in machine translation post-editing (2014)
DOI: 10.1007/s10590-014-9156-x

Abstract: Identifying indices of effort in post-editing of machine translation can have a number of applications, including estimating machine translation quality and calculating post-editors' pay rates. Both source-text and machine-output features as well as subjects' traits are investigated here in view of their impact on cognitive effort, which is measured with eye tracking and a subjective scale borrowed from the field of Educational Psychology. Data is analysed with mixed-effects models, and results indicate that t…
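The kind of analysis the abstract describes can be pictured with a short sketch. Below is a minimal mixed-effects example in Python, assuming hypothetical column names (fixation_count, meteor, src_length, participant) and toy data; it is not the study's actual model specification, only an illustration of regressing an effort measure on text features with by-participant random intercepts.

```python
# A minimal sketch, with hypothetical variables and toy data, of a
# mixed-effects analysis: an eye-tracking effort measure modelled against
# source-text and MT-output features, with random intercepts per participant.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per post-edited sentence per participant.
data = pd.DataFrame({
    "fixation_count": [12, 30, 8, 25, 14, 40, 10, 33, 9, 28, 16, 45],
    "meteor":         [0.9, 0.3, 1.0, 0.4, 0.8, 0.2, 0.95, 0.35, 1.0, 0.45, 0.7, 0.15],
    "src_length":     [10, 22, 7, 18, 12, 30, 9, 24, 8, 20, 13, 32],
    "participant":    ["p1"] * 3 + ["p2"] * 3 + ["p3"] * 3 + ["p4"] * 3,
})

# Fixed effects for MT quality and sentence length; random intercept per participant.
model = smf.mixedlm("fixation_count ~ meteor + src_length",
                    data, groups=data["participant"])
result = model.fit()
print(result.summary())
```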

Cited by 31 publications (16 citation statements)
References 37 publications (44 reference statements)
“…Here, the rationale is that increasing task load produces a sensation of effort that individuals can report in numerical terms (O'Donnell and Eggemeier 1986, p. 7). In post-editing, subjective ratings have been used as a measure of cognitive effort by, for example, Koponen (2012) and Vieira (2014). Moorkens et al (2015) contrasted subjective ratings with more objective measures, such as eye movements and number of changes in the raw MT output.…”
Section: Review of Literature
confidence: 99%
“…Sentences at different Meteor levels, produced by different systems, were combined into a single machine-translated text to be presented for editing. This was done by randomly selecting sentences at each available decile of the Meteor spectrum (for further details see Vieira 2014). Sentences in the study sample had Meteor scores ranging between 0.14 and 1.…”
Section: Source Text and Machine Translation Output
confidence: 99%
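As a rough illustration of the sampling procedure described in the statement above, the sketch below picks one sentence at random from each available Meteor decile. The function name and data layout are assumptions for illustration, not the actual implementation from Vieira (2014).

```python
# A minimal sketch (not the authors' code) of assembling a mixed-quality text
# by sampling one machine-translated sentence per Meteor-score decile.
import random

def sample_by_decile(sentences, scores, seed=42):
    """Pick one sentence at random from each available Meteor decile.

    `sentences` and `scores` are parallel lists; scores lie in [0, 1].
    """
    random.seed(seed)
    buckets = {}
    for sent, score in zip(sentences, scores):
        decile = min(int(score * 10), 9)  # e.g. 0.14 -> decile 1, 1.0 -> decile 9
        buckets.setdefault(decile, []).append(sent)
    # Only deciles that actually contain sentences are sampled from.
    return [random.choice(buckets[d]) for d in sorted(buckets)]
```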
“…While HTER is normalised by number of tokens, this could be reflecting a penalisation of extremely short sentences, where any modifications in the MT output will inevitably amount to a large part of the sentence in relative terms - an effect consistent with a pattern observed by O'Brien (2011) and Vieira (2014) for a similar score. However, excluding all occurrences of this sentence from the data did not change results, and the effect still holds for longer sentences, as seen in example (2).…”
Section: Post-editing Effort and Post-edited Quality
confidence: 86%
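The penalisation effect mentioned above follows directly from length normalisation. The sketch below approximates an HTER-style score with a word-level Levenshtein distance (true HTER/TER also counts block shifts), purely to show that a single edit produces a much higher rate in a short sentence than in a long one.

```python
# Rough illustration (not the TER/HTER tool used in the paper) of why
# length-normalised edit rates penalise very short sentences: one edit weighs
# far more when the denominator (token count) is small.

def word_edit_rate(mt_output: str, post_edited: str) -> float:
    hyp, ref = mt_output.split(), post_edited.split()
    # Word-level Levenshtein distance via dynamic programming.
    dp = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        dp[i][0] = i
    for j in range(len(ref) + 1):
        dp[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)  # edits normalised by post-edited length

# One substitution in a 4-token sentence vs. a 12-token sentence:
print(word_edit_rate("the cat sat here", "the dog sat here"))              # 0.25
print(word_edit_rate("the cat sat on the mat near the door all day long",
                     "the dog sat on the mat near the door all day long"))  # ~0.08
```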
“…Aziz, Koponen and Specia 2014; Vieira 2014; O'Brien 2011). It should be noted that these studies explore the connection of effort with quantitative textual features (e.g.…”
Section: Related Work
confidence: 99%
“…2 For more information on the study's methodology, see Vieira (2016). Details of the eye-tracking task can also be found in Vieira (2014), where textual features and participants' working memory capacity are contrasted with post-editing effort.…”
Section: Notes
confidence: 99%