2018
DOI: 10.52034/lanstts.v16i0.454

Assessing quality in live interlingual subtitling: a new challenge

Abstract: Quality-assessment models for live interlingual subtitling are virtually non-existent. In this study we investigate whether and to what extent existing models from related translation modes, more specifically the Named Entity Recognition (NER) model for intralingual live subtitling, provide a good starting point. Having conducted a survey of the major quality parameters in different forms of subtitling, we proceed to adapt this model. The model measures live intralingual quality on the basis of different types…
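
For orientation, the NER model mentioned in the abstract is typically reported as an accuracy rate derived from the word count and two weighted error categories (edition and recognition errors). The snippet below is a minimal sketch of that calculation, assuming the formulation usually attributed to Romero-Fresco & Martínez Pérez (2015); the formula, the weighting and the 98% threshold are assumptions, not details taken from this paper's truncated abstract.

```python
# Minimal sketch of a NER-style accuracy rate (assumed formulation, not quoted
# from the paper): accuracy = (N - E - R) / N * 100, where N is the number of
# words in the live subtitles, E the (weighted) edition errors and R the
# (weighted) recognition errors.

def ner_accuracy(n_words: int, edition_errors: float, recognition_errors: float) -> float:
    """Return the NER accuracy rate as a percentage."""
    if n_words <= 0:
        raise ValueError("n_words must be positive")
    return (n_words - edition_errors - recognition_errors) / n_words * 100

# Example: 1,000 subtitled words, 15 weighted edition errors, 5 recognition errors.
print(round(ner_accuracy(1000, 15, 5), 2))  # 98.0 -- the commonly cited acceptability threshold
```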

Cited by 7 publications (3 citation statements)
References 12 publications

Citation statements:
“…Whether quality can be quantified in the first place can be debatable, of course, since it necessarily implies a certain degree of subjectivity. There have been similar initiatives to address the issue of quality in interlingual subtitling (Nikolić, 2021; Pedersen, 2017; Robert & Remael, 2016), intralingual live subtitling (Romero-Fresco & Martínez Pérez, 2015) and interlingual live subtitling (Robert & Remael, 2017; Romero-Fresco & Pöchhacker, 2017). Among the existing models, the FAR model (Pedersen, 2017) proposes an error-based assessment method based on: Functional equivalence (semantics, style), Acceptability (grammar, spelling, idiomaticity), and Readability (segmentation and spotting, reading speed, line length, punctuation, use of italics).…”
Section: Quantifying Quality (mentioning, confidence: 99%)
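
The FAR description quoted above also lends itself to a simple error-based scorer. The sketch below is illustrative only: the minor/standard/serious penalty weights (0.25/0.5/1) and the per-subtitle normalisation are assumptions commonly associated with Pedersen (2017), not details given in the quoted passage.

```python
# Illustrative FAR-style scorer (assumed weights and normalisation; the quoted
# passage only names the three error areas). Errors are logged per area with a
# severity, converted to penalty points and normalised by the number of subtitles.

PENALTY = {"minor": 0.25, "standard": 0.5, "serious": 1.0}  # assumed severity weights
AREAS = ("functional_equivalence", "acceptability", "readability")

def far_scores(errors: list[tuple[str, str]], n_subtitles: int) -> dict[str, float]:
    """errors: (area, severity) pairs; returns a 0-1 score per area."""
    if n_subtitles <= 0:
        raise ValueError("n_subtitles must be positive")
    points = {area: 0.0 for area in AREAS}
    for area, severity in errors:
        points[area] += PENALTY[severity]
    # Illustrative normalisation: subtract the mean penalty per subtitle from 1.
    return {area: round(max(0.0, 1 - pts / n_subtitles), 4) for area, pts in points.items()}

# Example: 100 subtitles with a handful of logged errors.
errs = [("functional_equivalence", "serious"),
        ("acceptability", "minor"),
        ("readability", "standard"),
        ("readability", "minor")]
print(far_scores(errs, 100))
# {'functional_equivalence': 0.99, 'acceptability': 0.9975, 'readability': 0.9925}
```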
“…Quality assessment in the field of Audiovisual Translation (AVT) has been addressed by several scholars, particularly in relation to interlingual subtitling (Pedersen, 2017; Robert & Remael, 2016), intralingual live subtitling (Romero-Fresco & Martínez Pérez, 2015) and interlingual live subtitling (Robert & Remael, 2017; Romero-Fresco & Pöchhacker, 2017), but to date no model in relation to dubbing has been proposed. As with other AVT modes, the need for a quality assessment method in dubbing arises in academic and in-house training contexts.…”
(mentioning, confidence: 99%)
“…Some studies have already attempted to explore the equivalence of the subtitling arena. Still, most studies in the field of subtitling have only focused on addressing "normal audiences," not particular audiences and their translation technique (Ohene-Djan, Wright & Smith, 2007; Pedersen, 2017; Robert & Remael, 2017; Hudi, Hartono & Yulisari, 2020; Budiana, Sutopo & Rukmini, 2017; Supardi & Putri, 2018; Aminudin & Hidayati, 2021). Pedersen (2017) assessed the quality of subtitling in terms of functional equivalence, acceptability, and readability.…”
Section: Introduction (mentioning, confidence: 99%)