Abstract
Machine translation post-editing quality evaluation has received relatively little attention in translation
pedagogy to date. It is a time-consuming process that involves the comparison of three texts (source text, machine translation and
student post-edited text) and the systematic identification and correction of students’ edits to machine translation (MT) output, or the absence thereof. There are as yet no widely available, standardized, user-friendly annotation systems for use in
translator education. In this article, we address this gap by describing the Machine Translation Post-Editing Annotation
System (MTPEAS). MTPEAS includes a taxonomy of seven categories that are presented in easy-to-understand terms:
Value-adding edits, Successful edits, Unnecessary edits, Incomplete edits, Error-introducing edits, Unsuccessful edits, and
Missing edits. We then assess the robustness of the MTPEAS taxonomy in a pilot study of 30 students’ post-edited texts and offer
some preliminary findings on students’ MT error identification and correction skills.
Publisher
John Benjamins Publishing Company