Machine Translation evaluation
I am about to start using neural machine translation (NMT) for some of my help content, and this is the test I came up with to assess the engine:
1. Have two different reviewers compare the following versions of translated documents; the reviewers won't know how each piece was translated:
a. Human translation
b. Raw Machine Translation
c. Light Post-Edited Machine Translation.
Each version would be rated using standard measures of translation quality (e.g. adequacy and fluency).
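For the blind comparison step, a minimal sketch of how this could be set up in Python. All names and the rating workflow here are assumptions for illustration, not an established tool: the three variants of each segment are shuffled so reviewers see unlabeled items, and an answer key lets you unblind the ratings and average scores per condition afterwards.

```python
import random
from statistics import mean

def make_blind_sheet(segments, seed=42):
    """Shuffle the three variants of each segment so reviewers can't tell
    which is human, raw MT, or light post-edited MT.
    Returns (sheet for reviewers, answer key kept by the evaluator)."""
    rng = random.Random(seed)  # fixed seed so the key stays reproducible
    sheet, key = [], []
    for i, seg in enumerate(segments):
        variants = [("human", seg["human"]),
                    ("raw_mt", seg["raw_mt"]),
                    ("light_pe", seg["light_pe"])]
        rng.shuffle(variants)
        for slot, (label, text) in enumerate(variants):
            item_id = f"{i}-{slot}"
            sheet.append({"item": item_id, "source": seg["source"], "text": text})
            key.append({"item": item_id, "label": label})
    return sheet, key

def score_by_condition(ratings, key):
    """ratings: {item_id: score} collected from a reviewer.
    Unblind with the key and average scores per condition."""
    label_of = {k["item"]: k["label"] for k in key}
    buckets = {}
    for item, score in ratings.items():
        buckets.setdefault(label_of[item], []).append(score)
    return {label: mean(scores) for label, scores in buckets.items()}
```

You would run `make_blind_sheet` once per reviewer batch, hand out only `sheet`, and keep `key` aside until both reviewers have submitted their scores.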
I have a few questions for the community:
- Have you rated MT quality before? What did you learn, or what would you suggest?
- How do you define "light" post-editing for post-editors, as opposed to heavy post-editing?
- How would you improve this approach, given that I have very limited resources, time, and budget for quality assessment of the MT engine?
Thanks so much!