How to Set Up a Linguistic Feedback Loop That Actually Works

It’s pretty frustrating to keep getting reports of the same types of errors occurring over and over again — especially when you’ve got a review process in place. If you don’t figure out what’s going wrong, whoever is reporting repetitive errors is going to get discouraged and stop trying to help.

There are several common points of failure in a linguistic review program, and they all boil down to the same thing: communication. The good news is that the steps in an effective linguistic review loop are a well-worn footpath, not a hike through the rainforest.

Setting Up a Linguistic Feedback Loop

For starters, every effective feedback loop involves two key roles: translator and reviewer. Both roles require bilingual, in-country resources, and both need reference materials and training on the client’s quality standards and style. They also need direct access to each other to eliminate ambiguity: don’t play the telephone game. Then, after translation, here are the steps in an effective linguistic feedback loop:

  1. A reviewer reviews all or a subset of a translation, such as a set percentage or a specific set of languages. This can happen at various stages before, during, or after release. They review against the source language and against an agreed scorecard, which classifies errors by type and severity.
    • A review that is completed post-release is called a diagnostic review.
    • A midpoint review can be used as a check to make sure all translators are producing consistent and coordinated content.

  2. The completed scorecard is provided to the original translator, who reviews and responds to the “errors” in the scorecard. “Preferential changes” are not included in the error count and are often implemented as “suggestions for improvement” — especially in marketing content. They can also represent items that should be included in a revised style guide.
  3. The scorecard is then routed back to the reviewer for approval or questions.
  4. In case of disagreement, you may need an arbitrator: usually a language moderator, a linguist assigned to that project and language who works closely with the team to resolve queries and keep the quality process moving.
  5. The translator implements all changes.
  6. Optionally, a reviewer or QA person can verify the implementations.
  7. The Translation Memory is updated.
  8. Any glossary terms identified or corrected are updated in the termbase, but carefully: the “old” term may still be present in other deliverables. Glossary updates can happen continuously after translation or periodically as appropriate.
  9. Translator instructions and trainings may be updated, based on the quality data.
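
To make the handoffs concrete, here is a minimal sketch in Python of how a single scorecard entry might be tracked through these steps. The class, status, and field names are purely illustrative assumptions, not taken from any particular translation management system.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Status(Enum):
    REVIEWED = auto()              # step 1: reviewer logs the error on the scorecard
    TRANSLATOR_RESPONDED = auto()  # step 2: translator accepts or disputes it
    REVIEWER_APPROVED = auto()     # step 3: reviewer approves or asks questions
    ARBITRATED = auto()            # step 4: language moderator resolves a dispute
    IMPLEMENTED = auto()           # step 5: translator applies the change
    VERIFIED = auto()              # step 6: optional QA verification
    TM_UPDATED = auto()            # step 7: Translation Memory refreshed


@dataclass
class ScorecardEntry:
    segment_id: str
    error_type: str                 # e.g. "accuracy", "terminology"
    severity: str                   # "critical", "major", or "minor"
    history: list = field(default_factory=list)

    def advance(self, status: Status, note: str = "") -> None:
        """Record each handoff so there is a trail showing the review happened."""
        self.history.append((status, note))


entry = ScorecardEntry("seg-042", "terminology", "major")
entry.advance(Status.REVIEWED, "Approved glossary term not used")
entry.advance(Status.TRANSLATOR_RESPONDED, "Agreed; will correct")
entry.advance(Status.REVIEWER_APPROVED)
entry.advance(Status.IMPLEMENTED)
entry.advance(Status.TM_UPDATED)
print(entry.history)
```

Keeping a history like this also gives you the proof-of-review and trend data discussed under “Other Benefits” below.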

A Bit More on Scorecards

A scorecard objectively documents and clearly describes your quality target, as well as can be expected when “quality” is intrinsically vague. A feedback loop isn’t much good without a clear scorecard: it is the medium through which errors are caught and fixed. It is the communication device. Moreover, it represents your project’s unique and precise approach to balancing cost, speed, and quality, carefully tuned for the type of content, medium, and audience. In other words, don’t adopt a generic template; work with your vendor to develop a customized scorecard.

In a good scorecard, errors are indexed by severity and type:

  • Severity: critical, major, or minor
  • Type: accuracy (fidelity to source), language quality, terminology compliance, country standards, and style guide/project guideline adherence

Each error is weighted: a critical accuracy error, for example, carries the highest number of negative points.
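
As a rough illustration, here is how a weighted scorecard tally might be computed. The point values, the per-1,000-word normalization, and the pass threshold are all assumptions for the sake of example; the real numbers should come from the scorecard you agree on with your vendor.

```python
# Illustrative weights and threshold only; the real values belong in the
# scorecard you agree on with your vendor.
SEVERITY_WEIGHTS = {"minor": 1, "major": 3, "critical": 10}

# Each logged error: (error type, severity). Preferential changes are excluded.
errors = [
    ("accuracy", "critical"),
    ("terminology", "major"),
    ("style guide", "minor"),
]

word_count = 2000  # words in the reviewed sample

# Sum the weighted penalties, then normalize per 1,000 words so samples of
# different sizes can be compared against the same pass threshold.
penalty = sum(SEVERITY_WEIGHTS[severity] for _, severity in errors)
penalty_per_1000 = penalty / word_count * 1000

PASS_THRESHOLD = 8.0  # hypothetical threshold, for illustration only
print(f"Weighted penalty per 1,000 words: {penalty_per_1000:.1f}")
print("PASS" if penalty_per_1000 <= PASS_THRESHOLD else "FAIL")
```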

Skip Steps at Your Own Risk

If you think this process takes a lot of time, you are right. Review throughput ranges from roughly 2,000 words per hour for straightforward content down to 500 words per hour for highly technical, dense content. (The latter may require review by a linguist who is also a subject matter expert in the content.)

There are two ways you can speed up review: engage more than one reviewer or reduce the amount of content to review.
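
A quick back-of-the-envelope calculation shows how much each lever buys you. The word count, throughput, and sampling rate below are assumptions chosen purely for illustration:

```python
# Rough arithmetic only; substitute your own word counts and throughput figures.
word_count = 50_000        # words in the deliverable
words_per_hour = 1_500     # somewhere between the ~2,000 and ~500 wph extremes above
sample_rate = 0.20         # review only a 20% sample of the content
reviewers = 2              # split the remaining work between two reviewers

full_review_hours = word_count / words_per_hour
sampled_split_hours = (word_count * sample_rate) / words_per_hour / reviewers

print(f"Full review, one reviewer:       {full_review_hours:.1f} hours")
print(f"20% sample split two ways, each: {sampled_split_hours:.1f} hours")
```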

But the reality is, this process doesn’t always happen as prescribed. If there is not time to complete all the steps, be prepared to accept the impact this may have on quality. The process I outline above will help you get the most from your review program.

Other Benefits

Eliminating repeat errors is only one reason to formally track and share the linguistic feedback. This process provides:

  • Proof that the review happened
  • Training for the translator. What good is tracking linguistic feedback if you don’t share it with the translators so they can improve?
  • A better understanding of quality trends that may be occurring

 

How do you make sure the errors caught in your translations get fixed and don’t happen again? Share your thoughts below.

 

Many thanks to Moravia’s Marta Kapinusova for her keen insight on the quality process!

 
