How to Set Up a Linguistic Quality Feedback Loop That Actually Works

It’s pretty frustrating to get reports of the same types of translation errors over and over again—especially with what seems to be a solid linguistic quality assurance (LQA) process in place. If you don’t figure out what’s going wrong, whoever reports those repetitive errors might get discouraged and stop trying to help.

And that could be part of the problem. All common points of failure in LQA programs boil down to the same thing: communication. The good news is that the steps in an effective linguistic QA feedback loop follow a well-worn footpath, not a hike through the rain forest. Ready to set off?

Setting up an LQA feedback loop

Besides an (optional) arbitrator, every effective feedback loop involves two key roles: translator and reviewer. Both roles require bilingual, in-country resources, and both need reference materials and training on the client’s quality standards and style. They also need direct access to each other to eliminate lag and communication issues.

With all that in place, here’s what their feedback loop should look like:

  1. The reviewer checks all or a subset of a translation against the source, recording errors in a scorecard (more on scorecards below). This can happen at various stages: before, during, or after release (the latter is known as diagnostic review). For example, a midpoint review can verify that all translators are producing consistent, coordinated content.
  2. The reviewer hands off the completed scorecard to the original translator, who assesses and responds to the errors recorded. Preferential changes are not included in the error count and are often implemented as suggestions for improvement—especially in marketing content. These suggestions can also surface items worth adding to the style guide.
  3. The translator routes the scorecard back to the reviewer for approval or to answer their questions.
  4. In case of disagreement, an arbitrator might need to step in. This is usually a linguist knowledgeable in that project and language who can work closely with the team to resolve queries and move the language quality assurance process along.
  5. The translator implements all changes.
  6. Optionally, an in-country reviewer or QA person can verify the implementations.
  7. The Language Lead updates the Translation Memory with the final translations.
  8. The Language Lead also updates the termbase with any glossary terms that were newly identified or corrected—but carefully, because the old term might be present in other deliverables. Glossary updates can happen continuously after translation or periodically as appropriate.
  9. Lastly, the Language Lead updates translator instructions and trainings according to LQA data.
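The routing logic above can be sketched as a simple state machine. This is an illustrative model, not a real tool: the stage names and the `disputed`/`verify` flags are assumptions made for the example, mirroring the numbered steps (arbitration only fires on unresolved disagreement, and in-country verification is optional).

```python
from enum import Enum, auto

class Stage(Enum):
    REVIEW = auto()               # step 1: reviewer fills the scorecard
    TRANSLATOR_RESPONSE = auto()  # step 2: translator assesses the errors
    RECONCILIATION = auto()       # step 3: scorecard routed back to reviewer
    ARBITRATION = auto()          # step 4: only if disagreement remains
    IMPLEMENTATION = auto()       # step 5: translator implements all changes
    VERIFICATION = auto()         # step 6: optional in-country check
    ASSET_UPDATE = auto()         # steps 7-9: TM, termbase, instructions

def next_stage(stage: Stage, disputed: bool = False, verify: bool = True) -> Stage:
    """Advance the feedback loop one step.

    `disputed` reflects whether translator and reviewer still disagree
    after reconciliation; `verify` reflects whether the optional
    in-country verification step is part of this project.
    """
    if stage is Stage.REVIEW:
        return Stage.TRANSLATOR_RESPONSE
    if stage is Stage.TRANSLATOR_RESPONSE:
        return Stage.RECONCILIATION
    if stage is Stage.RECONCILIATION:
        # An arbitrator steps in only when disagreement persists.
        return Stage.ARBITRATION if disputed else Stage.IMPLEMENTATION
    if stage is Stage.ARBITRATION:
        return Stage.IMPLEMENTATION
    if stage is Stage.IMPLEMENTATION:
        return Stage.VERIFICATION if verify else Stage.ASSET_UPDATE
    if stage is Stage.VERIFICATION:
        return Stage.ASSET_UPDATE
    raise ValueError("Loop complete: assets updated, nothing left to route.")
```

Modeling it this way makes the two decision points explicit: arbitration is a branch, not a mandatory stop, and verification can be configured out.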

A note on scorecards

Scorecards should objectively document and clearly describe your quality target—as well as can be expected when “quality” is intrinsically vague. To do this, good scorecards classify errors by: 

  • Severity: critical, major or minor, and
  • Type: accuracy (fidelity to the source), language quality, terminology, compliance with in-country standards, and adherence to project and style guidelines.
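To make the severity-and-type classification concrete, here is a minimal sketch of a scorecard entry and a weighted score. The specific weights and the pass threshold are hypothetical placeholders: real values are exactly what you would tune with your LSP for your content, medium, and audience.

```python
from dataclasses import dataclass

# Hypothetical severity weights — actual weights belong in your custom scorecard.
SEVERITY_WEIGHTS = {"critical": 10, "major": 5, "minor": 1}

# Error types, mirroring the categories listed above.
ERROR_TYPES = {
    "accuracy",            # fidelity to the source
    "language_quality",
    "terminology",
    "country_standards",   # in-country standards compliance
    "style_guide",         # project and style guideline adherence
}

@dataclass
class ErrorEntry:
    error_type: str  # one of ERROR_TYPES
    severity: str    # one of SEVERITY_WEIGHTS
    note: str = ""   # reviewer's description of the error

def quality_score(errors: list[ErrorEntry], words_reviewed: int,
                  threshold_per_1000: float = 10.0) -> tuple[float, bool]:
    """Return (weighted error points per 1,000 words, pass/fail).

    The threshold is illustrative; a real scorecard sets its own.
    """
    points = sum(SEVERITY_WEIGHTS[e.severity] for e in errors)
    normalized = points * 1000 / words_reviewed
    return normalized, normalized <= threshold_per_1000
```

For example, one major terminology error plus one minor accuracy error in a 1,200-word sample yields 6 points, or 5.0 per 1,000 words, which would pass under the placeholder threshold.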

A feedback loop and a scorecard go hand in hand: a loop isn’t helpful without a clear scorecard, because the scorecard is the medium through which errors are caught and fixed. In other words, it is the communication device. Moreover, it represents your project’s unique approach to balancing cost, speed and quality, carefully tuned for the type of content, medium and audience.

For that reason, don’t use a generic template. Work with your LSP to develop a customized scorecard.

Final thoughts: skip steps at your own risk

If the above process looks like a big time investment, that’s because it is. Review throughput ranges from around 2,000 words per hour down to 500 words per hour for highly technical, dense content. (The latter content type may require review by a linguist who is also a subject matter expert.)

There are two ways you can speed up linguistic quality assurance: engage more than one reviewer or reduce the amount of content to review.

The reality is, neither is always possible. But the process outlined above offers you the best chance of getting the most from your LQA program. Eliminating repeat errors isn’t the only benefit; ultimately, regular feedback helps clarify and improve the entire LQA process for both linguistic roles (sometimes the reviewer needs feedback, too). So, if you find yourself strapped for time to complete each step, look to your LSP for support. We promise you won’t regret investing effort today to save time down the line.


How do you make sure the errors caught in your translations never see the light of day again? Get in touch with your thoughts.