Leaving quality reviews to the end game is a risky move, and that’s why localization best practices call for multiple checks throughout the process.
Computer-aided translation (CAT) tools offer automated checks that accelerate the path to consistency and quality, but too many automated checks throughout the translation process can generate more lag than they’re worth. Let’s explore how that happens.
Most notably, automated checks produce a log of potential issues, which a linguist must review to isolate the legitimate issues from the false positives — i.e., context-appropriate exceptions to the precise rule the automated check is trying to enforce.
Beware of sharing the output of automated checks with anyone who doesn’t understand this, or they’ll misread a long list of false positives as a sign of poor translation quality. Quite the contrary, a report full of false positives and zero legitimate issues is a measure of excellence.
But if you’re running that check several times throughout the process and never finding a legitimate issue, you’re wasting a lot of time — especially when you multiply each check by the number of target languages.
Ensuring terms are translated correctly sounds like a no-brainer, but it’s not always a top priority. If you’re translating user-assistance material and the user manual references the “Settings” screen, but the device only has a “Configurations” screen, that’s a real problem. But strict adherence to a term base can produce content that sounds like a robot, and that’s not helpful with marketing content, blogs, fiction, or other tone-sensitive content that benefits from a little more artistic license.
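To make the mechanics concrete, a term-base check boils down to something like the sketch below. The term-base entry and the Spanish renderings ("Configuraciones", "Ajustes") are hypothetical examples, not any tool's actual data; real CAT tools do far richer matching (case, inflection, stemming).

```python
# Hypothetical term base: source term -> approved target-language term.
TERM_BASE = {"Settings": "Configuraciones"}

def check_terminology(source: str, target: str) -> list[str]:
    """Return potential terminology issues. A linguist still has to
    separate real errors from context-appropriate false positives."""
    issues = []
    for src_term, tgt_term in TERM_BASE.items():
        if src_term in source and tgt_term not in target:
            issues.append(f"expected '{tgt_term}' for '{src_term}'")
    return issues

# "Ajustes" is a legitimate word, but not the approved term, so it's flagged.
print(check_terminology("Open the Settings screen.",
                        "Abra la pantalla de Ajustes."))
```

Note that the check only reports candidates; whether "Ajustes" is acceptable in a tone-sensitive text is exactly the judgment call the automation cannot make.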
Is there a period in the source string that is missing in the target? Are there two spaces instead of one? How crucial is it to find and correct these issues at every checkpoint? Given that English uses a comma in 1,500.00 where many other languages would write 1.500,00, is it worth slogging through 72 false positives in pursuit of one actual missing period? It really depends on the project.
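A naive number check like the one behind those 72 false positives might look like the following sketch (an illustration, not any tool's actual implementation). Locale-correct reformatting of the same amount is flagged because the tokens no longer match verbatim:

```python
import re

def number_mismatch(source: str, target: str) -> bool:
    """Flag segments whose numeric tokens differ verbatim between source
    and target. Locale-correct reformatting (1,500.00 -> 1.500,00) is a
    classic false positive for this kind of check."""
    def numbers(text: str) -> list[str]:
        return re.findall(r"[\d.,]*\d", text)
    return numbers(source) != numbers(target)

print(number_mismatch("Total: 1,500.00 USD", "Total: 1.500,00 USD"))  # flagged
print(number_mismatch("5 items", "5 articles"))                       # not flagged
```

A smarter check would normalize separators per target locale, but that requires per-language configuration, which is precisely the setup cost the article's cost-benefit question is about.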
A segment that appears exactly the same in the source and the translation may be a case of a translator skipping over a string and leaving it untranslated, and that’s definitely worth checking for. But if the source content has a lot of proper names, this check may generate a lot of false positives.
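The same-as-source check itself is only a few lines (sketched below for illustration); the proper-name pair shows exactly where the false positives come from:

```python
def untranslated_segments(pairs: list[tuple[str, str]]) -> list[str]:
    """Return source segments that are identical to their translation.
    Genuinely skipped strings and legitimately unchanged proper names
    both land in this list."""
    return [src for src, tgt in pairs if src.strip() == tgt.strip()]

pairs = [
    ("Save your changes.", "Guarde los cambios."),
    ("Save your changes.", "Save your changes."),  # genuinely untranslated
    ("OpenStreetMap", "OpenStreetMap"),            # proper name: false positive
]
print(untranslated_segments(pairs))
```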
If you’re translating strings inside a file that needs to be reimported into your content management system (CMS), you want pristine formatting and encoding so the file doesn’t break the software or cause display issues in the UI.
Technical checks catch compromised code, character limitations, encoding issues, and the like. They’re important when the localized files are going to be used as an “input” in the next step of the process, but might not be important for workflows that will have additional built-in checks, such as desktop publishing or multimedia engineering.
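As one illustration of a technical check, a placeholder-integrity comparison catches the kind of compromised code that breaks a file on reimport. This is a minimal sketch that assumes curly-brace placeholders like `{user}`; real formats (XLIFF inline tags, printf specifiers) need their own patterns:

```python
import re

def placeholder_mismatch(source: str, target: str) -> bool:
    """Flag segments whose {placeholders} differ between source and
    target; a dropped or altered placeholder can break the build or
    the UI when the file is reimported into the CMS."""
    def placeholders(text: str) -> list[str]:
        return sorted(re.findall(r"\{[^{}]*\}", text))
    return placeholders(source) != placeholders(target)

print(placeholder_mismatch("Hello, {user}!", "¡Hola, {user}!"))   # intact
print(placeholder_mismatch("Hello, {user}!", "¡Hola, usuario!"))  # flagged
```

Unlike the linguistic checks above, a hit here is almost never a false positive, which is why technical checks matter most when the localized file is the input to the next automated step.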
CAT Tool Checks
All CAT tools (whether licensed or proprietary) include built-in checks that are generally very easy to run, both for the translator prior to delivery and for the localization engineer managing the post-processing. But the CAT tool cannot check for errors introduced by the CAT tool itself: sometimes a translation looks fine in the CAT environment, but exporting to Microsoft Word reveals an encoding error. It's a safe bet to include additional checks outside the CAT environment.
Successful localization programs develop their own best practices around the effective use of automated checks because each project needs its own approach that works within the constraints of timeline and budget.
That’s why it’s crucial to discuss project requirements candidly with an experienced localization project manager before the project gets underway. At the very least, make sure you build in enough time at the end of the localization cycle to run final checks and fix any remaining errors.