The approach we have taken with automatic post-editing has a few interesting advantages:
- All translation work is conducted using dedicated, enterprise-grade NMT models optimized for high-quality, high-scale applications, while offering reasonable compute requirements and a low total cost of ownership. This technology has been successfully used by large user communities and is deployed across hundreds of commercial and public sector clients.
- The quality estimation models have been calibrated using human-labeled examples, using our in-house expert linguistic teams. This allows us to tune the performance of the model and extend coverage to new languages as needed.
- The automated post-editing service utilizes a dedicated, smaller LLM hosted by RWS. This allows us to tune the LLM's performance, provide the highest levels of data security, and operate within a predictable cost structure. It also insulates the service from third-party API instability.
- Building the solution from three separate modules (translation, quality estimation, and post-editing) allows us to tweak not only the individual components but also how they work together. For example, Language Weaver can now iterate the evaluate/edit loop several times until the desired outcome is achieved. When an edit task completes, the translation is sent back for quality estimation; if the result is still found inadequate, the sentence is passed to the post-editing module again. This time, however, the system captures additional context from the source document and uses it to generate a better translation. (So far, our tests have shown that allowing up to three iterations provides the best compromise between quality, speed, and cost for most types of content.)
- Automatic post-editing can be used wherever traditional MT is used, because it does not change the ways in which translations are consumed by external systems and workflows. Crucially, in the localization use case, where some degree of human intervention may still be required (or mandated, as is the case for much regulatory content), automatic post-editing can integrate seamlessly into current workflows to alleviate the post-editing burden presently shouldered by human linguists.
- Finally, since Language Weaver keeps track of all the automated edits and estimation results, the by-product of the translate/evaluate/edit sequence is a fantastic source of feedback for the translation engine. The auto-adaptable language pairs monitor the incoming edits and automatically update their models to reflect the observed improvements.
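The iterative evaluate/edit loop described above can be sketched in a few lines. This is an illustrative outline only: the function names (`translate`, `estimate_quality`, `post_edit`), the quality threshold, and the way extra context is requested are assumptions for the sketch, not the actual Language Weaver API.

```python
def refine_translation(source, translate, estimate_quality, post_edit,
                       max_iterations=3, threshold=0.8):
    """Translate, then loop quality estimation and post-editing until the
    score is adequate or the iteration budget (three passes in our tests)
    is exhausted. The callables are injected stand-ins for the three
    modules: NMT, quality estimation, and the post-editing LLM."""
    translation = translate(source)
    for attempt in range(max_iterations):
        score = estimate_quality(source, translation)
        if score >= threshold:
            break  # quality deemed adequate; stop editing
        # On repeat passes the system would pull in more document context;
        # here the attempt number stands in for that widening context.
        translation = post_edit(source, translation, context_level=attempt + 1)
    return translation
```

Separating the loop from the three modules mirrors the modular design described above: each component can be tuned independently, and the orchestration (iteration count, threshold) can be tweaked without touching the models.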
Optimizing the task of post-editing is a major opportunity for all involved in the translation process, from enterprise customers, through language service providers, to individual linguists. Using a combination of auto-adaptive MT and LLMs to minimize manual post-editing effort allows limited resources to be prioritized for high value-added activities. It also increases the usefulness of automated translation in use cases with minimal room for human intervention, or where time-to-market or time-to-insight is the primary driver, such as the high-volume use cases in legal eDiscovery, regulatory compliance, or digital forensics. For localization processes, the solution helps improve ROI through significant productivity gains. And for organizations that want to benefit from adaptable MT models but cannot because they lack enough previously translated material, Language Weaver's Generative Language Pairs are a great option to jump-start their translation process and initiate a virtuous improvement cycle.
Automatic post-editing is available for Language Weaver Cloud, supporting over 20 Generative Language Pairs, with more being rolled out regularly.
Ready to enhance your translations with AI-powered post-editing? Contact us to discuss how Language Weaver can meet your needs.

Author
Bart Maczynski
VP of Machine Learning, Solutions Consulting
Bart is VP of Machine Learning at RWS.
