Guide to Evaluating Machine Translation Software - RWS

Machine translation software can dramatically increase the speed and efficiency of translation work supporting investigations, but it represents a significant investment for most agencies, so it is a decision you want to get right. 

Machine translation performance is often boiled down to BLEU or LEPOR scores, but our extensive guide shows how these metrics, designed for use during the development of machine translation systems, are often misused and can be misleading when relied on alone to assess MT performance. 
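To see why a score alone can mislead, consider how BLEU works: it rewards exact n-gram overlap between a candidate translation and a reference. The sketch below is a deliberately simplified, illustrative sentence-level BLEU (real evaluations use corpus-level tooling such as sacrebleu, with smoothing); the example sentences are hypothetical. It shows that a perfectly adequate paraphrase can score near zero simply because it does not reuse the reference's wording.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def simple_bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of modified
    n-gram precisions times a brevity penalty. Illustrative only --
    not a replacement for standard corpus-level BLEU tooling."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ng = Counter(ngrams(cand, n))
        ref_ng = Counter(ngrams(ref, n))
        # Clip each n-gram's count by its count in the reference.
        overlap = sum(min(c, ref_ng[g]) for g, c in cand_ng.items())
        total = max(sum(cand_ng.values()), 1)
        # Tiny floor avoids log(0) when there is no overlap at all.
        precisions.append(max(overlap, 1e-9) / total)
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the patent application was filed on 12 march"
# An exact match scores 1.0 ...
print(simple_bleu("the patent application was filed on 12 march", ref))
# ... while an adequate paraphrase scores near zero, because it
# shares almost no 3-grams or 4-grams with the reference.
print(simple_bleu("the application for the patent was submitted on march 12", ref))
```

This is exactly the failure mode the guide discusses: surface-overlap metrics cannot distinguish a wrong translation from a correct one phrased differently, which is why domain-specific human evaluation still matters.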

“Many buyers today realize that system performance for machine translations on their specific subject domains and translatable content for different use cases matters much more than how generic systems might perform on news stories.” 

Our in-depth guide to machine translation tools and technology sets out a more holistic approach to evaluation – one that takes into account the real-world practicalities of how the software is used by staff, its adaptability to the organization’s specific content and terminology, and why data and security should never be overlooked. 

It also includes an MT evaluation checklist so you can assess machine translation providers across multiple criteria to see if they are offering an MT solution that will work well for your agency’s unique use cases.

Please fill in your details below to download the white paper