If you caught my colleague Jon Ritzdorf’s webinar, What Blows Our Minds About the Latest Translation Tech, you may recall that one of the things Jon was excited about was the melding of TM (translation memory) & MT (machine translation) technologies, specifically in the forms of Predictive TM and Adaptive MT.
He’s not alone. Many of us have been intrigued by this model whereby a translator is offered real-time suggestions from the system—and the system in turn learns from their edits immediately. For a good introduction to Interactive Translation models, in addition to Jon’s webinar I’d recommend this article, Beyond Post Editing: Advances in Interactive Translation Environments.
Integrated TM+MT features are showcased in translation platforms such as Lilt, MateCAT, Tolq, and SDL Trados. And at localization industry events such as TAUS and LocWorld, the concept has a buzz factor that’s up there with neural MT.
What is it about this approach that’s captured the industry’s attention? A hypothesis:
It’s not (only) about increasing translation productivity. It’s about the evolving relationship between humans and technology. We like it because it’s evidence in support of the position that technology should serve humans, versus the position that technology competes with humans.
Relationship status: it’s (historically been) complicated
1812 engraving depicting Luddites smashing a loom. Source: Wikipedia
Humanity’s tempestuous relationship with technology goes way back. As in the Gartner “Hype Cycle” model, history is filled with examples of a repeating pattern: a technological advancement goes from being exciting, to threatening, to ultimately taken for granted. Think the Industrial Revolution and, on a smaller scale, MP3s.
Next time you’re in your local bookstore, walk through the science fiction section and count how many titles are related to “technology as an existential threat.” It’s an anxiety in our collective psyche, and it’s understandable. If technology makes what we’re doing less relevant or valuable, it is in fact a competitive threat.
In today’s localization industry, that collective anxiety has been focused on machine translation. As machine translation technology becomes demonstrably more viable for general use, a narrative has arisen that says, “If you’re a professional translator, get used to being a post-editor instead, because that’s your future.”
It’s been an unwelcome message for many, evoking phobias of a dystopian future (think Terry Gilliam’s Brazil) where human labor is relegated to the tedious and uncreative task of fixing the errors of a machine that’s just not quite good enough.
Humans in service to technology.
So, when examples of adaptive MT started to surface, folks became excited. By embedding MT within a “TM-like” transaction, it showed an alternative model where machine translation looked more like a tool for translators.
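To make the interaction model concrete, here is a minimal sketch of that “TM-like” transaction, with a toy stand-in for the adaptive engine (all class and function names are hypothetical, not from any real CAT tool): the system proposes a translation, the translator edits it, and the confirmed pair updates both the TM and the engine before the next segment is suggested.

```python
class ToyAdaptiveMT:
    """Toy stand-in for an adaptive MT engine: remembers word-level corrections."""

    def __init__(self):
        self.glossary = {}

    def translate(self, source):
        # Apply any corrections learned so far, word by word.
        return " ".join(self.glossary.get(w, w) for w in source.split())

    def learn(self, source, target):
        # Naive word-aligned update; only meaningful for equal-length pairs.
        src, tgt = source.split(), target.split()
        if len(src) == len(tgt):
            self.glossary.update(zip(src, tgt))


def interactive_session(segments, tm, engine, get_edit):
    """Yield (suggestion, confirmed_target) pairs for each source segment."""
    for source in segments:
        # Prefer an exact TM hit; otherwise fall back to the adaptive engine.
        suggestion = tm.get(source) or engine.translate(source)
        # The translator reviews and edits the suggestion in real time.
        target = get_edit(source, suggestion)
        tm[source] = target           # The TM learns the confirmed pair...
        engine.learn(source, target)  # ...and so does the adaptive engine,
        yield suggestion, target      # so the very next suggestion reflects it.
```

The point of the design is the tight loop: the translator’s edit on segment one already improves the machine’s suggestion on segment two, which is what makes the experience feel like a tool rather than a cleanup job.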
Technology in service to humans.
Designing better human <-> technology relationships
Robot love. Source: Archive.org
The Adaptive MT model is interesting to me for its potential to positively influence the translation industry and help normalize the use of MT. I expect that as translators directly experience personal productivity improvements through MT, they’ll want more of it, and this will create a pull effect on MT demand.
But more than that, to me it’s also an example of the massive importance of human <-> technology interaction design. It shows that the difference between an experience that feels grueling versus one that feels empowering can be based on nuanced differences in the user interaction model.
Yet there seem to be a lot of poorly designed human <-> technology interactions out there. How much do those cost us? Not only in lost efficiency, but in the harder-to-measure cost of demoralization and resistance to technology adoption?
It makes me think about what other language technologies could be more widely applied, and made more universally useful through rethinking the interaction model.
For example, what if we take those QA technologies that are typically invoked only during late-in-process LQA steps, and surface them dynamically, upstream, to the folks who are doing the translation? Or, what if we take the process and translation performance data that we’re generating today and try to convert it into useful actionable intelligence for the translator? Wouldn’t it be useful to know, for example, that the translation you’re currently creating is similar to a translation that was heavily edited last time? Or to have visibility into how your translation “performed” out in the real world?
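That last idea can be sketched in a few lines. The following is an illustrative example, not a real tool’s API: it flags the current source segment when it closely resembles a past segment whose translation needed heavy editing. Both thresholds are invented defaults.

```python
from difflib import SequenceMatcher


def flag_risky_segments(source, history, sim_threshold=0.8, edit_threshold=0.3):
    """Warn the translator when `source` resembles a past segment whose
    translation was heavily edited.

    `history` is a list of (past_source, edit_ratio) pairs, where edit_ratio
    is the fraction of the previous translation that was changed (0.0-1.0).
    Returns a list of (past_source, similarity, edit_ratio) warnings.
    """
    warnings = []
    for past_source, edit_ratio in history:
        similarity = SequenceMatcher(None, source, past_source).ratio()
        if similarity >= sim_threshold and edit_ratio >= edit_threshold:
            warnings.append((past_source, similarity, edit_ratio))
    return warnings
```

Surfacing a warning like this while the translator is typing, rather than in a late-stage LQA report, is exactly the kind of interaction-model shift the adaptive MT example demonstrates.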
A final thought. As a pro-technologist, I find that interactive translation innovations make me optimistic that technologies can go from being perceived as competitive threats to being empowering assets for translators. I like the idea that not only can it happen, but it can be made to happen through willful innovation and conscientious design.
What’s your relationship with language technology? Does it give you anxiety? Does it excite you? Something else entirely?
I’d love to hear your input. Let’s start a conversation!