Post-Editing of the Future: An Interview with Félix do Carmo

Not long ago, machine translation post-editing (MT PE) was a peripheral activity, reserved for enterprises with a penchant for pioneering and project requirements conducive to a “just right” mix of quality, cost and speed. Often, the economics of post-editing—the need to recruit, train and pay translators to correct machine output—tipped the scales back toward the traditional approach of human translators using classic productivity tools like translation memory (TM). The numbers just didn’t support MT PE as a viable option.

How things have changed! In late 2016, Google announced their Google Neural Machine Translation system (GNMT), which promised a quantum leap in the quality of MT output. This announcement kicked off a series of “space races,” with MT developers competing to build the highest-performing engines and language service providers racing to fold MT into their clients’ real-world language programs.

While no engine or MT process is yet good enough to be trusted in every situation without some level of human supervision, MT’s evolutionary leap has made it possible to translate content that would otherwise be cost-prohibitive—content like customer feedback, in places like support forums.

And so, the demand for post-editing has boomed. Even if you reject the “post-editing” nomenclature, it is now indisputably a key activity of the language services industry.

It would behoove us, therefore, to look closely and critically at how post-editing gets done. Are our processes effective? Efficient? Are post-editors empowered with the right technology? Are they incentivized to practice desirable behaviors? What would an optimized process look like, and what should we do to get there from here?

These are some of the questions that I discussed with Félix do Carmo, post-doctoral research fellow at the ADAPT Research Centre and head of the KAITER project on interactive post-editing.

Félix do Carmo

JIM: Félix, could you explain the KAITER project to us?

FÉLIX: The main purpose is to develop tools to help translators post-edit with more support—not only post-editing, but also to help with translating and making revisions. The idea is that the main technical task that translators do when they are post-editing is editing, which is composed of very small actions performed on text units: deleting, inserting, moving and replacing words or groups of words. I believe we need to study the patterns of how translators use these actions, and then assist them with tools that enable better quality and higher productivity.

KAITER stands for Knowledge-Assisted Interactive Translation Editing and Revision.
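
To make that taxonomy concrete: the actions Félix describes are exactly what a word-level diff recovers. Below is a minimal illustrative sketch (not KAITER code; the example sentences are invented) using Python’s standard difflib to classify the edits between a raw MT segment and its post-edited version. In this simple view, a “move” surfaces as a paired delete and insert.

# Classify word-level post-editing actions between an MT segment and
# its post-edited version. Illustrative sketch only, not KAITER code.
from difflib import SequenceMatcher

def edit_actions(mt_output, post_edited):
    """Return the non-equal diff opcodes: insert, delete or replace."""
    mt_tokens = mt_output.split()
    pe_tokens = post_edited.split()
    matcher = SequenceMatcher(a=mt_tokens, b=pe_tokens)
    return [(tag, mt_tokens[i1:i2], pe_tokens[j1:j2])
            for tag, i1, i2, j1, j2 in matcher.get_opcodes()
            if tag != "equal"]

mt = "the contract must be signed for the both parties"
pe = "the contract must be signed by both parties"
for tag, removed, added in edit_actions(mt, pe):
    print(tag, removed, "->", added)
# prints: replace ['for', 'the'] -> ['by']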

JIM: What led you to this area of research?

FÉLIX: I’ve been a translator since 1994 and have a small translation company in Portugal. Working with CAT tools, we grew frustrated with technology that feeds you information but doesn’t give you a better way to make use of it. When we started post-editing machine-translated content, it was frustrating to have to repeat actions, replacing the same words over and over, with no intelligent way of interacting with the tools used in the machine translation process.

JIM: In general, how effectively is post-editing being practiced today?

FÉLIX: Post-editing suffers from everything that translation processes suffer from. Translators work with heavily segmented content, without enough context to make decisions, and under restrictions like length limits and strict guidelines that change from project to project.

Post-editing came along as a new process, but it isn’t an improvement in terms of context and decision support. For example, if you are translating software and you need to decide whether to use a specific word, it doesn’t really make a difference if you’re working in a translation project or a post-editing one.

On top of that, post-editing has traditionally been associated with low-value data such as user-generated content and “good enough” quality, so the decision process for translators has been overly conditioned by project restrictions.

In terms of the tools, I believe we need not only to study the decision process translators apply when they’re post-editing, but also to understand what information they use to make decisions, and then support that.

For example, if we identify a specific text unit that translators are always researching, and specific websites that give them most of the answers they’re looking for, maybe we can automate that connection or make some link between the problems that they’re facing and the available information that can help them.

JIM: Do you see the rise of neural MT and the rise of hybrid MT methods changing the nature of the problem that you’re trying to solve?

FÉLIX: The fact that statistical machine translation breaks sentences into phrases and is able to give suggestions for phrases makes for a very good interactive element. But in neural machine translation, the results seem to be more fluent. The sentences seem to be more cohesive but are not so easy to break into phrases.

I don’t think there has been enough research on how statistical versus neural output affects consistency within projects and how decisions that depend on the previous segments differ between the two.

You may have a sequence of ten sentences which are perfect, but because they’re included in a project in which the translator has to use a specific set of terminology, they may still have to be edited. We need to analyze that in terms of what we give to translators for each project.

It’s not so clear whether there’s less effort because we’re providing better outputs to post-editors.

JIM: Do you believe that we are properly incentivizing post-editors to engage in the right behavior? Or is there a better model out there?

FÉLIX: I don’t think we are taking the right attitude to attract translators to post-editing, especially translators who have been in the business for a long time, know how to deal with complex issues, and are experienced enough to work with new content and adapt it to the client’s expectations. But I would say that translators don’t resist post-editing as much as people usually claim.

Post-editing should be seen as a task that empowers translators to do better work. But when we reduce prices and hand down guidelines that say, “don’t correct too much,” we give translators the idea that they don’t get to decide what’s best. It’s not just a question of giving the correct remuneration; it’s about enabling translators to make decisions in a complex environment.

JIM: Would interactive post-editing tools change that?

FÉLIX: Interactive post-editing is essential when you have good-quality machine translation output that requires editing rather than full rewriting.

Most of the interactive systems available ask the translator to generate the translation in their head and then intervene in the writing process, sometimes helping and sometimes interfering with it. An interactive post-editing system instead presents a full translation suggestion, and then uses the components that created that suggestion to support the editing decision process: estimating the probability that a word needs to be moved to a different position, offering alternatives to replace it with, and so on.

Systems that interact with post-editors this way help them make better decisions, because the process builds on locally acquired knowledge. I believe this would help us all acknowledge that this decision process requires specialized translators, which would naturally be reflected in how rewarding the task seems to those who perform it.
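
As an aside, the interaction Félix describes can be sketched in a few lines: the system presents a full suggestion and, when the post-editor selects a word, it surfaces ranked alternatives with the model’s own scores. Everything below is hypothetical; the candidate table and probabilities are invented, where a real system would draw them from the MT engine’s search space (n-best lists, phrase tables and the like).

# Hypothetical sketch of interactive post-editing: selecting a word in
# the MT suggestion surfaces ranked replacement candidates. The table
# and probabilities are invented for illustration.
CANDIDATES = {
    # "actual" is a classic false friend of Portuguese "atual" (current)
    "actual": [("current", 0.62), ("present", 0.27), ("actual", 0.11)],
}

def alternatives_for(word, top_k=3):
    """Return up to top_k alternatives for a selected word, best first."""
    ranked = sorted(CANDIDATES.get(word, []), key=lambda wp: wp[1], reverse=True)
    return ranked[:top_k]

suggestion = "the actual version of the software"
print("MT suggestion:", suggestion)
for alt, p in alternatives_for("actual"):
    print(f"  replace with '{alt}' (p={p:.2f})")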

JIM: Let’s pretend that all the objectives of the KAITER project have been fulfilled and post-editing as a process is optimized. What does the experience now look like?

FÉLIX: What I would like to see as a post-editor is an interface which at first glance is very simple, in which I can read through the source text and the target text. When I think something might need to be corrected, I can click and get more information about how the content got there. If I want to dig deeper into that, I will be able to see who translated it, if it was a machine translation or from a translation memory, if certain words are translated consistently in the projects of that client, if there are instructions which specifically say how to deal with that type of sentence, etc.

And if I think, “okay, I need to change this,” I would like to have some guidance on how to research that; for example, suggestions about what sites yield the best solutions in similar situations. I’d also like to have suggestions from the translation tool itself on how to correct the sentence, some indication that specific words usually get deleted. Or if I want to change two or three words, I’d like to be shown the best alternatives for this context.

I can see this in the near future, and I’d like to contribute to it.

JIM: Any parting thoughts?

FÉLIX: I think that the future is exciting. And although people say that most translators criticize post-editing, I’m sure that when they see a good project with enough support, and understand that we are improving our processes, translators will really want to work in those conditions. But when all they hear is, “okay, this is too easy, you don’t need much time and we’re going to pay less for this,” it’s a different story: nobody really likes to work in a world that says what they’re doing is not as valuable as it used to be.

 

As more and more enterprises deploy machine translation in their localization programs, this is a timely and relevant initiative. What do you think? Do you have a vision of what an optimized post-editing process would look like?

 
