A Lighthearted Look at What Neural Machine Translation Can (But Shouldn’t) Do

Undeniably, the most familiar and accessible Neural MT (NMT) engine is Google Translate. You probably use it quite often. And the results look really good. Sentences read very fluently, without the telltale signs of machine translation. Herein lies the biggest pitfall of NMT—it just looks so darn good.

Alon Lavie, who heads up the Amazon Machine Translation Research and Development Group, said in a recent Globally Speaking podcast that neural machine translation “makes very, very strange types of mistakes … because it’s not a direct matching between the source language and the target language in terms of the words and the sequences of words…” Thing is, the strangeness can get masked by the smoothness.

Looking good at first glance

Recently I found myself feeding this line of text through Google Translate.

For these products, please use 視覚化 not 可視化 based on the definition at the following URL:

It was aimed at linguists, instructing them to use the term shikakuka instead of kashika depending on context. Both words mean “visualization” but have slightly different nuances. And the results were so humorous I thought it would be a shame not to share them.

The neural cogs went whizzing and immediately gave me this:

[Screenshot: Google Translate result 1]

You might expect that the two Japanese terms 視覚化 and 可視化 in the source would make it through to the target text. After all, there’s no need to translate them. But no. Instead, it produced 視覚障害, a big red flag. Just hit the reverse translate button (always a good idea to do that) to see what it means…


Okay, is this even close to what I wanted to say? Avoid visual impairment? Did I want to discriminate against someone? Of course not. Problem is, the Japanese text is so fluent that it reads like I really mean to be really mean.

Alon was absolutely right. Very strange. What is going on inside those neural networks of theirs? Can I pre-edit the source to help the MT to produce a better output? Maybe that unnatural colon at the end of the sentence is wreaking some sort of unexpected havoc? Let’s change it to a period.

[Screenshot: Google Translate result 2]

Nope. It’s still giving us that problematic 視覚障害, just followed by different wording.


Uh, yeah. So changing two dots ( : ) to one ( . ) takes us from “avoid visual impairment” to “confirm that there is no visual impairment”. Help!
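That reverse-translate sanity check is easy to automate, by the way. Here’s a minimal sketch in Python; the `fake_translate` stub below is purely hypothetical and just simulates the mangling we saw above, standing in for whatever real MT call you’d actually use.

```python
def round_trip(text, src, tgt, translate):
    """Translate src -> tgt, then back-translate tgt -> src."""
    forward = translate(text, src, tgt)
    back = translate(forward, tgt, src)
    return forward, back

def missing_terms(back_translation, must_survive):
    """Return the terms that should have passed through unchanged
    but are nowhere to be found in the back-translation."""
    return [t for t in must_survive if t not in back_translation]

# Hypothetical stub that mimics the failure described above:
# both embedded terms collapse into 視覚障害 ("visual impairment").
def fake_translate(text, src, tgt):
    if tgt == "ja":
        return "これらの製品では、視覚障害を避けてください。"
    return "For these products, please avoid visual impairment."

source = "For these products, please use 視覚化 not 可視化."
forward, back = round_trip(source, "en", "ja", fake_translate)
missing = missing_terms(back, ["視覚化", "可視化"])
print(missing)  # -> ['視覚化', '可視化'] -- both terms vanished
```

If the round trip drops a term that should be untouchable, that’s your red flag to stop and pre-edit before trusting the output.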

Offering a helping hand

Okay, so obviously the MT engine is getting confused by the Japanese text within the English source, so marking those Japanese terms in a way that sets them apart from the rest might help, right? How about putting them in brackets:

[Screenshot: Google Translate result 3]

Great, no more 視覚障害! Now 視覚化 comes through unscathed as 視覚化, and 可視化 comes through as… a second 視覚化. What gives? So is NMT actually translating both Japanese terms into the same English “visualization” first before translating back into Japanese, or what?


But aside from that elephant in the room, the other portions of the translation are nicely done. It correctly recognized that the use of “not” actually means “instead of” in this case. Maybe “definition of the following URL” would have been better translated as “definition at the following URL,” but that seems nitpicky compared to the grand-scale issues we saw in the versions that didn’t use brackets.

Takeaway: In the current implementation of Google NMT, using brackets to set off target language terms embedded within a source text may help NMT to better understand the meaning of the sentence, but those same target language terms (which ideally should be left as-is) may wind up becoming inaccurate on their way from source to target. So watch out!
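That takeaway can be wrapped up as a small pre-edit/post-check routine. This is just a sketch under the assumptions above: bracket the embedded target-language terms before sending the text to MT, then verify that each one survived the output verbatim. The simulated MT output is hypothetical, modeled on the duplicated-term failure in the screenshot.

```python
import re

def bracket_terms(text, terms):
    """Pre-edit: wrap each embedded target-language term in brackets
    so the MT engine treats it as a set-off token."""
    for t in terms:
        text = re.sub(re.escape(t), f"[{t}]", text)
    return text

def terms_survived(mt_output, terms):
    """Post-check: did every protected term make it through verbatim?"""
    return {t: t in mt_output for t in terms}

source = "For these products, please use 視覚化 not 可視化."
pre_edited = bracket_terms(source, ["視覚化", "可視化"])
print(pre_edited)  # ...please use [視覚化] not [可視化].

# Simulated MT output mirroring the failure above: the second
# term came back as a duplicate of the first.
mt_output = "これらの製品では、[視覚化] ではなく [視覚化] を使用してください。"
checks = terms_survived(mt_output, ["視覚化", "可視化"])
print(checks)  # -> {'視覚化': True, '可視化': False}
```

A `False` here tells you a “protected” term was rewritten in transit, so the output needs a human pass before it ships.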

The case for non-neural?

Come to think of it, maybe phrase-based statistical MT wasn’t so bad after all? I’ve never known it to incorrectly translate a term that was already in the target language to begin with. It would just do a lateral pass and come out the other end. Here are the results we get with a couple of online translators commonly used in Japan. 

First, I went to http://honyaku.yahoo.co.jp/.

[Screenshot: Yahoo! Translate result]

It has absolutely no problem keeping the terms 視覚化 and 可視化 as they should be, but the overall translation is essentially impossible to understand. The result fails to 喜ばせる (delight) us at all, despite what it says.

Next, I tried http://www.excite.co.jp/world/english_japanese/.

[Screenshot: Excite Translate result]

Okay, that untranslated “not” in the middle of the translation is never a good sign. And spelling out URL in full in the re-translation field seems like overkill. But if asked to post-edit this output, it seems roughly on par with the previous Yahoo MT (or more like both are a double bogey).

What to keep in mind about neural

In Part 1 of the podcast about Neural MT, Chris Wendt, the Group Program Manager for Machine Translation at Microsoft, said that when neural and statistical machine translation are compared head-to-head, “statistical would win on accuracy and neural wins on fluency.” In our example, phrase-based does get the two key terms right, while neural messes them up frequently, fluently, with style. 

When humans compare the outputs from phrase-based and neural MT, it’s a natural tendency to think the neural MT version is “better” since the key factor driving our appraisal tends to be fluency over accuracy. Without a doubt, neural MT is a powerful and useful tool. But avoid being lulled into a false sense of security because of that fluency—or you could wind up eloquently embarrassing your readers, your company, and yourself.