Managing gender bias in machine translation

Pablo Pérez Piñeiro 27 Aug 2021 6 mins
Have you ever wondered why certain words are sometimes translated as feminine or masculine in different languages when processing content with machine translation? Is the tool making a choice during the translation process, or does this go beyond what a machine is able to do?
 
In this article we will explore the subject of gender bias in the context of machine translation — when it occurs, how it originates and what features are available in Language Weaver to address it.

What is gender bias and why does it occur?

Gender bias refers to the gender choices made in a translation that are not present in the source text. This typically happens when translating from a gender-neutral language (a language without grammatical gender, like English, Turkish or Finnish) into a gender-marked one (a language where certain words do have a gender, usually masculine, feminine or neuter, like Spanish, German or Arabic).
 
For example, if a translator had to provide a literal translation of the English pangram The quick brown fox jumps over the lazy dog into Spanish, they would need to decide whether the fox and the dog are male or female. La zorra and la perra would be used for the feminine version, while el zorro and el perro would be used for the masculine one. The determiners and adjectives would also change depending on the choice.
 
Though choosing one option or the other may be irrelevant in this specific case, the situation can be different when translating other types of content. Should patient, for example, be translated as masculine or feminine in a gender-marked language? What about secretary or judge? Making the right decision is key to avoiding a translation that could be perceived as awkward or even sexist.
 
Most of the time, a human translator can resolve such situations without much effort by relying on the context or by selecting a gender-neutral term for the translation. When we switch to machine translation, however, this may be less straightforward, especially if the context is ambiguous or non-existent. To understand why this is the case, we first need to understand how machine translation works.
 
Broadly speaking, machine translation models start out empty; bilingual corpora consisting of aligned source and target sentences are then imported into them (a process known as training). The model thus "learns" to translate from the source into the target language using the training corpus as a reference, so the quality of the training material, together with the work and experience of the computational linguists, is crucial to producing good translations.
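To make this concrete, here is a deliberately simplified sketch in Python. Real neural MT models learn continuous representations rather than raw counts, and the corpus below is invented for illustration, but the underlying point holds: everything the system "knows" comes from the aligned pairs it was trained on.

```python
from collections import Counter, defaultdict

# Toy bilingual corpus: (English source, Spanish target) sentence pairs.
corpus = [
    ("the doctor is here", "el doctor está aquí"),
    ("a doctor arrived", "llegó un doctor"),
    ("the doctor spoke", "el doctor habló"),
]

# "Training": count how often each source word co-occurs with each
# target word across the aligned pairs.
cooccurrence = defaultdict(Counter)
for src, tgt in corpus:
    for s in src.split():
        for t in tgt.split():
            cooccurrence[s][t] += 1

# The toy model's "translation" of a word is simply its most frequent
# partner in the training data; it has no other source of knowledge.
print(cooccurrence["doctor"].most_common(1))  # [('doctor', 3)]
```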
 
It is easy to see the analogy between the learning process of a machine translation model and that of a human being, though there is one fundamental difference: knowledge of the external world. While a human can apply this background knowledge, together with their common sense and ethical principles, when working on a translation, a machine translation tool is limited exclusively to the corpus used during its creation. Hence the importance of the quality of the training material. The fluency, accuracy and grammatical correctness of the training corpus will be reflected in the resulting translations, but so will any errors found in the training material. Let's say the machine translation output includes a typo. This doesn't mean that the tool "has made a mistake" when producing the translation, but that it learned the wrong spelling from the material it was trained on. And this brings us back to the subject of gender bias.
 
When processing content with machine translation, some results may show signs of gender bias, like some professions being repeatedly translated as either masculine or feminine. Does this mean that the tool is following stereotypical patterns or even acting in a sexist way? Definitely not. It is just producing translations based on what it has learned.
 
An important part of the corpora used to train machine translation models comes from publicly available sources. When a tool produces a gender-biased translation, it does not decide to favour one gender over the other, but replicates the bias that was already present in the training corpus. So when doctor and nurse are recurrently translated as masculine and feminine respectively in gender-marked languages, the reason is that this is how they were already translated in the majority of the examples used to train the machine translation model. 
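The mechanism can be illustrated with an intentionally crude sketch: a "model" that simply picks the majority gender observed in its training data. The counts below are invented for illustration, and real neural systems are far more sophisticated, but the effect is analogous.

```python
from collections import Counter

# Invented gender labels for each profession, standing in for what a
# model might observe across its training corpus (100 examples each).
observations = {
    "doctor": ["masculine"] * 87 + ["feminine"] * 13,
    "nurse":  ["masculine"] * 9  + ["feminine"] * 91,
}

for word, genders in observations.items():
    gender, count = Counter(genders).most_common(1)[0]
    print(f"{word} -> {gender} ({count}% of training examples)")

# doctor -> masculine (87% of training examples)
# nurse -> feminine (91% of training examples)
# The system never "chooses" a stereotype; it mirrors its data.
```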

Can gender bias be avoided at all?

As we have seen, machine translation models learn from the corpora imported into them. If the bias in the machine translation output comes from the corpora, would it not simply be a matter of removing the biased parts from the training material? Strictly speaking, yes, but unfortunately the solution is not that simple.
 
The corpora used for training usually consist of hundreds of millions of sentences, any of which can carry some form of bias. When preparing a training corpus for MT, many of the steps are highly automated, such as data collection and sentence alignment. Completely removing potential gender bias, by contrast, would require multiple human translators to review and amend the data. Given the huge amounts of data involved, this would be a remarkably difficult and time-consuming task.
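To give a flavour of what "highly automated" means here, the following is a rough sketch of a typical filtering step (the function name and threshold are illustrative, not taken from any particular toolkit). Mechanical checks like this catch misalignments well, but they have no notion of gender bias.

```python
def keep_pair(src: str, tgt: str, max_ratio: float = 2.0) -> bool:
    """Keep a sentence pair only if the word counts are roughly comparable."""
    s, t = len(src.split()), len(tgt.split())
    if min(s, t) == 0:
        return False
    return max(s, t) / min(s, t) <= max_ratio

pairs = [
    ("the doctor is here", "el doctor está aquí"),  # kept
    ("the doctor is here", "sí"),                   # dropped: likely misaligned
    ("the nurse arrived", "la enfermera llegó"),    # kept, bias and all
]
print([keep_pair(s, t) for s, t in pairs])  # [True, False, True]
```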
 
Aware of this situation, some tools offer two versions of a translation, one feminine and one masculine, so the user can choose their preferred option. However, this is not an entirely convenient approach, as it works with individual sentences only, not with full paragraphs or documents. It also adds time to the post-editing process, as the translator first needs to choose the version they prefer to work with.
 
Is gender bias, then, something that cannot be solved at all? Not quite. Some of the content used to train machine translation models was created at a time when, as a society, we were not as conscious of gender bias as we are today. Nowadays we are much more sensitive to this topic, and this is reflected in the way content is created. As this fairer, less biased content becomes publicly available, it will find its way into machine translation solutions, which will then produce more inclusive translations. Of course, this is a long-term approach, so what can we do in the meantime to combat gender bias? This is where cutting-edge technology becomes crucial.

How does Language Weaver address gender bias?

Since its inception in 2002, Language Weaver has dedicated both time and resources to carefully selecting and preparing the content used to train its machine translation models.
Equally important has been the development of new technologies that our customers can benefit from directly through the Language Weaver translation portal. Existing features such as Real Time Adaptation and Language Pair Adaptation allow you to customize your translations and produce results tailored to your content. With these tools you can not only improve terminology, consistency and accuracy, but also implement specific changes to reduce gender bias, improving the overall quality of the translation output.
 
Real Time Adaptation allows users to rate a machine translation output and suggest an alternative translation. This new suggested translation is stored in the system. Once approved by the account administrator, it will be used on top of the default output. This way, the next time the same sentence comes up, you will benefit from the modified translation.
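Conceptually, approved suggestions behave like an overlay that is consulted before the default engine. The sketch below is purely illustrative and is not Language Weaver's implementation or API; every name and translation in it is hypothetical.

```python
def base_translate(sentence: str) -> str:
    """Stand-in for the default MT engine (hypothetical placeholder)."""
    defaults = {"the patient is ready": "el paciente está listo"}
    return defaults.get(sentence, sentence)

approved_overrides = {}  # sentence -> approved alternative translation

def suggest(sentence: str, alternative: str) -> None:
    # In the real workflow, a suggestion would first be approved by the
    # account administrator before taking effect.
    approved_overrides[sentence] = alternative

def translate(sentence: str) -> str:
    # Approved suggestions take precedence over the default output.
    return approved_overrides.get(sentence, base_translate(sentence))

suggest("the patient is ready", "la paciente está lista")
print(translate("the patient is ready"))  # la paciente está lista
```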
Language Pair Adaptation allows you to retrain an existing language pair with your own corpus, sourced from a translation memory. This lets you customize your machine translation models with the content you post-edit, producing results that are tailored to your needs.
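A translation memory is commonly exchanged as a TMX file, a standard XML format made up of translation units. As a rough idea of how such a memory could be turned into a parallel corpus, here is a minimal sketch (the file name is hypothetical and the retraining step itself is left abstract):

```python
import xml.etree.ElementTree as ET

# In TMX 1.4, the language of each variant is given by xml:lang,
# which lives in the reserved XML namespace.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def read_tmx(path, src_lang="en", tgt_lang="es"):
    """Extract (source, target) segment pairs from a TMX file."""
    pairs = []
    for tu in ET.parse(path).getroot().iter("tu"):  # one translation unit
        segments = {}
        for tuv in tu.iter("tuv"):
            lang = tuv.get(XML_LANG, "").lower().split("-")[0]
            seg = tuv.find("seg")
            if seg is not None and seg.text:
                segments[lang] = seg.text
        if src_lang in segments and tgt_lang in segments:
            pairs.append((segments[src_lang], segments[tgt_lang]))
    return pairs

# pairs = read_tmx("my_memory.tmx")  # hypothetical file name
```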
 
Our scientific team has been working on more organic approaches to reducing gender bias using techniques based on domain adaptation. Keep an eye on our Neural MT blog next week, where we'll be publishing a companion piece to this article that dives deeper into the technical details and shares some of the promising results we've seen.
 
Watch our MT and AI Innovation at Language Weaver webinar if you are interested in the AI-based features available in Language Weaver to improve your translations. You can also get in contact with us to find out how we can help your organization with the adoption of a successful machine translation strategy.
Author

Pablo Pérez Piñeiro

Principal Linguistic AI Consultant
Pablo started his career as a sworn translator. Following his interest in computational linguistics, he then moved to Localization Engineering, working closely with Production teams, salespeople and customers. In 2020 he joined the Linguistic AI team, focusing on the operational side of the business.