Issue #115 - Revisiting Low-Resource Neural Machine Translation: A Case Study

Akshai Ramesh 28 Jan 2021
The topic of this blog post is low-resource languages.

Introduction 

Although deep neural models produce state-of-the-art results in many translation tasks, they have been found to underperform phrase-based statistical machine translation (PBSMT) in resource-poor conditions. The majority of research on low-resource neural machine translation (NMT) focuses on the exploitation of monolingual data or parallel data involving other language pairs. Notably less attention has been paid to low-resource NMT that does not rely on auxiliary data.

In today’s blog post, we will look at the work of Sennrich and Zhang (2019) from the University of Edinburgh. This paper investigates best practices for low-resource recurrent NMT models and shows that more efficient use of small amounts of parallel training data can result in strong improvements across different data settings without relying on auxiliary resources.

Fig 1: German→English learning curve, showing BLEU as a function of the amount of parallel training data, for PBSMT and NMT.

Koehn and Knowles (2017) show that it takes on the order of millions of sentences of training data for an NMT system to outperform a PBSMT system. Their NMT systems are trained with the same hyperparameter configuration as used in high-resource scenarios, and the hyperparameters are not tuned for low-resource conditions. Sennrich and Zhang (2019) revisit this scenario and show that, by following a set of best practices and tuning the hyperparameters to the low-resource setting, an optimised NMT system can outperform the statistical models with as little as 10⁵ words of training data.

Methods to optimise NMT in a low-resource setting 

  1. Mainstream Improvements: This includes common model and training changes such as dropout, tied embeddings, layer normalisation, label smoothing, learning-rate tuning, and a bidirectional RNN encoder.
  2. Reduce BPE Vocabulary size: In a low-resource setting, a large BPE vocabulary leads to low-frequency subwords being represented as atomic units, making it hard to learn good high-dimensional representations for them. They therefore reduce the BPE vocabulary by applying a subword frequency threshold, as suggested in Sennrich et al. (2017a) (here, from 14k down to 2k subword units).
  3. Hyperparameter Exploration: Reducing the batch size and the number of layers, and introducing aggressive (word) dropout (see the sketch after this list).
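As an illustration of the third point, here is a minimal PyTorch sketch of word-level dropout, which zeroes out the embeddings of randomly chosen tokens during training. This is only a sketch of the general technique, not the Nematus implementation used in the paper, and the dropout probability shown is a placeholder rather than the paper's value.

```python
import torch

def word_dropout(emb: torch.Tensor, p: float) -> torch.Tensor:
    """Zero out each token's embedding independently with probability p.

    emb: token embeddings of shape (batch, seq_len, emb_dim).
    Sketch only: implementations differ in whether they zero embeddings,
    replace tokens with UNK, or rescale the surviving embeddings.
    """
    if p <= 0.0:
        return emb
    keep = (torch.rand(emb.shape[:2], device=emb.device) >= p).float().unsqueeze(-1)
    return emb * keep

# Usage: apply to source (and/or target) embeddings at training time only.
emb_layer = torch.nn.Embedding(2000, 512)         # e.g. a 2k BPE vocabulary
tokens = torch.randint(0, 2000, (8, 20))          # (batch, seq_len)
src_emb = word_dropout(emb_layer(tokens), p=0.3)  # p is a placeholder rate
```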

Experiments

In this paper, the authors present their work on two language pairs: German→English (full training data of 159k parallel segments, with the smallest subset containing 5k parallel segments) and Korean→English (training data of 90k parallel segments). The experiments are carried out on progressively smaller subsets of the German-English training data, ranging from 5k segments up to the full training data of 159k segments.

Table 1: Training corpus size and subword vocabulary size.

Moses (Koehn et al., 2007) is used to train a PBSMT system, and the NMT systems are trained with Nematus (Sennrich et al., 2017b). For the purpose of comparison, the hyperparameters from Koehn and Knowles (2017) are used to train the baseline NMT system. The methods discussed in the previous section are then added to the baseline system with the objective of optimising the NMT system and outperforming the PBSMT system without using any auxiliary data.
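To make the setup more concrete, the snippet below sketches how the baseline and optimised configurations might be expressed as plain Python dictionaries. Only the BPE vocabulary sizes (14k vs 2k) come from the description above; every other value is a placeholder assumption for illustration, not a setting reported in the paper.

```python
# Baseline: high-resource-style hyperparameters (after Koehn and Knowles, 2017).
# All numeric values below except the BPE vocabulary sizes are placeholders.
baseline_config = {
    "bpe_vocab_size": 14_000,    # large subword vocabulary
    "enc_dec_layers": 2,         # placeholder depth
    "batch_size_tokens": 4_000,  # placeholder batch size
    "word_dropout": 0.0,         # no word dropout
    "label_smoothing": 0.0,      # placeholder
    "learning_rate": 1e-4,       # placeholder
}

# Optimised low-resource configuration: the methods from the previous section
# applied cumulatively on top of the baseline.
low_resource_config = {
    **baseline_config,
    "bpe_vocab_size": 2_000,     # smaller BPE vocabulary (14k -> 2k)
    "enc_dec_layers": 1,         # fewer layers (placeholder depth)
    "batch_size_tokens": 1_000,  # smaller mini-batches (placeholder size)
    "word_dropout": 0.3,         # aggressive word dropout (placeholder rate)
    "label_smoothing": 0.2,      # placeholder rate
    "learning_rate": 5e-4,       # re-tuned learning rate (placeholder)
}
```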

Evaluation Results & Observations

Table 2 shows the effect of adding the different methods to the baseline NMT system, both in the ultra-low data condition (100k words of training data) and on the full IWSLT 14 training corpus (3.2M words). The mainstream improvements lead to an increase of 6-7 BLEU points for both corpus sizes.

Table 2: German→English IWSLT results.

In the ultra-low data condition, shrinking the BPE vocabulary leads to a gain of +4.9 BLEU, and using smaller mini-batches adds a further +0.3 BLEU. Adding aggressive word dropout is particularly effective, yielding a +3.5 BLEU improvement. Additionally, using fewer layers and tuning other hyperparameters (learning rate, label smoothing rate, etc.) results in the optimised NMT system outperforming the PBSMT system. In the Korean→English direction, the optimised NMT system also shows strong improvements and outperforms the PBSMT system across all data settings.

Other Observations 

  • The NMT baseline is unable to learn from just 5,000 parallel segments (0 BLEU), whereas tuning the hyperparameters following the best practices leads to an optimised NMT system that reaches 16.6 BLEU.
  • Reducing the BPE vocabulary size for the full German→English training data does not result in a significant improvement.
  • Aggressive dropout and other hyperparameter tuning have a stronger impact than the lexical model, and adding a lexical model on top of the optimised configuration does not improve performance.

In Summary
Most research on low-resource NMT focuses on using auxiliary data to improve system performance, but Sennrich and Zhang (2019) show that low-resource NMT is very sensitive to hyperparameters and that proper tuning can lead to much stronger baselines. They also compile best practices for low-resource recurrent NMT models and show that an optimised NMT system can achieve strong improvements and outperform the PBSMT system across all data settings.

Author

Akshai Ramesh

Machine Translation Scientist
Akshai is responsible for developing end-to-end enterprise solutions for real-world MT problems and contributes to the wide spectrum of innovations that serve our clients. Akshai has experience working in the areas of Natural Language Processing and Machine Learning and completed his Master's in Computing at Dublin City University, Ireland. During his Master's, he worked with the ADAPT Research Centre and contributed to the low-resource scenario of Indian languages in Neural Machine Translation. His research interests include low-resource scenarios, model improvement, and domain adaptation of MT.