6 Ways to Set Up Linguistic Testing of Localized Builds [Cliff Notes]

In this recent blog series on testing, I have explained the basic concepts of functional, localization, and linguistic testing, all of which are meant to catch any problem that would impact the adoption or function of your product in-market.

In this blog I will cover what you need to know about making your localized web or software deliverables accessible for linguistic testing within the final (or semi-final) builds.

It must be done in-context

The best scenario is that each localization tester gets their respective language version of the tested application installed directly on their own computer, or that they access the staging web server where the pre-release version of the website is deployed.

Sometimes, though, it’s not possible to provide a live running build or give a linguist access to a staging server for security reasons. In some cases, remote access through VPN and/or remote desktop is possible, but sometimes the tested system runs on a proprietary device (like a tablet, a wearable device, or a gadget) that cannot be reached remotely at all.

So how do you do it? Fortunately, there are several ways to make the localized content accessible to the linguists:

Access it

  • Install the software on the linguist’s computer or their own handheld device.
  • Give the linguist direct access to the built app hosted on a staging web server (a quick way to sanity-check that access is sketched after this list).
  • If the software runs on a proprietary device, physically ship the device with the software installed on it to the reviewer. This may not be feasible because of security restrictions, shipping time, cost, or device availability.
  • If the software runs on a PC but cannot be installed outside of the test lab, connect the linguist remotely to a machine with the tested software installed, usually through remote desktop.
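
If you go the staging-server route, it is worth confirming before the review starts that each language version is actually reachable and serves the right language. Below is a minimal sketch in Python; the staging host, URL pattern, and locale list are hypothetical placeholders for your own setup.

```python
# Minimal sketch: verify each linguist-facing staging URL is reachable and
# serves the expected language before handing out access. The URL pattern
# and locale list below are hypothetical, not a prescribed layout.
import requests

STAGING_ROOT = "https://staging.example.com"   # hypothetical staging server
LOCALES = ["de", "fr", "ja", "pt-BR"]          # languages under test

for locale in LOCALES:
    url = f"{STAGING_ROOT}/{locale}/"
    resp = requests.get(url, headers={"Accept-Language": locale}, timeout=10)
    # A 200 response plus a matching lang attribute is a cheap sanity check
    # that the reviewer will land on the right language version.
    ok = resp.status_code == 200 and f'lang="{locale}"' in resp.text
    print(f"{locale}: {'OK' if ok else 'CHECK MANUALLY'} ({resp.status_code})")
```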

Emulate it

  • If the device itself cannot be provided, give the linguist an emulator of that device with the software installed on it.

    For example, for a mobile app, you can simulate many operating systems and devices on your computer using an emulator. The emulator mimics the behavior of the real thing; you can install applications on it just as you would on a real phone. A tester can then operate it using the mouse instead of fingers (a minimal setup script is sketched at the end of this section).

    Emulators exist for all major platforms and devices, even outside of the mobile and handheld field. For example, Moravia has been testing successive versions of a medical body-fluid analyzer’s software for years using an emulator: it runs on Windows, but the monitor shows what the touch screen of the actual device in a doctor’s office would display.
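
For mobile apps specifically, preparing such a virtual device can be scripted. Here is a minimal sketch in Python using the standard Android SDK command-line tools (emulator and adb); the AVD name and APK path are hypothetical, and the script assumes the SDK tools are already on your PATH.

```python
import subprocess
import time

AVD_NAME = "linguist_test"       # hypothetical AVD created beforehand
APK_PATH = "app-de-release.apk"  # hypothetical localized build under test

# Boot the virtual device in the background; the linguist will operate it
# with the mouse just like the real phone.
subprocess.Popen(["emulator", "-avd", AVD_NAME])

# Block until adb sees the device, then poll until Android has fully booted.
subprocess.run(["adb", "wait-for-device"], check=True)
while subprocess.run(
    ["adb", "shell", "getprop", "sys.boot_completed"],
    capture_output=True, text=True,
).stdout.strip() != "1":
    time.sleep(2)

# Install the localized build (-r reinstalls over any previous version).
subprocess.run(["adb", "install", "-r", APK_PATH], check=True)
```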

Fake it

  • Take screenshots of all windows, dialog boxes, pages, and screens in the application, then send them in an archive or upload them to a server for the linguist to download and mark up.

    The key to this approach is to take the shots in order and name the files incrementally, so that the reviewer feels like they are walking through the application rather than looking at a bunch of random, scattered screenshots.

    If that sounds like a pain, it’s because it is: this is the least efficient way to do linguistic QA, both time-consuming and manual. However, if your project has enough languages, automating the screenshot capture can meaningfully reduce the effort (one way to do this is sketched below).
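
For a website build, one way to script that walkthrough is with a browser automation tool. Below is a minimal sketch using Python and Selenium; the staging URL, page list, and output folder are hypothetical placeholders, and it assumes a matching ChromeDriver is installed.

```python
# Minimal sketch of automating the "Fake it" approach for a website build:
# walk a fixed list of pages in order and save numbered screenshots so the
# reviewer can step through them like a guided tour.
from pathlib import Path
from selenium import webdriver

STAGING_ROOT = "https://staging.example.com/de"    # hypothetical German build
PAGES = ["/", "/login", "/settings", "/checkout"]  # ordered walkthrough
OUT_DIR = Path("screenshots/de")
OUT_DIR.mkdir(parents=True, exist_ok=True)

driver = webdriver.Chrome()
for index, page in enumerate(PAGES, start=1):
    driver.get(STAGING_ROOT + page)
    # Zero-padded incremental names keep the files in walkthrough order,
    # so the reviewer steps through the site rather than random screens.
    name = f"{index:03d}_{page.strip('/') or 'home'}.png"
    driver.save_screenshot(str(OUT_DIR / name))
driver.quit()
```

Repeating the same loop per locale turns a multi-language screenshot round into a single batch job.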

Your test plan could even involve a combination of the above.

Linguistic testing is very different from both functional and localization testing because it’s the only one where the tester actually reads what’s written on the screen. It comes last in the process, when the application or website (hopefully) no longer contains any functional or localization errors. The only thing left to do is check for language accuracy and grammar.

However, linguistic testing inside the assembled application is often skipped because linguistic reviews already happened on the translated material well before the product build. This is a mistake, since that type of linguistic review happens out of context.

The reviewer doesn’t see how the pages or dialog boxes will be assembled. A proper in-context linguistic review reveals any in-context linguistic errors and lets you address additional language, country, or cultural issues that may become apparent only after the application or website has actually been built. Reading static text files or strings in a CMS just isn’t the same thing!

If you have discovered linguistic errors after release to the market, what impact did that have on your product release?

A special thanks to Jiri Machala, a Solutions Architect colleague and testing expert, for reviewing and contributing to this blog post.