On the surface, it may not be obvious how SC 42, the ISO/IEC JTC 1 subcommittee focusing on artificial intelligence standards, would be relevant to globalization and localization, but make no mistake: it’s very relevant. As AI continues to mature both technologically and operationally, and as its applications become more commonplace, the standards that emerge will ultimately define the framework by which we conduct much of our business.
Will it be a framework that represents a world of possibility and progress that’s consistent with our industry’s visions and values? Or will it represent a new set of obstacles for which we’ll need to take steps back before we can move forward? Perhaps something else entirely?
When it comes to artificial intelligence and globalization-related industries, we’re at an inflection point where the decisions we make now will have a profound impact on the future, for better or worse. I see relevance to the adage, “the best way to predict the future is to make it,” and there’s at least one opportunity to influence AI’s trajectory right now: active participation with relevant standards bodies.
In the case of AI, that relevant body is ISO/IEC JTC 1/SC 42.
Ess see forty-what now?
If this acronym-heavy name seems cryptic to you, let me unravel the code.
ISO/IEC JTC 1 is shorthand for the “joint technical committee” (JTC) through which two historically significant standards bodies, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), collaborate to consolidate efforts and prevent standardization initiatives from competing with one another. “SC” means “subcommittee.”
The subcommittees within JTC 1 have worked on some significant stuff, both generally and with specific relevance to the fields of globalization and localization. SC 29 was instrumental in the creation of the MPEG standard, SC 17 developed the standard that allows chips to be integrated into credit and ID cards, SC 2 created the Universal Coded Character Set, and SC 35 focuses on user interfaces of all types—from keyboards to voice commands—and addresses issues of cultural and linguistic adaptability and accessibility.
The SC 42 subcommittee was formed in March of 2018. You can check out their Standards Catalogue to get an idea of their plans and progress.
I learned about its existence in May when Moravia research fellow Dr. David Filip (whom I’ve previously interviewed for this blog) invited me to attend a public breakfast / introduction to SC 42 hosted by the ADAPT Research Centre. David is directly involved with this project as the Convener for the study group on trustworthiness (SG 2), and is chair of the Irish national mirror. (I’ll explain later what “national mirror” means.)
Figure 1 – Some of the faces behind the acronym.
What’s at stake for globalization?
Industry practices—whether the result of formal standards or not—and the artifacts they create (tools, technologies, pricing practices, what gets bought and what gets sold) will always have a profound impact on an ecosystem. Little decisions made now will have massive consequences in the future—and subtle oversights can turn into massive inefficiencies or even barriers.
AI, a rapidly maturing technology, is particularly significant to the globalization ecosystem. It already permeates many of our standard practices: translation memory, concordance searching, terminology management, and QA automation—not to mention machine translation. The industry has evolved alongside the evolution of AI, and there’s no reason to believe that will soon change.
It’s also shaping the enterprises we serve. The ways companies create content, campaigns, products, and services are all being influenced by the possibilities of AI. Just consider the chatbot (one of many applications), which has matured to the point of being used to support or even replace human agents. As chatbot practices start to scale globally, will they start to show their flaws and limits? The answer to such a question may largely depend on the underlying standards.
As Filip says, “the chatbot is a good example of a technology that needs to be internationalized, or everyone suffers. Chatbots need to understand reasoning, which is encoded differently in different natural languages. Language has to be addressed on multiple levels, from data collection through user interaction.”
Here are three areas where AI standards have relevance to our industry.
1. Ensuring that AI practices support multilingualism and multiculturalism
We’ve seen this play out before in the globalization industry. If you’ve ever had to hack or retrofit things so that they can work globally, you know what kinds of pitfalls are possible. Think firmware and messaging systems that only support Roman characters, software that renders language based on English grammar rules, interfaces that can’t process bi-directional text, etc. The practice of internationalization itself can be thought of as “making international” those things that were insufficiently designed to be international in the first place.
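To make the pitfalls concrete, here is a minimal, illustrative sketch (my own example, not from any SC 42 material) of two classic failure modes: a storage layer that assumes Roman characters, and string handling hard-coded to English grammar. The function names are hypothetical.

```python
# Pitfall 1: a legacy pipeline that only supports Roman characters.
# Latin-1 covers Western European scripts, so anything else fails.
def store_legacy(message: str) -> bytes:
    # Raises UnicodeEncodeError for most of the world's scripts,
    # e.g. store_legacy("こんにちは") fails outright.
    return message.encode("latin-1")


# Pitfall 2: pluralization baked in as an English grammar rule.
def item_count_en(n: int) -> str:
    # "1 item" / "2 items" works for English, but many languages
    # (Russian, Czech, Arabic, ...) have more than two plural forms,
    # so this template cannot simply be translated string-for-string.
    return f"{n} item" + ("" if n == 1 else "s")
```

Retrofitting either of these after the fact, swapping in Unicode-clean storage or locale-aware plural rules, is exactly the kind of costly “making international” work that good standards help avoid.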
We can’t just assume that a standard will take globalization into consideration.
As Filip says, “Some JTC1 subcommittees are as old as JTC1 itself, since the 80s. In the 80s, everything was simply in English. Globalization wasn’t an issue.”
It’s understandable why globalization considerations are often an afterthought. When standards are created, those involved are busy focusing on technical challenges. That’s why it’s important that those of us who understand the importance of factors like multilingualism and internationalization compatibility participate and speak up.
2. Ensuring that AI practices support the qualities of trustworthiness, security, and privacy
As the Convener for SG 2 (Trustworthiness of AI), Dr. Filip sees these areas as directly significant to globalization.
“It needs to be possible for humans to understand how AI works,” he says. “You need to have an idea of how the algorithm works and what input the algorithm takes to be able to trust it in general. Transparency needs to be created on every level, starting with data collection and ending with the reliability of the decision-making algorithm.”
As for privacy, it’s an area where those with a stake in the outcome would be well advised to grab a seat at the table. Not all countries share the same vision as to what the right balance between individual end-user privacy, commercial interest, and state interest looks like, but what’s ultimately possible will be impacted by the standards.
“There is a risk,” says Filip, “that without involvement from pro-privacy interests, the standards won’t include the technical methods required to prevent undesired behavior.”
As the language industry endeavors to comply with GDPR and support the compliance of our clients, the extent to which AI standards support their principles and mandates will influence the extent to which AI can be used (or not) in certain applications securely and compliantly, and at what expense.
3. Ensuring that big data is interoperable
It’s in our best interests that the emerging AI standards support big data interoperability.
At the 2018 AMTA conference, it was interesting to see companies like Microsoft, Google, and Amazon make their neural machine translation toolkits publicly available, but the “elephant in the room” question (at least for me) was around data. If you’re not Microsoft, Google, or Amazon, where will you get the massive amounts of data you need to make these new classes of engines useful?
It is a subject that has come up in industry forums before. One idea is that smaller companies will need to pool their data as a shared asset. This approach makes sense, not only with MT corpora, but with all kinds of data: content characteristics, content usage, and types of metadata that get generated during the execution of localization. As companies start to specialize, their ability to support one another through the sharing of data will have a direct impact on the health of the ecosystem.
Making your voice heard
There are several ways to be involved with the shaping of SC 42, even if you’re not a PhD researcher creating AI algorithms. Artificial intelligence is a broad enough subject that it requires input and direction from those who have an interest in its outcomes from a variety of perspectives: operational, political, economic, social, and ethical. Anyone using or planning to use AI in their business has something to contribute.
If you’re in a country that has a “national mirror,” you may be able to participate directly through it. A national mirror is a committee designed to discuss a country’s national interest in the standard and ultimately formulate its “national position.”
Moravia’s Maribel Rodriguez Molina joined the Irish mirror. While she isn’t directly involved in AI research, she has a vested interest in the evolution of AI because it supports her vision of quality management as a layer accessible at every stage of globalization processes.
ISO has various levels of participation at the national level. Your involvement could help move your country from Observing Member status to Participating Member status, or to Observing Member status if your country isn’t involved at all. You can see where your country currently stands here.
Filip advises, “Becoming part of a national mirror is more complex for some countries [than] it is for others. If you can’t be part of the mirror formally, you can participate through public consultation events that the mirror may run.”
At Moravia, we recognize the importance of standards, and we try to positively influence them whenever we can. When it comes to something as fundamentally important to our industry as artificial intelligence, we work to make sure that the evolution of relevant standards reflects both our interests and values and those of our clients. We’re at the table. Are you?