Why now is the time to start an AI regulation group in your business

George Bara 03 Jun 2021 6 min read

If you are using or producing Artificial Intelligence solutions in your organization, you need to establish an AI regulation advisory group as soon as possible. It’s no longer just a nice-to-have. The European Commission has published the first-ever legal framework proposal on AI, under which anyone deploying AI will have to adhere to specific national and European regulations – or face huge fines. Every EU member state already has, or is working on, a national AI strategy that will incorporate this framework and transform it into national legislation.

The issue of AI regulation is not only relevant at the European level. On January 13, 2020, the U.S. government published draft rules for the regulation of Artificial Intelligence. It is clear that the production and use of AI will be regulated worldwide within the next couple of years.

So why does AI need regulation?

Artificial Intelligence is still an emerging discipline within the broader IT and software industry, offering huge potential in terms of impact. It is this promised impact on society that makes it extremely valuable, but also very dangerous.

Building and deploying AI that is safe is paramount, so regulation is inevitable – much as it is in regulated industries such as nuclear, finance, or energy. A year ago I argued in a LinkedIn post that the explainability of AI (XAI – eXplainable AI) is deeply connected to its regulation: the push to regulate AI stems from the fact that AI systems are not deterministic in practice (you cannot reliably predict their outcome for a new input data set), and that most Deep Neural Network systems are “black boxes” whose outputs cannot easily be traced back to the input data or explained in terms of how the result was reached.

Now, imagine that an AI system that may produce undesirable behavior due to poor processes or supervision is responsible for determining your credit score, setting municipal taxes, producing medical imaging results or flagging you to law enforcement as a potential future criminal, “Minority Report”-style. Going beyond the impact that AI has, and will increasingly have, on society and its citizens, we can also ponder the “weaponization” of AI and its use in online disinformation, unmanned military vehicles, cyber warfare, and so on.

Artificial Intelligence, like any other software with real-world impact, requires oversight from development to production and beyond: dataset collection and management, building the Machine Learning models, deployment, use, human-in-the-loop failsafes, explainability of outputs, maintenance, and education of users. Every aspect of AI needs to adhere to a generally accepted set of ethical guidelines, transposed into regulation and legislation, so that AI doesn’t end up doing – or being used for – the wrong thing.

The proposed EU AI regulation in a nutshell

The regulation proposed by the European Commission has a broad scope: providers, importers, distributors and users are all affected. AI systems will be classified by risk: unacceptable, high-risk, limited risk, and minimal (unregulated). It strictly forbids the use of AI for any type of social scoring by public authorities, as well as the use of biometric identification systems in public spaces that target the general public. High-risk AI systems include software for recruiting, determining access to social benefits, controlling migration, and assisting judicial interpretation. Although still vague and incomplete, the list of high-risk AI is set to be expanded further through an EU database of high-risk AI practices.

The entire document focuses on the European Union’s fundamental values and the ways AI systems could interfere with or restrict its citizens’ rights, from exploiting the vulnerabilities of specific groups to unfair and unfavorable treatment. The proposal now goes to the European Parliament and the Council for further consideration and debate, and will most likely then be incorporated into legislation.

The impact and mitigation strategy

The key aspect of this proposed regulation, from a vendor perspective, is that the impact of selling AI systems into Europe is significant regardless of where the vendor’s HQ or production facilities are located. If you produce AI software and want to sell it in the European Union, you will need to comply with this regulation. This is both an opportunity and a threat: an opportunity because well-established AI vendors that have already incorporated proper ethics-based processes into their AI systems will have a head start, and a threat if your organization is not prepared for this approach.

A first step would be to incorporate the currently proposed regulation into your AI R&D process at a strategic level, guided by an AI regulation group or task force that can determine where your organization stands today relative to the future legislation and create a compliance roadmap. The group should oversee the end-to-end process for AI software design, production, marketing, deployment, and maintenance, map internal processes to the EU regulation, and make sure that regulation drives the roadmap as much as market and business needs do.

Ultimately there will be an impact on every company producing, selling, importing or deploying AI systems in Europe, so preparation for when the regulation becomes official needs to start now.

Click here if you would like to learn how we're using AI across our machine translation technology.

George Bara
Business Consultant
George Bara is an RWS Business Consultant for machine translation-related technologies and products. With a background in relational databases and rapid application development, George works with RWS’s commercial and government partners and customers, focusing on text analytics, eCommerce and online customer support.