The EU’s Artificial Intelligence Act: A Pragmatic Approach

Europe is forging ahead with rational regulation of artificial intelligence. It is likely to influence the entire world, much as the GDPR did for privacy.

The European Union has introduced a proposal to regulate the development of AI, with the goal of protecting the rights and well-being of its citizens. The Artificial Intelligence Act (AIA) is designed to address certain potentially risky, high-stakes use cases of AI, including biometric surveillance, bank lending, test scoring, criminal justice, and behavior manipulation techniques, among others. The goal of the AIA is to regulate the development of these applications of AI in a way that will foster increased trust in its adoption.

Similar to the EU’s General Data Protection Regulation (GDPR), the AIA will apply to anyone selling or providing relevant services to EU citizens. GDPR spearheaded data privacy regulations across the United States and around the world, and we expect this law to have a similar global impact.

Governmental oversight of emerging technology typically raises the tech community’s hackles. The community envisions slow progress, inefficiency, and bureaucratic overhead, all of which clash with its “move fast and break things” attitude. However, this particular initiative is both necessary and balanced. Some applications of AI, such as broad and indiscriminate surveillance in public spaces, can have tremendous societal impacts and clearly require careful and thoughtful treatment. Recent experience with social networks’ poor data stewardship, to cite just one example, suggests that leaving the industry to self-regulate will be neither sufficient nor effective.

The EU’s proposal is neither novel nor unique; it is similar to many local U.S. efforts where legislation is already underway. Portland, Oregon, for example, is one of the most recent cities to pass a law banning facial recognition technology. However, such regulations are local and fragmented, much like America’s data privacy laws. The EU’s approach to the use of AI has the merit of coherence. (It generally will ban real-time remote biometric identification systems, including facial recognition, in publicly accessible spaces, while allowing for some law enforcement exceptions.)

Not Banning AI

The Artificial Intelligence Act is not a quixotic attack on AI. Its specificity indicates careful consideration of AI’s benefits as well as its risks. The draft regulation is accompanied by an investment proposal, signaling an attempt to formulate an objective and measured approach that balances innovation against societal considerations.

Ambitious legislation such as this cannot escape skepticism and scrutiny, and affected constituencies are already weighing in. Some citizens’ rights groups have criticized the proposal as not being sufficiently stringent in restricting surveillance, while security agencies have begun advocating for exemptions and commercial interests are deriding the proposal for stymieing innovation. On this last point: while it is true that Europe lags behind the U.S. and China in AI technology development, it is an exaggeration to suggest that “the regulation would kneecap the EU’s nascent AI industry before it can learn to walk,” as one industry-oriented policy group told TechCrunch. Given its focus on “risky” applications, the Act is unlikely to impede advancements outside the restricted use cases. There are also precedents, such as the existing review processes banks use for AI-based risk management, which can serve as models for the AIA.

We expect that upcoming debates will clarify the issues, address potential weaknesses, and refine the proposal, as the Act makes its way through legislative deliberations. 

One area of weakness is Annex I, which attempts to define AI by listing specific underlying techniques, including machine learning, logic- and knowledge-based approaches, Bayesian estimation, and others. Such a list will inevitably be incomplete and will fail to anticipate novel methods that develop over time. A far better approach would be to define AI functionally – i.e., by what it does – rather than by the means through which such functionality is attained.

What Will the US Do?

The EU’s regulation of technology has played a role in shaping American attitudes. The adoption of GDPR in the EU led to a greater awareness of consumer data privacy rights in the United States. While this increased focus has not yet led to a national data privacy standard, individual states such as California are instituting laws based on GDPR. The adoption of the AIA will, we hope, give rise to similar awareness and action in the United States. 

There is already momentum towards this in local American communities, if not yet at the federal level. Given concern over the use of AI in police surveillance, for instance, there is interest in reducing or eliminating biases currently built into the software. However, as with data privacy laws, a unified body of federal legislation is likely to be more effective than piecemeal local efforts.  

In the face of these complicated issues, the EU is taking an approach that is both thoughtful and pragmatic, stimulating a much-needed global dialogue about AI and the need for targeted regulation.
