Can the U.S. and Europe Agree on Rules for AI?

As EU and U.S. leaders meet in Washington for the joint Trade and Technology Council, the need for the proposed “transatlantic accord on artificial intelligence” is greater than ever. But the two sides have differing agendas, and agreement is uncertain.

Just weeks after Joseph Biden was elected President of the United States in 2020, European Commission President Ursula von der Leyen, speaking to the Boston Global Forum, proposed that the U.S. and Europe develop a Transatlantic Agreement on Artificial Intelligence: “We want to set a blueprint for regional and global standards aligned with our values: human rights, and pluralism, inclusion and the protection of privacy.” Such a blueprint could guide other democracies, she said.

Von der Leyen explained why creating such a blueprint is imperative: “AI can have profound impacts on the life of the individual. AI may influence who to recruit for a certain post or whether to grant a certain pension application. For people to accept a role for AI in such decisions, they must be comprehensible. And they must respect people’s legal rights – just like any human decision-maker must.”

Governor Michael Dukakis, chair of the Boston Global Forum, replied, “We are…at one with President von der Leyen on the need for an international accord on the use of artificial intelligence, based on shared values and democratic traditions, an accord that will require sustained transatlantic leadership if it is to be realized.”

Speaking at the Munich Security Conference a few months later, President Biden addressed the impact of new technologies on democratic values, saying, “We must shape the rules that will govern the advance of technology and the norms of behavior in cyberspace, artificial intelligence, biotechnology so that they are used to lift people up, not used to pin them down.  We must stand up for the democratic values that make it possible for us to accomplish any of this, pushing back against those who would monopolize and normalize repression.”

These initial statements from EU and U.S. leaders laid the foundation for the EU-US Trade and Technology Council, created in June 2021 to promote transatlantic trade aligned with democratic values. A year and a half later, the group’s third ministerial-level meeting takes place next week, on Monday, December 5th, in Washington, D.C. The Center for AI and Digital Policy, which we lead, has created a resource page to help reporters, policymakers, and the general public follow the sometimes-complicated work of this critical body.

So it’s time to take stock of progress toward a Transatlantic Accord on AI between the U.S. and EU as they seek to advance their joint commitment to drive digital transformation and cooperate on new technologies based on shared democratic values, including respect for human rights. Despite the earlier statements, it’s not clear that significant progress on an accord will emerge from the upcoming Council meeting, which aims to deal with a raft of tech-related issues.

On the EU side, there has been steady progress on an EU AI Act. The Czech Presidency of the Council of the European Union has just wrapped up final changes for the Council position, and the European Parliament is moving toward a final report on the proposed legislation. Decisions remain to be made about the scope of regulation, the classification of AI systems, and an oversight mechanism. Those decisions depend on the outcome of the “trilogue” among the EU institutions (the European Commission, the Council, and the European Parliament), but there is broad agreement on the need for an EU-wide law. Either in parallel with the EU Act or slightly afterward will come a Council of Europe Convention on AI. As with earlier COE conventions, such as the Budapest Convention on Cybercrime and Convention 108+ on data protection, the COE AI Treaty will be open for signature by both member and non-member states, opening the possibility of a broader international AI treaty uniting democratic nations in support of fundamental rights, the rule of law, and democratic institutions.

But on the U.S. side, the story is more mixed. Secretary Blinken explained the government’s priorities in July 2021: “More than anything else, our task is to put forth and carry out a compelling vision for how to use technology in a way that serves our people, protects our interests and upholds our democratic values.” Although several bills have been introduced in Congress for the regulation of AI, no legislation is currently heading to the President’s desk that would require safeguards on AI systems, algorithmic accountability, or transparency. At the state and local level, new laws are emerging, such as the New York City AI Bias Law. At the federal level, President Trump issued Executive Order 13960 in December 2020, establishing principles for the use of AI in the Federal Government and requiring federal agencies to design, develop, acquire, and use AI in a manner that fosters public trust and confidence while protecting privacy, civil rights, civil liberties, and American values, consistent with applicable law. However, adoption and implementation of the executive order vary widely across agencies.

In October 2022, the White House Office of Science and Technology Policy released the landmark report Blueprint for an AI Bill of Rights, which could provide the basis for AI legislation in the next Congress. A similar report by a U.S. government agency in the early days of computing led to comprehensive privacy legislation that established baseline safeguards and helped enable the adoption of computing systems across the federal government.

Still, the United States struggles with transparency and public participation in the formulation of its national AI strategy, in a way that might surprise citizens of other democratic nations. The notoriously secretive National Security Commission on AI (NSCAI), established by Congress in 2018 and chaired by former Google CEO Eric Schmidt, issued a report in 2021 that emphasized the risk of falling behind China in AI, and then disbanded. But it subsequently spawned the Special Competitive Studies Project (SCSP), bankrolled personally by Schmidt. The SCSP has proposed, without irony, a new “technological-industrial” strategy that aims to direct federal funding to the tech industry to maintain a U.S. competitive lead over China. The group’s work muddies the waters: while it appears to represent the American view, it ignores the social and political consequences of AI deployment.

There is also a newly established National AI Advisory Committee (NAIAC), which is expected to prepare a report for the President and Congress in the next year on many AI issues, including whether “ethical, legal, safety, security, and other appropriate societal issues are adequately addressed by the nation’s AI strategy.” The Advisory Committee is also expected to make recommendations on opportunities for international cooperation on regulations and on matters relating to oversight of AI systems. But it does not seem to have been consulted about the upcoming meeting of the Trade and Technology Council.

The NAIAC has held two public meetings so far. Both were essentially one-way webcasts, with little opportunity for public comment. A last-minute request for public comment before the most recent meeting, in October, elicited four responses, two of them from our organization. This process on the U.S. side contrasts sharply with the extensive public participation in the early development of the EU White Paper on Artificial Intelligence and the draft EU AI Act, both of which drew widespread comment in Europe.

Ahead of the upcoming third Trade and Technology Council Ministerial, the EU-based Trade and Technology Dialogue invited a public exchange with the European Commission leaders participating in the meeting. But on the U.S. side, there has been no process for public participation in advance of the meeting, nor has the Commerce Department provided updates about the progress of its working groups.

The difficulties in building the TTC transatlantic bridge are surprising, not only because of the earlier statements from EU and U.S. leaders and their apparent shared strategic interests, but also because the EU and the U.S. previously worked closely together on a global framework for AI and democratic values. The U.S., as well as EU member states, led the effort to establish the Organization for Economic Cooperation and Development (OECD) AI Principles, the first global framework for the governance of AI. The OECD AI Principles state that governments should promote the development of trustworthy AI that respects human rights and democratic values.

According to POLITICO (subscription required), several announcements are expected at the upcoming meeting, including a “road map” for how trustworthy artificial intelligence can be developed to meet both EU and U.S. needs. That will include efforts, based on existing work from the OECD, to create a common definition and methodology for determining whether companies are upholding principles about what can and cannot be done with this emerging technology. Marisa Lago, U.S. Commerce Department Undersecretary for International Trade, recently told the U.S. Chamber of Commerce: “We think that this is a mutual priority that is going to grow in scope as new AI applications come online and as more authoritarian regimes are taking a very different approach to the issues of security and risk management.”

Still, these expected announcements set a low bar compared with the first meeting of the TTC, when EU and U.S. representatives announced their intent to “cooperate on the development and deployment of new technologies in ways that reinforce our shared democratic values, including respect for universal human rights.” At that meeting in Pittsburgh, negotiators warned that AI can threaten shared values and fundamental freedoms if it is not developed and deployed responsibly, or if it is misused. That statement called for responsible development of AI grounded in human rights, inclusion, diversity, innovation, economic growth, and societal benefit. And it specifically called out AI systems that infringe upon fundamental freedoms and the rule of law, “including through silencing speech, punishing peaceful assembly and other expressive activities, and reinforcing arbitrary or unlawful surveillance systems.”

The EU and U.S. negotiators could, for example, follow the lead of Michelle Bachelet, the former High Commissioner for Human Rights at the UN. As Commissioner, Bachelet urged a moratorium on the sale and use of AI that poses a serious risk to human rights until adequate safeguards are put in place. She also called for a ban on AI applications that do not comply with international human rights law. We fully support that recommendation. Now would be the appropriate time for the EU and the U.S. to take at least one urgent step and end the use of facial recognition for mass surveillance, one of the most controversial applications of AI technology.

Part of the problem today is that many in the U.S. government, following the tech industry’s (and Schmidt’s) lead, view AI policy primarily through the China lens, a necessary but incomplete perspective. Since China is now Europe’s primary trading partner, efforts by the U.S. to align Europe behind a predominantly anti-China policy, as was attempted during the Trump years, are unlikely to succeed. And while there is support on the European side for a transatlantic call for “democratic values,” there is also growing skepticism and a belief that the U.S. formulation is little more than a trade policy aimed at conferring national economic advantage.

But von der Leyen’s call for a transatlantic AI accord based on human rights, pluralism, inclusion and the protection of privacy resonates today on both sides of the Atlantic. Indeed, the first goal of the TTC, endorsed by von der Leyen and Biden, was to ensure that the EU and the U.S. “Cooperate in the development and deployment of new technologies based on shared democratic values, including respect for human rights.”

Both the U.S. and the EU must now quickly take concrete steps as the challenges of AI governance mount, carrying forward into legislative outcomes the commitments made at the first TTC.

This is necessary not only to safeguard our own democratic societies but also to make clear to other countries that are moving forward with national AI strategies that mere technical standards are not a substitute for the rule of law. A recent Manifesto prepared by scholars on both sides of the Atlantic called attention to concerns about the growing weakness of democratic institutions, particularly when it comes to implementing effective technology policy. The scholars warned of AI’s potential to undermine existing law and fundamental rights, and explained that there is a “growing gap between AI development and our institutions’ capabilities to properly govern them.”

Whether it will be possible for the U.S. and Europe to close that gap depends urgently on the outcome of the upcoming Trade and Technology Council meeting.

Marc Rotenberg and Merve Hickok are President and Chair of the Center for AI and Digital Policy, a global network of AI policy experts and advocates in more than 60 countries. The Center publishes the AI and Democratic Values Index, the first report to rate and rank national AI policies and practices.
