
By: Laura T. Geyer, Of Counsel, and Andrew J. Costa, Associate


Despite the fervor surrounding AI in the United States, this month the European Union leapfrogged the US and passed the world’s first comprehensive AI regulatory scheme. It should come as no surprise to those of us who’ve watched closely that the EU’s Artificial Intelligence Act (the “Act”) mirrors many of the common values found in the emerging global consensus – like the G7’s Hiroshima Process and President Biden’s October 2023 Executive Order on AI.

But what’s special about the Act, and what does it mean for US businesses that either operate in the EU or whose products and services reach EU markets? And could the Act be a precursor for something similar in the US? In this article we first explore the Act, with particular focus on its novel categorization system for AI, and then identify important considerations for US businesses going forward. An effective AI strategy should include, at a minimum, proactively addressing potential AI risks, tailoring internal policies to accommodate AI use (or non-use), and conducting careful due diligence with vendors and suppliers on their AI use. The Act provides steep penalties for noncompliance (including up to 7% of a company’s global revenue) and is extraterritorial in scope, which means US businesses could face penalties if their goods or services reach EU markets. We discuss all this and more below.

What Is the Artificial Intelligence Act?

The Artificial Intelligence Act (the “Act”) is a landmark set of regulations passed by Members of the European Parliament (MEPs) on March 13, 2024 that seeks to set the standard for how governments can effectively regulate AI across the globe. Although negotiations took five years, the Act ultimately passed with impressive support: it was endorsed unanimously by the EU’s 27 member states and approved by nearly 70% of the voting Parliament. It’s the first comprehensive AI regulation to be adopted by any government and, like much of the emerging consensus on guidance around the world, it endorses a “human-centric” approach to AI regulation – meaning that it seeks (1) to keep humans in control of the technology (not vice-versa) and (2) to ensure the technology both safely and effectively helps society advance and humans thrive. Like efforts in the US (e.g., the President’s October 2023 Executive Order on AI), the Act adopts a “risk-based” approach that tailors AI scrutiny to the degree of risk posed by the specific category of AI (more on that below).

The Act, however, is not yet the official law of the land. A few minor quirks of EU parliamentary procedure remain – none of which pose any serious threat to the Act’s “entry into force.” These include a “lawyer-linguist” check, a corrigendum procedure that corrects textual and drafting errors, and final adoption by the EU Council. The EU expects these steps to be complete by May 2024, when the Act will be formally published in the Official Journal of the EU; the Act enters into force 20 days after publication – so around June 2024. That said, the Act’s most burdensome provisions will not take immediate effect and instead will be phased in over the subsequent 36 months, into 2027.

Key Attributes of the Act

Of the Act’s various features, the most important is its classification system for distinguishing AI by the level of risk the technology presents. This classification determines the obligations, restrictions and requirements that must be met. The riskiest applications are banned outright, while the least risky are left unregulated. The categories are: (1) unacceptable risks; (2) high-risk applications; (3) limited (or transparency) risks; and (4) minimal risks. Interestingly, the Act does not provide examples of satisfactory compliance, but instead leaves it to AI providers to develop measures that meet the Act’s general requirements. Before diving into the specifics of each category, it’s important to note that the Act also requires each EU member state to establish its own enforcement agency to ensure compliance, and levies steep fines against non-compliant businesses of up to €35 million or 7% of the business’s global revenue, whichever is higher (see the Act, Article 71, pg. 225). Thus, it is paramount that businesses be aware of these risk classifications; determine into which category their products and services fall; and comply with the respective requirements – even if their products or services only remotely touch the EU market.

First, AI systems classified as an “unacceptable risk” are banned outright under the Act. These include technologies like: biometric categorization systems (i.e., systems that infer a person’s race, political opinions, union status, religious beliefs, sexual orientation, etc.); social scoring systems; facial recognition databases built by untargeted scraping of images from the internet or CCTV footage; systems that infer emotions in workplaces or educational institutions; real-time remote biometric identification (RBI) systems in public places (except for law enforcement in limited circumstances); and systems that deploy manipulative or deceptive techniques to engineer social behavior or exploit the vulnerabilities of AI users. These services are entirely unlawful in the EU.

Next, AI systems labeled “high risk” are subject to the most stringent regulation – in fact, most of the Act addresses these kinds of technologies. Examples include AI systems that relate to: critical infrastructure (i.e., systems intended as safety components in roadways, water, gas, electricity, etc.); biometric identification (excluding systems whose sole purpose is to verify a person’s identity); education and vocational training (including admissions software and systems that assess learning outcomes or evaluate test results); employment and worker selection; access to essential public and private services; and law enforcement. For these “high-risk” AI systems, developers and providers must, among other things: (1) establish a risk management system; (2) perform data governance to ensure data integrity; (3) create technical documentation (that demonstrates compliance and provides authorities with the necessary information); (4) deploy record-keeping processes for events throughout the AI’s life cycle; and (5) ensure the accuracy and cybersecurity of the services.

The Act also singles out “General Purpose AI” – a novel category that covers generative AI systems like ChatGPT, Midjourney and DALL-E – for obligations of its own. For these systems, the Act requires providers to: (1) create technical documentation that addresses training, testing and evaluation results; (2) create information and documentation for use by downstream providers; (3) establish policies that respect the EU’s Copyright Directive; and (4) publish summaries detailing the content used for training.

Lastly, for AI systems classified as “limited” or “transparency” risks – customer service chatbots, for example – the Act requires providers to disclose that content is AI-derived and to inform the public that the product or service is indeed an AI system. Remaining “minimal risk” systems, like spam filters and recommendation engines, are left unregulated.

Could the EU AI Act Be a Harbinger of a US AI Act?

Yes and no. While it is possible that the United States could adopt a similar system for addressing the use of AI by legislation or Executive Order, that is not imminent – passing legislation requires agreement among lawmakers and time, while an Executive Order is limited in both scope and permanence. However, companies should assume that aspects of the EU AI Act’s rubric will become part of US law and practice over time, both because interest in and concern about AI is accelerating and because AI already implicates issues that are embedded in existing law or could be raised in civil, criminal, or administrative contexts. These issues will become more politically urgent as stories that reflect the dangers of AI on a personal level, especially horror stories, arise in the media and become part of the ongoing public debate about AI.

  • We can expect that US law concerning AI will reflect a generally more pro-business, innovation-oriented approach than the typically more human-centric EU Act. This dichotomy was already clear in the differing US and EU approaches to data privacy – the EU’s comprehensive GDPR versus the US’s patchwork of sector- and state-specific laws.
  • The most likely manifestation of AI law in the US will resemble the risk-based approach evident in the Biden Administration’s October 2023 Executive Order on AI.
  • Any US law will likely include key, universal values that overlap with those in the EU AI Act. These might include protection of individual information already treated as “private” under state or federal law; consumer protection against unauthorized, unethical, or deceptive uses of AI (e.g., biometric data collection in public spaces, social scoring, or manipulative advertising); and safeguards against offensive uses of AI to corrupt the political system or disrupt critical infrastructure.

What is clear, however, is that AI use in both the public and private sectors is already much broader than what the general public perceives (mainly large language models, like ChatGPT), and any forthcoming US regulation will likely seek to address nefarious (or dangerous) applications before others.

What Should US Businesses Do?

If a US business provides an AI product or service, it would be well-advised to comply with the EU AI Act as far as possible, because the Act’s application is extraterritorial. Given the international availability of the sort of data and services the Act covers, hiding under a “but we only do business in the US” umbrella may do little to protect a company from enforcement under the Act – or from public opprobrium arising from “bad things” that happen in Europe when AI results in damage to persons or entities there. Compliance means, at a minimum, meeting the Act’s disclosure and transparency requirements; identifying ways in which a company’s activities (whether deliberate or latent) may implicate other aspects of AI covered by the Act; and modifying technologies and systems to avoid trouble.

US businesses may also unknowingly depend on AI-generated data or AI technologies in making, advertising, and delivering goods or services (whether internally or through vendors). Companies should thoroughly review their systems (hiring consultants if need be) to identify potential uses of AI and understand where key vulnerabilities lie. As many companies have learned to their sorrow, it is far better to be proactive in addressing AI-related issues than reactive. We do know that the Act requires businesses to create and implement their own compliance systems – but it offers no model to work from. We also do not yet know what enforcement of the Act will look like. Will a company face civil litigation? Criminal enforcement? Action by one of the agencies that the Act empowers each member state to create?

Since we will only learn what enforcement under the Act looks like by watching it happen, it is best from a risk-management and public relations perspective to make reasonable efforts to comply with the Act. In legal venues and in the public arena, “good faith” efforts at compliance – especially where there is limited guidance – have traditionally been, and can be expected to remain, protective and helpful. Please join us next month, when we will discuss in more detail the elements we recommend for an AI governance plan – relevant not only for businesses that may be subject to the Act, but also for businesses that truly operate solely within the United States.

First of its Kind – the EU AI Act Passes, and What it Means for US Businesses