The adoption of the Artificial Intelligence (AI) Act in the European Union (EU) this year has triggered speculation about the potential for a ‘Brussels effect’: when EU regulation has a global impact because companies adopt the rules to make it easier to operate internationally, or because new laws elsewhere are modelled on the EU’s approach. The way in which the General Data Protection Regulation (GDPR), the EU’s rules on data privacy, influenced state-level legislation and corporate self-governance in the United States is a prime example of how this can happen, particularly when federal legislation is stalled and states take the lead, which is where US AI governance is today.

So far, there is limited evidence that states are following the EU’s lead when drafting their own AI legislation. There is, however, strong evidence of lobbying of state legislators by the tech industry, which does not seem keen on adopting the EU’s rules and is instead pressing for less stringent legislation that minimizes compliance costs but, ultimately, offers individuals less protection. Two enacted bills in Colorado and Utah and two draft bills in Oklahoma and Connecticut, among others, illustrate this.

A major difference between the state bills and the AI Act is their scope. The AI Act takes a sweeping approach aimed at protecting fundamental rights and establishes a risk-based system, where some uses of AI, such as the ‘social scoring’ of people based on factors such as their family ties or education, are prohibited. High-risk AI applications, such as those used in law enforcement, are subject to the most stringent requirements, and lower-risk systems have fewer or no obligations.

In contrast, the state bills are narrower. The Colorado legislation drew directly on the Connecticut bill, and both include a risk-based framework, but one of more limited scope than the AI Act’s. The framework covers similar areas, including education, employment and government services, but only systems that make ‘consequential decisions’ affecting consumers’ access to those services are deemed ‘high risk’, and there are no bans on specific AI use cases. (The Connecticut bill would ban the dissemination of political deepfakes and non-consensual explicit deepfakes, for example, but not their creation.) Additionally, definitions of AI vary between the US bills and the AI Act.

Although there is overlap between the Connecticut and Colorado bills and the AI Act in terms of the documentation they require companies to create when developing high-risk AI systems, the two state bills bear a much stronger resemblance to a model AI bill created by US software company Workday, which develops systems for workforce and finance management. The Workday document, which was shared in an article by cybersecurity news platform The Record in March, is structured around the obligations of AI developers and deployers, and regulates systems used in consequential decisions, just like the Colorado and Connecticut bills. Indeed, the documentation that those bills say AI developers should produce is similar in scope and wording to an impact assessment that the Workday draft bill suggests should be produced alongside proposals for AI systems. The Workday document also contains language similar to bills introduced in California, Illinois, New York, Rhode Island and Washington. A spokesperson for Workday says it has been transparent about playing “a constructive role in advancing workable policies that strike a balance between protecting consumers and driving innovation”, including “providing input in the form of technical language” informed by “policy conversations with lawmakers” globally.

The wider tech industry’s power, however, can extend beyond this kind of passive inspiration. The Connecticut draft bill did contain a section on generative AI inspired by part of the AI Act, but it was removed after concerted lobbying from industry. And although the bill then received support from some big tech companies, it is still in limbo. Industry associations maintain that the bill would stifle innovation, prompting the governor of Connecticut, Ned Lamont, to threaten to veto it. Its progress is frozen, as is that of many of the other, more comprehensive AI bills being considered by various states. The Colorado bill is expected to be altered to avoid hampering innovation before it takes effect.

One explanation for the absence of a Brussels effect, and for the strong ‘big-tech effect’ on state laws, is that the legislative debate on AI is more advanced at the US federal level than the data-protection debate was when the GDPR emerged. This includes a policy roadmap from the Senate and active input from industry players and lobbyists. Another explanation is the hesitancy embodied by Governor Lamont: in the absence of unified federal laws, states fear that strong legislation would drive a local tech exodus to states with weaker rules, a risk that was less pronounced for data-protection legislation.

For these reasons, lobbying groups claim to prefer national, unified AI regulation over state-by-state fragmentation, a line that big tech companies have parroted in public. But in private, some advocate light-touch, voluntary rules all round, revealing their dislike of both state and national AI legislation. If neither kind of regulation emerges, AI companies will have preserved the status quo: a bet that two divergent regulatory environments in the EU and the United States, with a light-touch regime in the latter, favour them more than a harmonized but heavily regulated system would.

As with the GDPR, there might be cases in which compliance with EU rules makes business sense for US firms, but overall the United States would be left less regulated, and individuals there less protected from AI abuses. Although Brussels faced its fair share of lobbying and compromises, the core of the AI Act remained intact. We will see whether US state laws stay the course.

Competing Interests

The author declares no competing interests.


