EU’s New AI Code of Practice Could Set Regulatory Standard for American Companies


American companies are split between support and criticism of a new voluntary European AI code of practice, meant to help tech companies align themselves with upcoming regulations from the European Union’s landmark AI Act.

The voluntary code, called the General Purpose AI Code of Practice, which rolled out in July, is meant to help companies jump-start their compliance. Even non-European companies will be required to meet certain standards of transparency, safety, security and copyright compliance to operate in Europe come August 2027.

Many tech giants have already signed the code of practice, including Amazon, Anthropic, OpenAI, Google, IBM, Microsoft, Mistral AI, Cohere and Fastweb. But others have refused.

In July, Meta’s Chief Global Affairs Officer Joel Kaplan said in a statement on LinkedIn that the company would not commit.

“Europe is heading down the wrong path on AI. We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it,” he wrote. “This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

Google’s President of Global Affairs Kent Walker criticized the code of practice in a company statement, but said Google has signed it.

“We remain concerned that the AI Act and Code risk slowing Europe’s development and deployment of AI,” Walker wrote. “In particular, departures from EU copyright law, steps that slow approvals, or requirements that expose trade secrets could chill European model development and deployment, harming Europe’s competitiveness.”

The divergent approach of U.S. and European regulators has showcased a clear difference in attitude about AI protections and development between the two markets, said Vivien Peaden, a tech and privacy attorney with Baker Donelson.

She compared the approaches to cars — Americans are known for fast, powerful vehicles, while European cars are stylish and eco-friendly.

“Some people will say, I’m really worried that this engine is too powerful. You could drive the car off a cliff, and there’s not much you can do but to press the brake and stop it, so I like the European way,” Peaden said. “My response is, ‘Europeans make their car their way, right? You can actually tell the difference. Why? Because it was designed with a different mindset.’”

While the United States federal government has recently enacted some AI legislation through the Take It Down Act, which prohibits AI-generated nonconsensual intimate depictions of individuals, it has not passed any comprehensive laws on how AI may operate. The Trump administration’s recent AI Action Plan clears a path for AI companies to continue growing rapidly and largely unregulated.

But under the EU’s AI Act, tech giants like Amazon, Google and Meta will need to be more transparent about how their models are trained and operated, and follow rules for managing systemic risks if they’d like to operate in Europe.

“Currently, it’s still voluntary,” Peaden said. “But I do believe it’s going to be one of the most influential standards in AI’s industry.”

General Purpose AI Code of Practice

The EU AI Act was passed last year to mitigate risk created by AI models, and the law creates “strict obligations” for models that are considered “high risk.” High-risk AI models are those that can pose serious risks to health, safety or fundamental rights when used for employment, education, biometric identification and law enforcement, the act says.

Some AI practices — including AI-based manipulation and deception, predictions of criminal offenses, social scoring, emotion recognition in workplaces and educational institutions, and real-time biometric identification for law enforcement — are considered “unacceptable risk” and are banned from use in the EU altogether.

Some of these practices, like social scoring — using an algorithm to determine access to certain privileges or opportunities, such as mortgages or jail time — are widely used, and often unregulated, in the United States.

While AI models released after Aug. 2 already have to comply with the EU AI Act’s standards, large language models (LLMs) — the technical foundation of many AI systems — released before that date have until August 2027 to fully comply. The code of practice released last month offers companies a voluntary way to come into compliance early, and with more leniency than they will get once the 2027 deadline hits, it says.

The code of practice has three chapters: transparency, copyright, and safety and security. The copyright requirements are likely where American and European companies are most sharply split, said Yelena Ambartsumian, founder of tech consultancy firm Ambart Law.

In order to train LLMs, developers need a broad, high-quality dataset with good grammar, Ambartsumian said. Many American developers have turned to pirated collections of books.

“So [American companies] made a bet that, instead of paying for this content, licensing it, which would cost billions of dollars, the bet was, okay, ‘we’re going to develop these LLMs, and then we’ll deal with the fallout, the lawsuits, later,’” Ambartsumian said. “But at that point, we’ll be in a position where, because of our war chest, or because of our revenue, we’ll be able to deal with the fallout of this fair use litigation.”

And those bets largely worked out. In two recent lawsuits, Bartz v. Anthropic and Kadrey v. Meta, judges ruled in favor of the AI developers based on the “fair use” doctrine, which allows people to use copyrighted material without permission in certain journalistic or creative contexts. In AI developer Anthropic’s case, Judge William Alsup likened the training process to how a human might read, process, and later draw on a book’s themes to create new content.

But the EU’s copyright policy bans developers from training AI on pirated content and says companies must also comply with content owners’ requests not to use their works in training datasets. It also outlines transparency rules for web crawlers — the automated tools AI models use to comb the internet for information. AI companies will also have to routinely update documentation about their AI tools and services for privacy and security.

The requirements of the EU’s AI Act apply to general purpose AI models, nearly all of which are built by large American corporations, Ambartsumian said. Even when a smaller AI model comes along, it’s often quickly purchased by one of the tech giants, or they develop their own version of the tool.

“I would also say that in the last year and a half, we’ve seen a big shift where no one right now is trying to develop a large language model that isn’t one of these large companies,” Ambartsumian said.

Regulations could bring markets together

There’s a “chasm” between the huge American tech companies and European startups, said Jeff Le, founder and managing partner of tech policy consultancy 100 Mile Strategies LLC. There’s a sense that Europe is trying to catch up with the Americans who have had unencumbered freedom to grow their models for years.

But Le said he thinks it’s interesting that Meta has categorized the code of practice as overreach.

“I think it’s an interesting comment at a time where Europeans understandably have privacy and data stewardship questions,” Le said. “And that’s not just in Europe. It’s in the United States too, where I think Gallup polls and other polls have revealed bipartisan support for consumer protection.”

As the code of practice says, signing now will reduce companies’ administrative burden when the AI Act goes into full enforcement in August 2027. Le said that relationships between companies that sign could garner them more understanding and familiarity when the regulatory burdens are in place.

But some may feel the transparency or copyright requirements could cost them a competitive edge, he said.

“I can see why Meta, which would be an open model, they’re really worried about (the copyright) because this is a big part of their strategy and catching up with OpenAI and (Anthropic),” Le said. “So there’s that natural tension that will come from that, and I think that’s something worth noting.”

Le said that the large AI companies are likely trying to anchor themselves toward a framework that they think they can work with, and maybe even influence. Right now, the U.S. is a patchwork of AI legislation. Some of the protections outlined in the EU AI Act are mirrored in state laws, but there’s no universal code for global companies.

The EU’s code of practice could end up being that standard-setter, Peaden said.

“Even though it’s not mandatory, guess what? People will start following,” she said. “Frankly, I would say the future of building the best model lies in a few other players. And I do think that … if four out of five of the primary AI providers are following the general purpose AI code of practice, the others will follow.”

Editor’s note: This item has been modified to revise comments from Jeff Le.

by Paige Gross, Virginia Mercury


Virginia Mercury is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Virginia Mercury maintains editorial independence. Contact Editor Samantha Willis for questions: info@virginiamercury.com.
