The European Commission recently released its General-Purpose AI Code of Practice—a framework meant to help companies demonstrate compliance with the EU’s Artificial Intelligence Act (AIA). Although technically voluntary, companies that fail to sign on risk steep penalties and heightened regulatory scrutiny. With the Act’s “general obligations” set to take effect on August 2nd, U.S. companies face a difficult choice: adopt the code and accept strict development constraints, or risk multimillion-euro fines and costly legal battles. Either way, innovation and competitive AI development may be the biggest losers.
The AI Act is a sweeping legislative effort that sets requirements for AI development, mandates transparency for copyrighted training data, and bans the creation of certain models—particularly those using facial recognition. While the Act is already burdensome, industry leaders warn that the new Code of Practice further escalates oversight by requiring routine model testing and giving regulators unobstructed access to AI systems.
The main criticism of the Act is its confusing nature. AI models are placed into three overlapping risk categories—minimal, high, and unacceptable—none of which has a clear, empirical threshold. This ambiguity leaves room for politically motivated enforcement and inconsistent compliance expectations.
The Act’s rollout has been similarly confusing: it officially entered into force in 2024; bans on prohibited models went into effect last February; obligations for general-purpose models (e.g., ChatGPT, Llama, Grok) take effect in August; and high-risk system regulations begin in 2026. With four implementation stages and three risk thresholds, none of them clearly defined, tech companies are struggling to know which requirements apply and when, fueling regulatory uncertainty and rising compliance costs.
Additionally, the use of vague language and newly introduced legal terms has added to the headaches. “Systemic risk” was coined to cover general-purpose models that pose broad societal risks that are “difficult to quantify.” Unfortunately, this means that what constitutes systemic risk is equally difficult to quantify. With “systemic” left undefined, developers are uncertain about how to maintain compliance.
The economic impact of this ambiguity is enormous. Compliance could raise AI development overhead by 17%, and early estimates suggest companies have already spent €30 billion on compliance. Profits are expected to fall by 40%. Small and medium-sized enterprises will be hit hardest: over 50% expect sharp cost increases, and 16% are already considering relocation. The rising costs and decreased profitability are expected to reduce total investment in AI technologies by 20–30%. These burdens risk turning Europe from a leading innovator into a regulatory dead zone.
The new Code of Practice fails to clarify key questions and instead amplifies concerns about political bias. European courts have a track record of targeting U.S. firms under vague rules—GDPR being a clear example. With risk thresholds undefined and legal terms malleable, U.S. companies are left exposed to costly, politically motivated legal actions.
In trying to improve transparency, the EU has constructed a regulatory labyrinth, trapping a bullish AI market. The AI Act threatens to stall innovation, discourage investment, and suffocate Europe’s AI sector under layers of uncertainty. The Code of Practice, far from a solution, may be the final straw.
AI holds the potential to revolutionize productivity, research, and wealth creation. But innovation cannot flourish amid this regulatory nightmare. The EU’s AI Act doesn’t just fall short of its goal of enabling AI innovation—it may spell the end of the sector altogether.
Author: Caden Hubbs
This content is courtesy of, and owned and copyrighted by, https://www.atr.org and its author.