A coalition of over 60 civil society organizations has issued an urgent call to EU legislators to preserve a critical EU AI Act transparency safeguard for high-risk AI systems. The provision, targeted for deletion under the proposed AI Omnibus package, is one the signatories consider essential to the accountability and enforceability of Europe's landmark AI regulation and its wider risk-based framework.
As organizations prepare for the EU AI Act, a proposed simplification measure threatens to undermine one of the regulation's core accountability mechanisms. At the center of the debate is Article 49(2), a transparency safeguard requiring providers of AI systems that fall under the high-risk categories in Annex III to register in a publicly accessible EU database if they choose to exempt themselves from the Act's most stringent high-risk obligations.
What Is at Stake in the AI Omnibus Proposal
Under Article 6(3) of the AI Act, providers may unilaterally determine that their system does not pose a significant risk of harm and opt out of the full set of high-risk requirements. The Article 49(2) safeguard exists specifically to prevent abuse of this self-assessment mechanism by ensuring that every exemption is recorded and publicly visible. The AI Omnibus, however, proposes deleting this registration requirement entirely, weakening key transparency measures that help regulators, citizens, and other stakeholders understand how certain AI systems are classified in practice.
In a joint open letter published by Access Now, signatories including European Digital Rights (EDRi), Amnesty Tech, and the European Consumer Organisation (BEUC) warned that removing the safeguard would “create a gaping loophole and undermine the core functioning of the AI Act.”
Why EU AI Act Transparency Matters for High-Risk AI Systems
The coalition outlines three key consequences of the proposed deletion. First, market surveillance authorities and national competent authorities would lose the ability to track how many companies are exempting themselves from high-risk obligations, making it nearly impossible to spot enforcement discrepancies across member states. Second, the removal would create a perverse incentive for providers, including downstream AI system providers, to sidestep the Act's requirements, ultimately undercutting responsible companies that invest in robust compliance and ethical governance. Third, the public, civil society, and researchers would have no way of knowing which providers have opted out, eroding public trust and removing a vital layer of accountability from the EU's legal framework for AI.
This issue also matters beyond classic high-risk use cases. In the AI Act's tiered approach, which runs from prohibited practices posing unacceptable risk, through high-risk systems, to limited-risk systems subject to disclosure duties and minimal-risk tools, transparency is the connective tissue that makes enforcement credible. That includes disclosure requirements around synthetic content and deepfakes, and around capabilities such as emotion recognition where applicable, alongside the Act's other transparency obligations.
The European Commission's justification for the change estimates that the registration obligation costs an average of just €100 per company. The coalition argues that this saving is “severely disproportionate” to the risks introduced, warning that the change could effectively turn the AI Act into “a piece of optional self-regulation.”
What This Means for Organizations Deploying AI in Europe
For businesses developing their EU AI Act compliance strategies, this debate carries significant implications, not only for providers but also for deployers who put AI systems to use in real-world settings. A strong transparency framework benefits the entire ecosystem by building public trust and ensuring a level playing field. If the safeguard is removed, it could become harder for organizations to demonstrate their commitment to trustworthy AI and to differentiate themselves from competitors who quietly bypass key obligations, weakening the compliance signals that many buyers now demand as part of procurement due diligence.
Successfully navigating the EU AI Act requires a proactive approach to governance that goes beyond baseline compliance. This includes implementing comprehensive risk management systems, preparing for obligations such as fundamental rights impact assessments for high-risk systems, and maintaining fit-for-purpose documentation and recordkeeping for classification decisions and disclosures (a simple internal record of this kind is sketched below). For organizations building on general-purpose AI (GPAI) models or integrating them into downstream products, clarity on roles and responsibilities, including those of downstream AI system providers, will be essential.
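To make the documentation point concrete, here is a minimal, purely illustrative sketch in Python of how a provider might record an Article 6(3) classification decision internally. Nothing in the Act prescribes this structure; the class name, every field name, and the helper method are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical internal record for an Article 6(3) classification decision.
# The AI Act does not prescribe this format; it is one way a provider could
# keep its self-assessment and registration status auditable.
@dataclass
class ClassificationDecision:
    system_name: str                      # internal identifier for the AI system
    annex_iii_category: str               # the Annex III use case the system falls under
    exemption_claimed: bool               # True if relying on the Art. 6(3) carve-out
    exemption_rationale: str              # documented reasons for "no significant risk"
    assessed_on: date                     # date of the self-assessment
    eu_database_id: Optional[str] = None  # Art. 49(2) registration reference, if any
    internal_reviewers: list[str] = field(default_factory=list)  # sign-off trail

    def registration_outstanding(self) -> bool:
        """Flag records where an exemption is claimed but no EU database
        entry has been captured yet."""
        return self.exemption_claimed and self.eu_database_id is None


# Example: a record an internal compliance check would flag for follow-up.
record = ClassificationDecision(
    system_name="cv-screening-assistant",
    annex_iii_category="employment and worker management",
    exemption_claimed=True,
    exemption_rationale="Performs a narrow procedural task; no profiling.",
    assessed_on=date(2025, 6, 1),
)
assert record.registration_outstanding()  # exemption claimed, not yet registered
```

Keeping the registration reference next to the exemption rationale makes the link at the heart of the current debate explicit: if Article 49(2) is deleted, the public registration that field represents simply disappears, while the self-assessment remains.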
A Growing Call for Accountability
The breadth of the coalition, spanning consumer protection groups, digital rights organizations, and academic researchers, underscores a growing consensus that effective AI governance must be built on meaningful transparency. As EU legislators, the European Commission, and the newly established AI Office finalize the AI Omnibus, their decision on the Article 49(2) safeguard will serve as a critical indicator of the EU's resolve to uphold the integrity of its AI regulatory framework: how it will be enforced for high-risk and limited-risk systems alike, and how its transparency duties will interact with parallel requirements such as those under the GDPR.
Preserving this safeguard is not merely a procedural matter. It is a test of whether EU AI Act transparency will remain a foundational principle or become an optional standard, with lasting consequences for how AI is developed, deployed, and trusted across Europe—by providers, deployers, and ultimately natural persons affected by automated decisions.