Why Waiting for AI Standards Could Cost You: Preparing for the EU AI Act’s 2026 Deadline Now

Mónica Fernández Peñalver · November 17, 2025 · 5 min read

The EU AI Act is now law. Its deadlines are fixed, but the ecosystem around it is still evolving. While organizations wait for harmonized standards and detailed guidance, the clock keeps ticking toward 2026 — when many of the Act's obligations will become enforceable.

At Nemko Digital, we're seeing the same pattern that unfolded years ago with the Radio Equipment Directive (RED): a regulation arrives first, standards lag behind, and many companies lose valuable time waiting for clarity that never fully comes. Those that start documenting early, even before the standards are finalized, end up far ahead when conformity assessments begin.

The message is simple: you can't afford to wait.


The "Productification" of AI

For years, AI has been treated as a technology — something abstract, experimental, or confined to the digital realm. The EU AI Act changes that. It acknowledges that AI should be regulated as a product or product component.

Once an AI system becomes part of a regulated product — for example, embedded in a medical device, vehicle, or household appliance — it inherits the same rigor, documentation, and safety expectations as hardware or software components. And given the nature of the technology, the Act now extends similar expectations to certain stand-alone AI systems.

For instance, a system that categorizes individuals based on biometric features can operate independently or be embedded in a product, such as a CCTV camera. In either case, the biometric component should be regulated with the same seriousness as the brakes of a car. After all, would you want to test a car's brakes only when you're ready to drive it?

Nevertheless, even though the AI Act subjects the technology to safety requirements as rigorous as those for traditional products, it does not use the same terminology as other product regulations. Its horizontal, cross-sector approach and risk-based, scalable application can create confusion. As a result, product compliance teams — for example, those managing CCTV camera compliance — often find themselves uncertain and in need of additional guidance.


Bridging AI and Product Compliance

This is where Nemko's AI Trust Mark comes in. Leveraging our 90-year history in the testing, inspection, and certification (TIC) industry, we have designed a mark that bridges the divide between AI and product compliance — integrating AI governance with established product conformity disciplines.

Rather than leaving you to wait for harmonized standards — which are expected to arrive late, giving providers and manufacturers little time to implement corrective measures — the AI Trust Mark connects what is available today with what is expected to come.

It links the EU AI Act's legal obligations with recognized management frameworks such as ISO/IEC 42001 and NIST's AI Risk Management Framework. These frameworks already define strong governance principles — accountability, transparency, data quality, and risk management — that closely mirror the AI Act's requirements.

By aligning your products with these frameworks now, you can start collecting the evidence and documentation that will later serve as input for conformity assessments under the AI Act.

But the Trust Mark is not only about preparing for future conformity assessments. It also provides a clear and credible way to communicate the trust and rigor you have built into your product development processes. The mark is evidence of a completed audit, and it fits seamlessly into existing quality and compliance systems.

In essence, working toward the AI Trust Mark helps your teams collect the right evidence, trace design decisions, and prepare for regulatory scrutiny — all while strengthening internal trust and cross-functional coordination.


Learning from RED: The Standards Lag

The AI Act's trajectory mirrors what happened with the Radio Equipment Directive (RED). Many organizations delayed their compliance efforts until harmonized standards were published. When the standards finally arrived, they realized that implementation took far longer than expected — and conformity assessments quickly became a bottleneck.

The same pattern is emerging again. Standards will eventually be published, but not soon enough to eliminate uncertainty before the 2026 deadlines. Organizations that wait for perfect guidance will find themselves rushing to retrofit governance, documentation, and testing at the last minute.

The smarter approach is to act now — using frameworks and marks like Nemko's as bridges across this gap.


Practical Steps to Take in 2025

While standards bodies work toward harmonization, there is much you can already do to prepare for the 2026 and 2027 enforcement phases. Based on our client projects, here's what proactive organizations are implementing:

  1. Integrate AI and product compliance workflows. Treat your AI system as part of the product — not as a separate project. Build shared documentation structures that cover both domains; the sketch after this list shows one way such a structure could look.
  2. Adopt ISO/IEC 42001 principles early. Even partial implementation helps establish governance, risk management, and traceability aligned with the AI Act.
  3. Perform an AI impact assessment. Although not mandatory under the AI Act, an AI impact assessment can help identify and implement the most effective risk management system for your AI-embedded products.
  4. Conduct internal pre-assessments. Identify documentation and governance gaps now, before conformity assessments become mandatory.
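
To make the first step concrete, here is a minimal sketch of a shared, machine-readable evidence register, written in Python. The field names, file layout, and the specific mappings to EU AI Act articles and ISO/IEC 42001 clauses are illustrative assumptions rather than a prescribed Nemko or AI Act format; the point is simply that each artifact is traced to both a legal obligation and a management-framework control from day one.

```python
# evidence_register.py: illustrative sketch of a shared AI/product compliance
# evidence register. Field names and clause mappings are assumptions for
# illustration, not a prescribed AI Act or Nemko format.
from dataclasses import dataclass, asdict
import json

@dataclass
class EvidenceItem:
    evidence_id: str          # internal identifier
    description: str          # what the artifact is
    ai_act_refs: list[str]    # EU AI Act articles this evidence supports
    iso42001_refs: list[str]  # ISO/IEC 42001 clauses this evidence supports
    owner: str                # accountable team or role
    location: str             # where the artifact lives (repo, document system)
    last_reviewed: str        # ISO date of the last review

register = [
    EvidenceItem(
        evidence_id="EV-001",
        description="Risk assessment for the biometric categorization module",
        ai_act_refs=["Art. 9"],        # risk management system
        iso42001_refs=["Clause 6.1"],  # actions to address risks and opportunities
        owner="AI Governance",
        location="dms://compliance/risk/EV-001.pdf",  # hypothetical path
        last_reviewed="2025-11-01",
    ),
    EvidenceItem(
        evidence_id="EV-002",
        description="Training data quality and provenance report",
        ai_act_refs=["Art. 10"],       # data and data governance
        iso42001_refs=["Clause 8"],    # operational planning and control
        owner="Data Engineering",
        location="repo://ml-platform/docs/data_quality.md",  # hypothetical path
        last_reviewed="2025-10-15",
    ),
]

# One serialized file that both AI and product compliance teams can work from.
with open("evidence_register.json", "w") as f:
    json.dump([asdict(item) for item in register], f, indent=2)
```

Even a register this simple gives an internal pre-assessment a traceable path from each obligation to the artifact that evidences it, and it can grow alongside your product documentation instead of being retrofitted later.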

By taking these steps, you will build an evidence base that supports both your AI Act obligations and your broader quality assurance strategy.


The Payoff of Early Action

Organizations that begin structuring and capturing AI evidence in 2025 will have a dramatically smoother path in 2026. Instead of scrambling to assemble technical files, they'll already have coherent documentation and risk management systems aligned with both legal and technical expectations.

More importantly, they'll have bridged the internal gap between AI and product compliance — creating a unified compliance culture that extends beyond any single regulation.


Join the Advanced Session: "AI Trust Mark in Practice"

To help organizations take these next steps, Nemko Digital is hosting an advanced webinar, "AI Trust Mark in Practice – Turning 2026 AI Act Pressure into Action."


In this session, we'll connect the AI Trust Mark to the EU AI Act's upcoming obligations, discuss how it bridges the gap between AI and product compliance, and share lessons from real client projects. You'll also receive a practical checklist to accelerate your own assessment process.

Reserve your seat — limited spots available!

This is an advanced session designed for professionals already working on AI assurance or product compliance who want to move from awareness to concrete action.


Mónica Fernández Peñalver
Mónica has been actively involved in projects that advocate for and advance Responsible AI through research, education, and policy. Before joining Nemko, she explored the ethical, legal, and social challenges of AI fairness, focusing on the detection and mitigation of bias. She holds a master’s degree in Artificial Intelligence from Radboud University and a bachelor’s degree in Neuroscience from the University of Edinburgh.
