Dr. Pepijn van der Laan · October 20, 2025 · 4 min read

Don't Slow Down: The Pragmatic Way to Build AI Trust

Most organizations are currently in the early stages (often Stage 1 or 2) of their AI maturity journey. These stages are characterized by scattered initiatives, low organizational AI literacy, and reactive, inconsistent governance.

 

5 Stages of AI Maturity

 

According to the recently published World Economic Forum playbook for responsible AI innovation, an increasing number of companies acknowledge that they are just at the beginning of their AI maturity journey.

In most organizations, the initial push for artificial intelligence is focused on one thing: getting live. It is driven either by eager internal teams who like to tinker with new technology, or by external consultants who have promised the world to the Board.

This is understandable. You need to show impact and ROI. Even though you understand the value of a comprehensive governance framework, you can't afford to wait. However, leading organizations know that rushing straight to value without considering AI trust and ethics isn't entrepreneurial; it's reckless.

A comprehensive approach takes into account eight essential building blocks for AI success.

 

Organization Building Blocks for AI Success

 

When it comes to disruptive technology like AI, no one has the luxury to 'hit pause' and put fundamentals in place first. It is all about finding the right balance: don't slow down, but don't ignore AI Trust either.

 

The High Cost of Ignoring Trust

The AI Trust gap is aggravated by a deep lack of mutual understanding between risk professionals and ethics experts on the one hand, and tech-oriented product teams on the other.

Development teams are often so focused on the power of AI to create business value that they overlook potential downsides, risks and the need to build trust with users and stakeholders. Risk and ethics experts are often so preoccupied with everything that can go wrong that they have a hard time getting to a balanced trade-off between risk and value.

 

Compounding this is a serious expertise mismatch:
  1. Ethics experts often lack the technical knowledge and lived experience of developers, making their judgments difficult to translate into code and workable practices.
  2. Technical experts often lack the necessary background in psychology, sociology, or philosophy to grasp the full implications of what they are building.

For example, recent research by the University of Maine confirms that developers' knowledge of AI ethics is patchy at best.

This AI Trust deficiency contributes directly to high project failure rates. AI initiatives led purely by technical acceleration often underperform, exposing companies to reputational and legal damage. Or, maybe just as bad, they fail to scale beyond an initial pilot, either blocked by a mistrusting CRO or not trusted by end users.

Ultimately, this leads to wasted investments. Ignoring the fundamentals of data quality and fairness directly erodes stakeholder trust.

 

Overcoming the Critical Hurdles

Integrating AI Trust into the development lifecycle is crucial, but it faces several practical hurdles. These challenges are often less about technical difficulty and more about organizational inertia:

1. Practical Implementation: How do you translate abstract concepts like fairness and accountability into concrete, measurable development practices? Integrating these considerations into existing, fast-paced agile workflows is challenging and often seen as a roadblock. A sketch of one such measurable practice follows this list.

2. Organizational and Cultural Factors: The biggest barrier is the conflict of priorities. Development teams prioritize performance and speed, and if AI Trust doesn’t have a clear advocate or clear incentives, it’s always deprioritized. Teams don't fail on AI trust because they don't want to do the right thing; they fail because they’re given other priorities.
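
To make the first hurdle concrete, here is a minimal, hypothetical sketch of how an abstract notion like fairness can become a measurable, automatable practice: a demographic parity check written as an ordinary unit test that can run in CI. The metric choice, threshold, and group labels are illustrative assumptions on our part, not a prescribed standard; a real project would agree on metrics with its risk and ethics experts.

```python
# A minimal, hypothetical sketch: "fairness" as a measurable development
# practice via a demographic parity gate in the test suite.
# Metric, threshold, and group labels are illustrative assumptions.

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    counts = {}
    for pred, group in zip(predictions, groups):
        seen, positives = counts.get(group, (0, 0))
        counts[group] = (seen + 1, positives + (1 if pred == 1 else 0))
    rates = [positives / seen for seen, positives in counts.values()]
    return max(rates) - min(rates)

def test_fairness_gate():
    # Replace with real model outputs and protected-attribute labels.
    predictions = [1, 0, 1, 0, 1, 0, 0, 1]
    groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]
    gap = demographic_parity_gap(predictions, groups)
    assert gap <= 0.2, f"Parity gap {gap:.2f} exceeds the agreed limit"
```

Once a check like this lives in the pipeline, fairness stops being a debate held once per quarter and becomes a gate every release must pass, which is exactly what keeps it from being deprioritized.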

The key is realizing you need to build trust in such a way that it doesn't slow you down. You must bridge the gap between accelerationists and fear-mongers by embedding AI trust expertise directly into the process, rather than relying on an external, strict judge.

 

A Pragmatic Solution: Build Trust While You Fly

The business doesn't wait. While you should certainly work on your overall AI maturity, you can and must take pragmatic steps today to secure early wins and create examples of trustworthy AI.

The most effective first step is simple: Add a dedicated AI Trust expert to your Dev teams.

This person becomes the advocate for ethical considerations in agile rituals, user story refinement, and feature prioritization. They help you embed governance and quality management into your operations early and efficiently. This approach allows you to grow with confidence, matching your pace without sacrificing safety.

As your AI Trust partner, we can structure your project to address AI Trust across the entire lifecycle and empower the team to find the right balance between cavalier speed and excessive caution:

 

Focus areas per lifecycle stage:

Requirements: Define regulatory context, success factors, potential risks, and non-negotiables.
Development: Integrate trust into user story refinement, feature prioritization, and testing/red teaming.
Deployment: Provide clear deployer guidance and user enablement; conduct conformity assessments.
Operation: Establish processes for regulatory monitoring, track AI Trust performance (see the sketch below), and schedule periodic audits.
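
The Operation row can feel abstract, so here is a hypothetical sketch of what tracking AI Trust performance might look like in practice: a periodic review that compares a live fairness metric against a baseline agreed at deployment and escalates when drift exceeds a margin. The metric, baseline, and margin values are illustrative assumptions, not a prescribed implementation.

```python
# A hypothetical sketch of the Operation stage: tracking an AI Trust metric
# over time and escalating when it drifts past a deployment-time baseline.
# Baseline and margin values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TrustMetricReading:
    period: str        # reporting period, e.g. "2025-09"
    parity_gap: float  # fairness metric measured on live traffic

BASELINE_GAP = 0.10  # assumed value signed off at deployment
DRIFT_MARGIN = 0.05  # assumed tolerated degradation before escalation

def review(readings: list[TrustMetricReading]) -> list[str]:
    """Return escalation notices for periods that breach the agreed margin."""
    notices = []
    for r in readings:
        if r.parity_gap > BASELINE_GAP + DRIFT_MARGIN:
            notices.append(
                f"{r.period}: parity gap {r.parity_gap:.2f} exceeds baseline "
                f"{BASELINE_GAP:.2f} plus margin; schedule an audit"
            )
    return notices

if __name__ == "__main__":
    history = [TrustMetricReading("2025-08", 0.11),
               TrustMetricReading("2025-09", 0.17)]
    for notice in review(history):
        print(notice)
```

The point is not this specific metric: it is that "track AI Trust performance" becomes a routine, auditable check rather than a scramble before each audit.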

 

Structured support where it matters most

 

The payoff is significant and immediate: reduced risks, continuous improvement, resource efficiency, and confident teams. You can create the business value you need without compromising your reputation or running afoul of the constantly evolving regulatory landscape.

 

Ready to Operationalize AI Trust?

If you're ready to stop guessing and start proactively integrating AI trust, governance, and quality management into your operations, our team can provide the practical tools and expert insight that match your pace.

Join us on October 30th for our webinar, where we'll explore a real-world case in the Education sector and show how building AI Trust can accelerate, not slow down, your innovation.

Reach out today to learn how to move past the hurdles and build an AI future you can trust.


 

Dr. Pepijn van der Laan
Global Technical Director, AI Governance | Nemko Group
With two decades of experience at the intersection of AI, strategy, and compliance, Pep has led groundbreaking work in AI tooling, model risk governance, and GenAI deployment. Previously Director of AI & Data at Deloitte, he has advised multinational organizations on scaling trustworthy AI, from procurement chatbots to enterprise-wide model oversight frameworks.
