Nemko Digital Insights

Bridging AI Standards & Fundamental Rights: Building Trust in AI Systems

Written by Alicja Halbryt | February 2, 2026

Standards, Trust, and Rights in AI

The governance landscape for artificial intelligence is evolving rapidly. As AI systems increasingly affect people's lives, opportunities, and freedoms, regulations such as the European Union's Artificial Intelligence Act (AI Act) signal a clear shift toward outcome-oriented oversight that emphasises trustworthy AI and respect for fundamental rights. Following the United Nations High Commissioner for Human Rights' report on human rights and technical standard-setting processes, organisations developing or deploying AI are now expected to demonstrate not only technical robustness and regulatory compliance, but also that their systems operate fairly, transparently, and with respect for rights in real-world contexts.

Technical standards play a central role in this transformation. They help translate high-level regulatory and policy expectations into operational requirements, offering structured approaches to risk management, documentation, transparency, accuracy, and conformity assessment. For many organisations, standards are the primary mechanism through which responsible AI practices are implemented at scale, across products, teams, and markets.

At the same time, a structural tension is becoming increasingly visible. AI standards help organisations structure responsible AI practices, but fundamental rights, the EU-law expression of internationally recognised human rights whose protection is an explicit objective of the EU AI Act, are difficult to translate fully into standardised requirements. Yet respecting those rights is essential to building and maintaining trust in AI.


Why Fundamental Rights Are Hard to Encode in Technical Standards

Technical standards are indispensable governance tools, but they face inherent limitations when it comes to capturing and safeguarding fundamental rights.

Figure 1: Technical standards and fundamental rights differ across three dimensions: context (general vs. specific), assessment approach (measurable vs. normative), and temporal nature (static vs. dynamic). These structural gaps explain why standards alone cannot fully safeguard rights.


First, fundamental rights are highly context-dependent. Whether an AI system interferes with privacy, equality, freedom of expression, or human dignity depends on where, how, and on whom it is used. A facial recognition system deployed for access control in a private workplace raises different rights considerations than the same technology used in public surveillance. Standards, on the other hand, are necessarily general and abstract; they must apply across sectors and use cases, which limits their ability to reflect contextual nuance.

Second, standards prioritise auditable objectivity. To support conformity assessment, they rely on measurable, repeatable criteria. Fundamental rights, however, often involve normative judgement — questions of proportionality, fairness, necessity, and social acceptability that cannot always be reduced to technical thresholds. This tension has been widely recognised in analyses of AI regulation through standardisation.

Third, impact on rights often emerges dynamically. Many harms do not become visible at the design stage but arise through deployment, interaction, and scale. Static technical requirements struggle to capture these evolving effects, particularly where systems adapt or are repurposed over time.

Together, these factors explain why standards alone cannot fully guarantee respect for fundamental rights, even though they remain essential building blocks for AI governance.


What Technical Standards Still Do Well

Acknowledging these limitations does not diminish the value of standards — quite the opposite.

Technical standards:

  • Provide structure and consistency for responsible AI practices across organisations and markets
  • Establish a common language between developers, deployers, auditors, and regulators
  • Enable conformity assessment, which is a cornerstone of the EU AI Act's enforcement model
  • Influence system design choices around transparency, accuracy, robustness, and documentation

Standards can significantly shape how AI systems are built and operated, even if they cannot resolve all rights-related questions on their own. Inclusive and multi-stakeholder standardisation processes are particularly important. The Freedom Online Coalition has emphasised that technical standards should be developed with meaningful input from civil society and human rights experts to ensure they support, rather than undermine, rights protection.


Progress in Integrating Fundamental Rights into AI Standards

There are clear signs that the standardisation ecosystem is moving in the right direction.

In January 2026, CEN and CENELEC signed a Memorandum of Understanding with the EU Agency for Fundamental Rights (FRA). The agreement establishes cooperation within Joint Technical Committee 21 (JTC 21), which is responsible for developing AI standards in support of the EU AI Act. Through this collaboration, FRA contributes its expertise on fundamental rights to inform the development of AI standards.

This initiative reflects growing recognition that standards need to be informed by rights expertise, and that technical and legal perspectives must be better connected. While this does not eliminate the structural limitations discussed above, it strengthens standards as a foundation for trustworthy AI.


Why the Gap Between Standards and Rights Matters for AI Trust

Trust in AI is not created by documentation alone. It is shaped by outcomes, experiences, and perceptions.

Even when systems are technically compliant, trust can erode if people experience discrimination or opaque decision-making, or if they lack effective avenues for challenge and redress. The EU Agency for Fundamental Rights has consistently stressed that rights impacts must be assessed in practice, not assumed away through compliance artefacts.

International policy discussions similarly emphasise that technical standards should support, not replace, ongoing human and fundamental rights due diligence across the AI lifecycle.


The Risks of Over-Reliance on Standards

When organisations treat standards as a proxy for trust, several risks arise:

  • False assurance, where certification is mistaken for ethical acceptability
  • Regulatory exposure, as authorities increasingly focus on real-world impacts
  • Reputational damage, when harms occur despite formal compliance
  • Erosion of stakeholder trust, particularly among affected communities


From Compliance to Practice: What Organisations Actually Do

Organisations that successfully bridge standards and rights typically go beyond technical compliance by embedding rights considerations into governance processes.

In practice, this includes:

  • Conducting context-specific AI impact assessments
  • Assigning clear ownership for rights-related risks
  • Monitoring systems post-deployment
  • Establishing escalation and remediation mechanisms when harms occur

Rather than treating these steps as stand-alone compliance exercises, leading organisations integrate them into existing risk management processes, so that impact assessments, clear ownership, and post-deployment monitoring become part of routine governance. The sketch below illustrates, in simplified form, what such post-deployment monitoring could look like.
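As a purely illustrative sketch, and not a requirement drawn from any standard or from the AI Act, the short Python example below shows one way a team might check post-deployment decision logs for large outcome gaps between groups and escalate them for human rights review. The group labels, the ten-percentage-point threshold, and the data format are hypothetical assumptions.

    # Illustrative sketch only: a minimal post-deployment monitoring check that
    # compares approval rates across groups and flags large gaps for human review.
    # The threshold, group labels, and escalation message are hypothetical examples.

    from collections import defaultdict

    REVIEW_THRESHOLD = 0.10  # hypothetical approval-rate gap that triggers review


    def approval_rates(decisions):
        """Compute the approval rate per group from (group, approved) records."""
        totals, approvals = defaultdict(int), defaultdict(int)
        for group, approved in decisions:
            totals[group] += 1
            if approved:
                approvals[group] += 1
        return {group: approvals[group] / totals[group] for group in totals}


    def flag_for_review(decisions):
        """Return findings for groups falling far behind the best-served group."""
        rates = approval_rates(decisions)
        best = max(rates.values())
        return [
            f"Group '{group}': approval rate {rate:.0%} vs. best {best:.0%} - escalate for rights review"
            for group, rate in rates.items()
            if best - rate > REVIEW_THRESHOLD
        ]


    if __name__ == "__main__":
        # Hypothetical decision log: (self-reported group, system decision)
        log = [("A", True), ("A", True), ("A", False), ("B", True), ("B", False), ("B", False)]
        for finding in flag_for_review(log):
            print(finding)

In a real deployment, the decision log, the group definitions, and the escalation route would come from the organisation's own impact assessment and the remediation mechanisms described above, and any flagged disparity would be reviewed by people with the mandate to act on it.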


Conclusion

AI standards are indispensable. They structure responsible AI practices, enable regulatory compliance, and provide a shared foundation for governance. But they are not, and cannot be, a substitute for active engagement with fundamental rights.

Organisations that treat standards as a starting point — and complement them with practical, context-aware governance — are better positioned to manage risk, comply with evolving regulation, and, ultimately, earn and sustain trust in AI systems.