
ISO/IEC TR 24028:2020
This technical report surveys approaches to establishing and assessing the trustworthiness of AI systems.
Explore how ISO/IEC TR 24028 frames AI trustworthiness. The technical report guides organizations in establishing reliable, ethical, and compliant AI systems, fostering stakeholder trust and innovation. Embrace transparency, risk management, and stakeholder engagement to ensure AI systems that are both powerful and responsible.
Information technology and artificial intelligence have become deeply integrated into our daily lives and business operations. As AI systems take on increasingly critical roles across industries, understanding AI system vulnerabilities and the need for trustworthy AI has never been more important. ISO/IEC TR 24028, published in May 2020, provides a comprehensive framework for understanding and implementing trustworthiness in AI systems, highlighting both existing approaches and their potential applications.
Understanding ISO/IEC TR 24028
ISO/IEC TR 24028, titled "Information technology – Artificial intelligence – Overview of trustworthiness in artificial intelligence," is a technical report that thoroughly analyzes factors impacting the trustworthiness of AI systems. It serves as a crucial resource for organizations seeking to develop and deploy reliable AI technologies. The report was developed within the joint technical committee ISO/IEC JTC 1, specifically its subcommittee SC 42 on artificial intelligence, which keeps it aligned with related AI standards work.

The standard defines trustworthiness as "the ability to meet stakeholders' expectations in a verifiable way." This definition spans diverse AI systems, technologies, and application domains in today's market. By establishing this foundation, ISO/IEC TR 24028 provides a common language for discussing and evaluating AI trustworthiness across different contexts, bridging established practices with emerging approaches and newer business applications.
The technical report surveys various approaches to establishing trust in AI systems through key principles (sketched as a simple checklist after this list), including:
- Transparency in system operations
- Explainability of AI decisions
- Controllability of AI behaviors
- Engineering safeguards against potential threats
- Assessment methodologies for availability, resiliency, reliability, accuracy, safety, security, and privacy
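To make these dimensions concrete, the sketch below represents each characteristic as an entry in a simple assessment checklist that later process steps can iterate over. This is a hypothetical illustration; the characteristic names follow the list above, but the data structure and evidence fields are assumptions, not part of the technical report.

```python
# Minimal sketch: the TR 24028 trustworthiness characteristics as a checklist.
# The CharacteristicAssessment structure and its fields are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class CharacteristicAssessment:
    name: str                                          # e.g. "reliability"
    evidence: list[str] = field(default_factory=list)  # test reports, documentation
    assessed: bool = False

CHARACTERISTICS = [
    "transparency", "explainability", "controllability",
    "availability", "resiliency", "reliability",
    "accuracy", "safety", "security", "privacy",
]

def new_assessment_plan() -> list[CharacteristicAssessment]:
    """Create an empty assessment plan covering every characteristic."""
    return [CharacteristicAssessment(name) for name in CHARACTERISTICS]

if __name__ == "__main__":
    for item in new_assessment_plan():
        print(f"[ ] {item.name}: evidence={item.evidence}")
```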
The Growing Importance of AI Trustworthiness in 2025
As we move through 2025, our dependence on artificial intelligence continues to accelerate across sectors. From healthcare diagnostics to financial services, transportation systems to critical infrastructure, AI systems are making decisions that directly impact human lives and societal well-being. This increasing reliance makes trustworthiness not just a technical consideration but a fundamental business and ethical imperative, especially where AI augments human judgment.
Organizations implementing AI systems must now navigate complex regulatory landscapes that have evolved significantly since ISO/IEC TR 24028's initial publication. Regulatory frameworks like the EU AI Act have moved from proposal to implementation, creating concrete legal requirements for AI trustworthiness. In this environment, ISO/IEC TR 24028 serves as an invaluable guide for organizations seeking to align their AI development practices with both regulatory requirements and stakeholder expectations, supported by ongoing work in ISO/IEC JTC 1/SC 42.
The standard's guidance also complements risk-based frameworks such as the NIST AI Risk Management Framework, which has become particularly relevant as organizations implement comprehensive risk management for their AI systems. By addressing both the technical and governance aspects of AI trustworthiness, ISO/IEC TR 24028 helps organizations build AI systems that are not only technically sound but also ethically responsible and legally compliant.
Key Components of AI Trustworthiness
ISO/IEC TR 24028 identifies several critical dimensions of AI trustworthiness that organizations must address:
Transparency and Explainability
Transparency in AI systems involves clear documentation and communication about data quality, storage, handling procedures, design choices, and the parties involved in system development. The standard emphasizes that thorough documentation is essential for establishing trustworthiness. Such documentation also feeds into future standards work and builds trust across the broader standards community.
For AI systems based on machine learning, transparency also extends to the fairness of the system's outcomes. This is particularly important given the stochastic nature of many AI systems, where outcomes may not be fully deterministic. Organizations can further strengthen their approach to AI transparency by implementing ISO/IEC 23053, which provides a framework for AI systems that use machine learning, and by following established best practices for AI system architecture.
Explainability refers to the ability to provide understandable explanations for AI decisions and recommendations. According to a recent study by MIT Technology Review, organizations that prioritize explainable AI see 34% higher user adoption rates and 28% greater stakeholder trust in their AI systems.
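As one illustration of explainability in practice, the sketch below computes permutation feature importance for a fitted classifier. This is only one common technique, chosen here as an example; ISO/IEC TR 24028 surveys explainability approaches but does not prescribe this method, and the dataset and model are placeholders.

```python
# Minimal sketch of one explainability technique: permutation feature importance.
# The dataset and model are illustrative; substitute your own.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_iris(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the model's score drops:
# a larger drop means the feature matters more to the model's decisions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")
```

Explanations like these can then be summarized in user-facing documentation so that affected stakeholders understand which factors drive a decision.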
Risk Assessment and Mitigation
ISO/IEC TR 24028 provides guidance on identifying and addressing potential vulnerabilities in AI systems. This includes systematic approaches to risk assessment, techniques for mitigating identified risks, methods for ensuring system resilience, and approaches to maintaining system security and privacy.
The standard emphasizes that risk preparedness must address not only technical security issues but also the broader impacts that AI systems may have on users, societies, and the environment. Responsible AI development must account for these potential impacts and treat resiliency as a core design principle.
According to the National Institute of Standards and Technology (NIST), organizations that implement comprehensive AI risk management frameworks experience 42% fewer critical AI incidents and 37% faster recovery times when incidents do occur.
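A simple way to start such a program is a risk register with likelihood and impact scoring, as sketched below. The risk entries and the 1-5 scales are illustrative assumptions; the technical report describes vulnerabilities and mitigation approaches but does not define this particular scoring scheme.

```python
# Minimal sketch of an AI risk register with likelihood x impact scoring.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain) - illustrative scale
    impact: int       # 1 (negligible) .. 5 (severe)   - illustrative scale
    mitigation: str

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("Training data drift degrades accuracy", 4, 3, "Scheduled drift monitoring"),
    Risk("Adversarial inputs bypass content filter", 2, 5, "Input validation and red-teaming"),
    Risk("Unexplained decisions erode user trust", 3, 3, "Deploy explanation tooling"),
]

# Review the highest-scoring risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:2d}] {risk.description} -> {risk.mitigation}")
```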
Reliability and Accuracy
The reliability and accuracy of AI systems are fundamental to their trustworthiness. ISO/IEC TR 24028 discusses approaches to ensuring consistent performance across different operating conditions, maintaining accuracy in the face of novel or unexpected inputs, establishing appropriate confidence levels for AI outputs, and implementing robust testing and validation procedures.
Organizations must establish clear metrics for measuring reliability and accuracy, along with processes for continuous monitoring and improvement. This is particularly important for AI systems operating in critical domains where errors could have significant consequences.
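One lightweight way to operationalize continuous monitoring is to track recent prediction outcomes in a sliding window and flag the system when accuracy falls below a threshold, as in the sketch below. The window size and threshold are illustrative assumptions, not values from the standard.

```python
# Minimal sketch of continuous accuracy monitoring with a sliding window.
from collections import deque

class AccuracyMonitor:
    def __init__(self, window: int = 500, threshold: float = 0.95):
        self.outcomes = deque(maxlen=window)   # 1 = correct, 0 = incorrect
        self.threshold = threshold             # illustrative tolerance

    def record(self, prediction, actual) -> None:
        self.outcomes.append(1 if prediction == actual else 0)

    def healthy(self) -> bool:
        if not self.outcomes:
            return True                         # no evidence collected yet
        return sum(self.outcomes) / len(self.outcomes) >= self.threshold

monitor = AccuracyMonitor(window=100, threshold=0.9)
monitor.record("cat", "cat")
monitor.record("dog", "cat")
print("within tolerance" if monitor.healthy() else "accuracy below threshold - investigate")
```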
Implementing ISO/IEC TR 24028 in Your Organization
Adopting ISO/IEC TR 24028 requires a systematic approach that spans the entire AI development lifecycle. Here are key steps organizations can take:
1. Establish a Trustworthiness Framework
Begin by creating a comprehensive framework for AI trustworthiness that aligns with your organization's values, risk tolerance, and regulatory requirements. This framework should define clear roles and responsibilities for trustworthiness, establish governance structures for oversight, specify processes for risk assessment and mitigation, and outline approaches to transparency and documentation.
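One way to make such a framework operational is to capture it as versioned, reviewable configuration. The sketch below is a hypothetical example for one organization; the role names, risk tolerance levels, review cadence, and artifact list are assumptions, not requirements from the technical report.

```python
# Minimal sketch of a trustworthiness framework captured as configuration.
TRUSTWORTHINESS_FRAMEWORK = {
    "governance": {
        "owner": "AI Governance Board",
        "model_risk_approver": "Chief Risk Officer",
        "review_cadence_months": 6,
    },
    "risk_tolerance": {
        "safety_critical_use_cases": "low",      # e.g. medical triage support
        "internal_productivity_tools": "medium",
    },
    "required_artifacts": [
        "data provenance record",
        "model documentation (intended use, limitations)",
        "risk assessment report",
        "explainability summary for affected users",
    ],
}

for artifact in TRUSTWORTHINESS_FRAMEWORK["required_artifacts"]:
    print(f"required before deployment: {artifact}")
```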
2. Implement Technical Safeguards
Based on the guidance in ISO/IEC TR 24028, implement technical safeguards to enhance the trustworthiness of your AI systems: adopt robust testing methodologies for reliability and accuracy, implement explainability techniques appropriate to your use cases, establish monitoring systems to detect and address potential issues, and design for resilience and graceful degradation.
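As a small example of the last safeguard, the sketch below routes low-confidence predictions to human review instead of acting on them automatically. The threshold and the predict_with_confidence helper are hypothetical placeholders for your own model interface.

```python
# Minimal sketch of graceful degradation via a confidence threshold.
from typing import Tuple

CONFIDENCE_THRESHOLD = 0.8   # illustrative value; tune per use case and risk level

def predict_with_confidence(features) -> Tuple[str, float]:
    """Placeholder for a real model call returning (label, confidence)."""
    return "approve", 0.62

def decide(features) -> str:
    label, confidence = predict_with_confidence(features)
    if confidence < CONFIDENCE_THRESHOLD:
        return "escalate_to_human_review"   # degrade gracefully rather than guess
    return label

print(decide({"amount": 120, "account_age_days": 30}))
```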
3. Develop Stakeholder Engagement Processes
Trustworthiness ultimately depends on meeting stakeholder expectations. Develop processes for identifying and understanding stakeholder needs and concerns, communicating transparently about AI capabilities and limitations, gathering and responding to feedback, and demonstrating accountability when issues arise.
4. Conduct Regular Assessments
ISO/IEC TR 24028 emphasizes the importance of ongoing assessment. Implement regular audits and reviews to evaluate compliance with your trustworthiness framework, identify emerging risks or concerns, assess the effectiveness of existing safeguards, and update approaches based on evolving best practices.
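A recurring assessment can be as simple as checking that each required artifact has been reviewed within an allowed interval, as sketched below. The artifact names, dates, and the 180-day interval are illustrative assumptions.

```python
# Minimal sketch of a recurring assessment check for review freshness.
from datetime import date, timedelta

MAX_REVIEW_AGE = timedelta(days=180)   # illustrative review interval

last_reviewed = {
    "risk assessment report": date(2025, 1, 10),
    "model documentation": date(2024, 6, 2),
    "explainability summary": date(2025, 3, 22),
}

today = date(2025, 7, 1)   # in practice: date.today()
for artifact, reviewed_on in last_reviewed.items():
    overdue = (today - reviewed_on) > MAX_REVIEW_AGE
    status = "OVERDUE" if overdue else "ok"
    print(f"{artifact}: last reviewed {reviewed_on} [{status}]")
```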
Benefits of Adopting ISO/IEC TR 24028
Organizations that effectively implement ISO/IEC TR 24028 can realize significant benefits:
Enhanced Stakeholder Trust
By demonstrating a commitment to trustworthy AI, organizations can build stronger relationships with customers, partners, employees, and regulators. This trust translates into competitive advantage in markets where AI ethics and responsibility are increasingly important differentiators.
Improved Risk Management
The standard's comprehensive approach to risk assessment and mitigation helps organizations identify and address potential issues before they become critical problems. This proactive stance can prevent costly incidents and reputational damage, and it helps close the gaps in current practice that the technical report identifies.
Regulatory Readiness
As AI regulations continue to evolve globally, organizations that align with ISO/IEC TR 24028 are better positioned to meet emerging compliance requirements. The standard's principles are consistent with the direction of regulatory frameworks worldwide.
Innovation Enablement
Rather than constraining innovation, trustworthiness creates the foundation for sustainable AI advancement. By addressing key concerns around reliability, safety, and ethics, organizations can pursue more ambitious AI applications with greater confidence.
The Future of AI Trustworthiness
As we look beyond 2025, the importance of AI trustworthiness will only continue to grow. Emerging technologies like autonomous systems, general-purpose AI, and human-AI collaboration tools will raise new questions and challenges for trustworthiness, underscoring the need for further standards work.
ISO/IEC TR 24028 provides a flexible framework that can evolve with these technological developments. By focusing on fundamental principles rather than specific technologies, the standard offers enduring guidance even as AI capabilities advance.
Organizations that make trustworthiness a core element of their AI strategy today will be better positioned to navigate the opportunities and challenges of tomorrow's AI landscape. By building on the foundation provided by ISO/IEC TR 24028, they can create AI systems that not only deliver technical performance but also earn and maintain the trust of all stakeholders.
Future Implications
ISO/IEC TR 24028 represents a significant milestone in the development of trustworthy AI. By providing a comprehensive survey of approaches to trustworthiness and a framework for assessment and improvement, the standard helps organizations navigate the complex challenges of responsible AI development.
As AI becomes increasingly integrated into critical systems and decision processes, trustworthiness is no longer optional—it's essential. Organizations that embrace the principles outlined in ISO/IEC TR 24028 will be better equipped to build AI systems that are not only powerful and effective but also worthy of the trust placed in them.
By prioritizing transparency, risk preparedness, reliability, and stakeholder engagement, organizations can create AI systems that deliver value while respecting important ethical principles and societal values. In doing so, they contribute to building a future where AI serves as a positive force for innovation and human flourishing.