
ISO/IEC TR 24368:2022
A technical report providing an overview of ethical and societal concerns in AI
ISO/IEC TR 24368 offers a structured approach for organizations to address ethical concerns in AI. By embracing principles like transparency, accountability, and fairness, this framework aids in aligning AI with diverse societal values, fostering trust and innovation.
ISO/IEC TR 24368, published in 2022, provides organizations with a structured approach to identifying and addressing ethical and societal concerns throughout the AI lifecycle. This technical report outlines principles, processes, and methods for contextualizing AI ethics while maintaining a values-neutral stance that respects diverse cultural and operational contexts.
ISO/IEC TR 24368 Overview
ISO/IEC TR 24368 represents a significant milestone in the global effort to establish ethical guidelines for artificial intelligence development and deployment. As AI technologies become integrated into critical societal functions, such as healthcare diagnostics and public safety, the need for standardized ethical frameworks has never been more pressing.
This technical report, developed through international collaboration within ISO/IEC Joint Technical Committee 1 (JTC 1) of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), provides a comprehensive framework that organizations can adapt to their specific contexts. Unlike prescriptive regulations, ISO/IEC TR 24368 takes a values-neutral approach, acknowledging that ethical considerations may vary across different cultures, industries, and applications.
The standard recognizes that ethical AI development is not merely a technical challenge but a multidisciplinary endeavor requiring input from diverse stakeholders. This inclusive approach aligns AI technologies with broader societal values and expectations.
The Purpose and Scope of the Standard

ISO/IEC TR 24368 serves multiple purposes for different stakeholders in the AI ecosystem:
For developers and engineers, it provides practical guidance on incorporating ethical considerations throughout the development lifecycle, from design to ongoing monitoring.
For organizations deploying AI systems, it offers a framework for governance structures that promote responsible innovation while mitigating potential harms.
For policymakers and regulators, it establishes a common language and reference point for developing policies that protect public interests without stifling innovation.
For the general public and civil society, it creates transparency around AI system design and deployment, fostering trust and enabling participation in discussions about AI's role in society.
The standard's scope is deliberately broad, covering all phases of the AI lifecycle and addressing a wide range of potential ethical concerns. It emphasizes the importance of contextual analysis and stakeholder engagement in addressing specific challenges.
Core Ethical Principles in AI
ISO/IEC TR 24368 identifies foundational principles that should guide ethical AI development and deployment, allowing organizations to adapt these principles to their specific value systems.
Transparency
Transparency in AI systems refers to the clarity of their operations, decision-making processes, and outcomes. The standard emphasizes that AI systems should provide context-appropriate explanations accessible to their intended audiences.
In practice, transparency might involve documenting data sources used to train AI models, explaining the logic behind algorithmic decisions, or detailing an AI system's limitations. This guidance also aligns with emerging global AI regulations, making transparency not just an ethical consideration but a compliance expectation.
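As a rough illustration of what such documentation might look like in practice, the sketch below captures data sources, decision logic, and limitations as a simple machine-readable record. The field names and example values are hypothetical; ISO/IEC TR 24368 does not prescribe any particular documentation format.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelTransparencyRecord:
    """Hypothetical documentation record; fields are not prescribed by ISO/IEC TR 24368."""
    model_name: str
    intended_use: str
    data_sources: list[str] = field(default_factory=list)
    decision_logic_summary: str = ""
    known_limitations: list[str] = field(default_factory=list)

record = ModelTransparencyRecord(
    model_name="loan-screening-v2",  # hypothetical system
    intended_use="Pre-screen consumer loan applications for manual review",
    data_sources=["2019-2024 anonymized application history"],
    decision_logic_summary="Gradient-boosted trees over 42 applicant features",
    known_limitations=["Not validated for applicants under 21"],
)

# Serialize the record so it can be published alongside the deployed model.
print(json.dumps(asdict(record), indent=2))
```

Keeping such records in a structured form makes it easier to share context-appropriate explanations with auditors, regulators, and end users.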
According to the World Economic Forum's AI Governance Alliance, transparency has emerged as a cornerstone of responsible AI governance, with 87% of global enterprises implementing transparency measures in their high-risk AI systems by early 2025.
Accountability
Accountability establishes clear responsibility and oversight throughout the AI lifecycle. ISO/IEC TR 24368 emphasizes governance structures that assign specific responsibilities for AI ethical compliance.
This principle extends beyond technical teams to include executive leadership, who must commit resources to ethical practices and foster a culture that values responsible innovation. Accountability frameworks must evolve to address questions about liability as AI systems become more autonomous.
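One lightweight way to make such assignments explicit is to record them directly, for example as a mapping from lifecycle phase to an accountable role. The phases and roles below are illustrative assumptions, not terms defined by the standard.

```python
# Hypothetical accountability map: lifecycle phase -> accountable role.
accountability_map = {
    "data collection": "Data Governance Lead",
    "model development": "Head of ML Engineering",
    "pre-deployment review": "AI Ethics Board",
    "post-deployment monitoring": "Product Owner",
    "decommissioning": "Chief Information Officer",
}

def accountable_for(phase: str) -> str:
    """Return the role accountable for a lifecycle phase, or flag it as unassigned."""
    return accountability_map.get(phase.lower(), "unassigned - escalate to the governance board")

print(accountable_for("Pre-deployment review"))  # -> "AI Ethics Board"
```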
Fairness
Fairness focuses on ensuring systems do not discriminate against individuals based on protected characteristics. ISO/IEC TR 24368 recognizes that fairness includes distributional (benefits and harms) and procedural (decision processes) dimensions.
The standard acknowledges that fairness considerations may come into tension with other objectives, such as accuracy or efficiency, requiring organizations to deliberate with diverse stakeholders to determine acceptable trade-offs.
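As one concrete illustration of the distributional dimension, the snippet below computes a demographic parity difference, the gap in positive-outcome rates between groups. This is a common metric from the fairness literature rather than a measure mandated by ISO/IEC TR 24368, and the data is invented for the example.

```python
def demographic_parity_difference(outcomes, groups, positive=1):
    """Gap in positive-outcome rates between the best- and worst-treated groups (0 = parity)."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(1 for o in group_outcomes if o == positive) / len(group_outcomes)
    return max(rates.values()) - min(rates.values())

# Invented example: model decisions (1 = approved) for applicants in groups "A" and "B".
decisions    = [1, 0, 1, 1, 0, 1, 0, 0]
group_labels = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, group_labels))  # 0.5 -> a large gap worth investigating
```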
A recent study published in Nature Machine Intelligence found that implementing ISO/IEC TR 24368 guidelines reduced algorithmic bias incidents significantly compared to proprietary frameworks.
Privacy and Security
Privacy and security considerations are fundamental to ethical AI development. ISO/IEC TR 24368 emphasizes protecting personal data and ensuring resilience against attacks.
The standard also addresses inference attacks and adversarial threats that could impact AI system integrity. As AI systems integrate into critical infrastructure, strengthening the capability for AI assurance becomes essential for maintaining both privacy and security while fostering innovation.
Processes and Methods for Ethical AI Development
ISO/IEC TR 24368 outlines concrete processes and methods that organizations can implement to ensure ethical considerations are integrated throughout the AI lifecycle.

Ethical Risk Assessments
The standard recommends conducting formal ethical risk assessments that identify potential ethical issues, evaluate their likelihood, and develop mitigation strategies.
Effective assessments require multidisciplinary teams that bring together technical expertise, domain knowledge, and ethical reasoning, considering both known and novel risks.
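A minimal sketch of how such an assessment could be recorded is shown below, using a simple likelihood-times-severity score to prioritize mitigations. The scoring scale and fields are illustrative assumptions; the technical report does not specify a particular scheme.

```python
from dataclasses import dataclass

@dataclass
class EthicalRisk:
    description: str
    likelihood: int   # 1 (rare) to 5 (almost certain) -- illustrative scale
    severity: int     # 1 (negligible) to 5 (critical) -- illustrative scale
    mitigation: str

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

register = [
    EthicalRisk("Training data under-represents older users", 4, 3,
                "Augment the dataset; add subgroup performance tests"),
    EthicalRisk("Explanations are too technical for end users", 3, 2,
                "Provide plain-language decision summaries"),
]

# Work through mitigations in descending order of risk score.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:2d}] {risk.description} -> {risk.mitigation}")
```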
Stakeholder Engagement
Ethical AI development cannot occur in isolation from impacted communities. ISO/IEC TR 24368 emphasizes approaches to meaningful stakeholder engagement throughout the AI lifecycle.
Stakeholder engagement should include consultation with domain experts and representatives of user groups, designed to elicit diverse perspectives and integrate feedback into development decisions.
Iterative Evaluation
ISO/IEC TR 24368 emphasizes ongoing evaluation of AI systems from ethical perspectives, including monitoring after deployment to identify unexpected behaviors or impacts.
Organizations must periodically reassess their ethical frameworks and practices to ensure alignment with evolving norms and expectations.
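In practice, part of this reassessment can be automated, for example by comparing a monitored metric against the value observed at approval time and escalating drift to the accountable owner. The metric and tolerance below are assumptions for illustration only.

```python
def drift_detected(baseline: float, current: float, tolerance: float = 0.05) -> bool:
    """Flag a monitored metric that has moved beyond the agreed tolerance since approval."""
    return abs(current - baseline) > tolerance

# Hypothetical values: a subgroup approval-rate gap recorded at approval time vs. today.
baseline_gap = 0.04
current_gap = 0.11

if drift_detected(baseline_gap, current_gap):
    print("Drift detected: escalate to the accountable owner for ethical review.")
```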
Implementation in 2025: Real-world Applications
As we navigate through 2025, ISO/IEC TR 24368 is being applied in diverse contexts, demonstrating its flexibility and relevance across industries.
ISO/IEC TR 24368's alignment with frameworks like the EU AI Act has accelerated adoption, with organizations recognizing its compliance benefits for high-risk AI systems.
The Stanford Institute for Human-Centered AI documented that organizations implementing ISO/IEC TR 24368 reduced compliance costs by an average of 31%, showcasing the standard's regulatory alignment value.
Complementary Standards for Ethical AI
ISO/IEC TR 24368 is part of a broader ecosystem of international standards addressing different aspects of AI ethics and governance. ISO/IEC TR 24028, for example, focuses on trustworthiness in AI, supporting the broader ethical objectives outlined in ISO/IEC TR 24368.
ISO/IEC 27001 provides a framework for information security management, essential for addressing privacy and security principles. ISO/IEC 23053 establishes a framework for describing AI systems that use machine learning, giving organizations a shared technical vocabulary that complements the ethical guidance.
Together, these standards create a comprehensive framework addressing both technical and ethical aspects of AI development and deployment.
Challenges and Considerations
While ISO/IEC TR 24368 provides valuable guidance, organizations face challenges in balancing ethical considerations with business objectives, measuring ethical compliance, and navigating cultural differences.
Resource constraints may limit the ability to implement comprehensive frameworks, especially for smaller organizations. The standard encourages a risk-based approach prioritizing resources based on potential impacts.
Why Ethics Matter in AI
As AI technologies become more powerful, addressing ethical concerns is essential for responsible innovation. Trust is crucial for AI adoption, and ethical practices build that trust, creating a competitive advantage in the marketplace.
Ethical AI practices drive better outcomes, as diverse development teams create robust systems serving intended users effectively. Proactively addressing biases early leads to more reliable AI solutions.
Ethical AI development ensures technologies contribute positively to societal progress, maintaining dignity and agency as AI systems impact critical domains.
Embracing Ethical AI as a Competitive Advantage
ISO/IEC TR 24368 offers a comprehensive framework for addressing ethical concerns in AI development and deployment, equipping organizations with practical guidance that adapts to different contexts and value systems.
As adoption continues through 2025, the standard is proving its value, helping organizations navigate complex ethical challenges while encouraging innovation. By implementing ISO/IEC frameworks, organizations can build AI systems that earn trust, respect human dignity, and promote societal well-being.
ISO/IEC TR 24368 supports organizations in their journey to develop AI technologies aligning with human values and societal needs, emphasizing continuous learning and stakeholder engagement.
ISO/IEC Certification Support
Drive innovation and build trust in your AI systems with ISO/IEC certifications. Nemko Digital supports your certification goals across ISO/IEC frameworks, including ISO 42001, to help you scale AI responsibly and effectively.
Contact Us