
ISO/IEC TR 24368:2022
A technical report providing an overview of ethical and societal concerns for AI
ISO/IEC TR 24368 offers a structured approach for organizations to address ethical concerns in AI. By embracing principles like transparency, accountability, and fairness, this framework aids in aligning AI with diverse societal values, fostering trust and innovation.
ISO/IEC TR 24368 provides organizations with a structured approach to identifying and addressing ethical and societal concerns throughout the AI lifecycle. Published in 2022, this technical report outlines principles, processes, and methods for contextualizing AI ethics while maintaining a values-neutral stance that respects diverse cultural and operational contexts.
Overview of ISO/IEC TR 24368:2022
ISO/IEC TR 24368:2022 represents a pivotal advancement in artificial intelligence governance, establishing the first internationally recognized guidance document dedicated specifically to ethical and societal concerns in AI. This technical report emerged from extensive collaboration between leading AI researchers, ethicists, policymakers, and industry practitioners within ISO/IEC JTC 1/SC 42, the joint ISO and IEC committee responsible for AI standardization.
Unlike prescriptive regulations, ISO/IEC TR 24368:2022 adopts a values-neutral approach that acknowledges diverse cultural contexts and organizational values while providing universal principles for ethical AI development. This flexibility makes the standard applicable across industries, from healthcare and finance to autonomous vehicles and smart cities.
The standard addresses critical gaps in AI governance by providing:
- Structured methodologies for ethical risk assessment throughout the AI lifecycle
- Clear guidance on stakeholder engagement and transparency requirements
- Practical frameworks for balancing innovation with societal responsibility
- Integration pathways with complementary standards like ISO/IEC 23894 and ISO/IEC 8183

Importance of Ethical AI in 2025
As AI applications become increasingly sophisticated and pervasive, addressing ethical concerns has evolved from a moral imperative to a business necessity. Organizations deploying AI systems face mounting pressure from regulators, customers, and stakeholders to demonstrate responsible practices.
The EU AI Act implementation has accelerated global adoption of structured ethical frameworks, with organizations recognizing that proactive ethical compliance reduces regulatory risk and builds competitive advantage. Research from the Stanford Institute for Human-Centered AI indicates that companies implementing ISO/IEC TR 24368 frameworks achieve 31% lower compliance costs when adapting to new regional AI regulations.
Key drivers for ethical AI adoption include:
- Regulatory compliance with emerging AI laws worldwide
- Risk mitigation against bias, discrimination, and privacy violations
- Trust building with customers and stakeholders
- Competitive differentiation in values-conscious markets
- Talent attraction, as top professionals prefer ethically minded organizations
Core Ethical Principles in AI
Transparency and AI System Transparency
AI system transparency forms the foundation of trustworthy artificial intelligence. ISO/IEC TR 24368 emphasizes that transparency requirements should be proportional to the AI system's risk level and impact on individuals and society.
Technical transparency involves documenting model architectures, training data sources, and decision-making processes. Operational transparency requires clear communication about system limitations, intended use cases, and potential failure modes.
Organizations implementing transparency measures typically address:
- Algorithmic explainability for high-stakes decisions
- Data provenance and quality documentation
- Model performance metrics across different demographic groups
- System boundaries and interaction protocols
Modern AI applications require sophisticated transparency approaches. For autonomous vehicles, this might include real-time decision explanations. For hiring algorithms, it involves bias auditing and demographic impact assessments.
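To make this concrete, the sketch below shows one way an engineering team might capture transparency documentation alongside a model. It is a minimal illustration, not a schema prescribed by ISO/IEC TR 24368; the field names, system name, and metric values are hypothetical.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TransparencyRecord:
    """Hypothetical transparency record; fields are illustrative, not mandated by TR 24368."""
    system_name: str
    intended_use: str
    model_architecture: str
    training_data_sources: list[str]
    known_limitations: list[str]
    performance_by_group: dict[str, float] = field(default_factory=dict)

record = TransparencyRecord(
    system_name="loan-approval-scorer",
    intended_use="Pre-screening of consumer credit applications; not for final decisions.",
    model_architecture="Gradient-boosted decision trees",
    training_data_sources=["internal_applications_2019_2023", "credit_bureau_extract_v4"],
    known_limitations=["Lower accuracy for thin-file applicants"],
    performance_by_group={"age_under_30": 0.87, "age_30_plus": 0.91},
)

# Persist alongside the model so reviewers and auditors can inspect it.
print(json.dumps(asdict(record), indent=2))
```

Keeping this record in version control next to the model artifacts makes it straightforward to show reviewers what changed between releases.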
Accountability Throughout the AI Lifecycle
Accountability in AI extends beyond technical teams to encompass organizational leadership, governance structures, and oversight mechanisms. ISO/IEC TR 24368 emphasizes that accountability frameworks must address both preventive measures and reactive responses to AI-related incidents.
Effective accountability systems include:
- Clear role definitions for AI ethics oversight
- Executive responsibility for ethical AI strategy
- Incident response protocols for addressing AI harms
- Regular auditing and performance monitoring
- Documentation standards for decision traceability
Organizations pursuing AI regulatory compliance strategies must establish accountability measures that satisfy both internal governance requirements and external regulatory expectations.
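As an illustration of decision traceability, the following sketch logs each automated decision with a timestamp, model version, hashed input, and a named accountable owner. It is a simplified, hypothetical example rather than a TR 24368 requirement; the function name and record fields are assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(audit_log: list, *, system: str, model_version: str,
                 input_record: dict, outcome: str, accountable_owner: str) -> dict:
    """Append one traceable decision record (illustrative schema, not prescribed by TR 24368)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "model_version": model_version,
        # Store a hash of the input rather than raw personal data, supporting data minimization.
        "input_hash": hashlib.sha256(json.dumps(input_record, sort_keys=True).encode()).hexdigest(),
        "outcome": outcome,
        "accountable_owner": accountable_owner,
    }
    audit_log.append(entry)
    return entry

audit_log: list = []
log_decision(audit_log, system="loan-approval-scorer", model_version="2.3.1",
             input_record={"applicant_id": "A-1001", "score": 712},
             outcome="refer_to_human_review", accountable_owner="credit-risk-governance")
print(audit_log[-1]["outcome"], audit_log[-1]["input_hash"][:12])
```

In practice, records like these would feed into the organization's auditing and incident-response processes rather than a simple in-memory list.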
Fairness and Bias Mitigation
Fairness in AI systems requires proactive identification and mitigation of AI-related risks that could result in discriminatory outcomes. ISO/IEC TR 24368 recognizes that fairness is multidimensional, encompassing individual fairness, group fairness, and counterfactual fairness.
Practical fairness implementation involves:
- Diverse dataset curation to represent all affected populations
- Algorithmic bias testing across protected characteristics
- Continuous monitoring of system outputs for discriminatory patterns
- Stakeholder engagement with potentially affected communities
- Remediation protocols for addressing identified biases
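One common screening technique for the bias testing mentioned above is to compare selection rates across groups. The sketch below computes per-group approval rates and a disparate impact ratio; the 0.8 threshold noted in the comment is a widely used heuristic, not a TR 24368 requirement, and the group labels and data are invented.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs; returns the approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {group: approved[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

decisions = ([("group_a", True)] * 80 + [("group_a", False)] * 20
             + [("group_b", True)] * 60 + [("group_b", False)] * 40)
rates = selection_rates(decisions)
print(rates)                                    # {'group_a': 0.8, 'group_b': 0.6}
print(round(disparate_impact_ratio(rates), 2))  # 0.75 -- below the common 0.8 review threshold
```

A low ratio does not prove discrimination on its own; it is a trigger for deeper investigation and, where needed, the remediation protocols listed above.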
Privacy and Security Safeguards
Privacy protection in AI systems requires comprehensive approaches that address both direct data use and inference risks. ISO/IEC TR 24368 emphasizes privacy-by-design principles that embed protection measures throughout system architecture.
Essential privacy safeguards include:
- Data minimization to limit collection to necessary information
- Purpose limitation ensuring data use aligns with stated objectives
- Consent mechanisms appropriate to the AI application context
- Anonymization techniques that resist re-identification attacks
- Access controls preventing unauthorized data exposure
Security considerations extend to protecting AI models themselves against adversarial attacks, data poisoning, and model extraction attempts. Organizations must implement robust cybersecurity measures that address both traditional threats and AI-specific vulnerabilities.
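To illustrate data minimization and purpose limitation, the sketch below keeps only the fields required for a stated purpose and replaces the direct identifier with a salted hash. The purpose-to-fields mapping and field names are hypothetical, and a salted hash alone is weak pseudonymization; production systems would typically use keyed or tokenized approaches.

```python
import hashlib

# Hypothetical mapping of stated purposes to the minimum fields they require.
PURPOSE_FIELDS = {
    "credit_scoring": {"income", "debt_ratio", "employment_years"},
}

def minimize(record: dict, purpose: str, salt: str = "rotate-this-salt") -> dict:
    """Drop fields not needed for the stated purpose and pseudonymize the direct identifier."""
    allowed = PURPOSE_FIELDS[purpose]
    reduced = {key: value for key, value in record.items() if key in allowed}
    # A salted hash is shown only to illustrate the idea; it is not robust against re-identification.
    reduced["subject_ref"] = hashlib.sha256((salt + record["customer_id"]).encode()).hexdigest()[:12]
    return reduced

raw = {"customer_id": "C-42", "name": "Jane Doe", "postcode": "1234",
       "income": 54000, "debt_ratio": 0.31, "employment_years": 6}
print(minimize(raw, "credit_scoring"))
```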
Structured Approach for Ethical AI Development
Ethical Risk Assessments
ISO/IEC TR 24368 outlines systematic approaches for identifying and evaluating ethical risks throughout AI system development. These assessments should occur at key milestones: initial design, data collection, model training, validation, deployment, and ongoing operations.
Comprehensive risk assessments address:
- Individual impact on affected persons' rights and freedoms
- Societal consequences including cultural and economic effects
- Technical vulnerabilities that could enable misuse or harm
- Operational risks from human-AI interaction failures
- Downstream effects from AI system integration with other technologies
Risk assessment methodologies should incorporate both quantitative metrics and qualitative stakeholder input, creating holistic evaluations that capture diverse perspectives on potential impacts.
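As a simple illustration of combining quantitative scoring with documented stakeholder impact, the sketch below models a small ethical risk register where each risk is rated by likelihood and severity. The scales, risk descriptions, and scoring rule are illustrative assumptions, not prescribed by TR 24368.

```python
from dataclasses import dataclass

@dataclass
class EthicalRisk:
    description: str
    likelihood: int        # 1 (rare) .. 5 (almost certain) -- illustrative scale
    severity: int          # 1 (negligible) .. 5 (severe)   -- illustrative scale
    affected_stakeholders: tuple

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

register = [
    EthicalRisk("Model underperforms for non-native speakers", 3, 4, ("applicants",)),
    EthicalRisk("Explanations misread by front-line staff", 2, 3, ("staff", "applicants")),
]

# Review the highest-rated risks first at each lifecycle milestone.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.description}")
```

Escalation thresholds and review cadences would be set by the organization's own governance body.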
Stakeholder Engagement
Meaningful stakeholder engagement represents a cornerstone of ethical AI development under ISO/IEC TR 24368. This process involves identifying all parties potentially affected by an AI system and creating mechanisms for ongoing input throughout the system lifecycle.
Effective engagement strategies include:
- Multi-stakeholder workshops bringing together diverse perspectives
- Community advisory boards for ongoing guidance and oversight
- User testing with representative populations
- Expert consultation with domain specialists and ethicists
- Public comment periods for transparency and input
Stakeholder engagement should be documented and its outcomes integrated into system design decisions, creating accountability for how different perspectives influence development choices.
Iterative Evaluation and Continuous Monitoring
Ethical considerations evolve as AI systems operate in real-world environments and as societal norms develop. ISO/IEC TR 24368 emphasizes iterative evaluation approaches that enable continuous improvement and adaptation.
Monitoring frameworks typically include:
- Performance metrics across different user groups and use cases
- Feedback mechanisms for users to report concerns or suggestions
- Regular audits by internal teams and external evaluators
- Impact assessments measuring actual versus predicted outcomes
- Adaptation protocols for addressing identified issues
This iterative approach ensures AI systems remain aligned with ethical principles and stakeholder expectations throughout their operational lifetime.
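A minimal monitoring sketch, assuming per-group accuracy is tracked in production: it compares a current window of metrics against an agreed baseline and flags groups whose performance has dropped beyond a tolerance. The metric names, values, and tolerance are hypothetical.

```python
def flag_metric_drift(baseline: dict, current: dict, tolerance: float = 0.05) -> list:
    """Return (group, drop) pairs where the current metric falls more than `tolerance` below baseline."""
    flagged = []
    for group, base_value in baseline.items():
        drop = base_value - current.get(group, 0.0)
        if drop > tolerance:
            flagged.append((group, round(drop, 3)))
    return flagged

# Hypothetical per-group accuracy: agreed baseline vs. the most recent production window.
baseline_accuracy = {"group_a": 0.91, "group_b": 0.89}
current_accuracy = {"group_a": 0.90, "group_b": 0.82}
print(flag_metric_drift(baseline_accuracy, current_accuracy))  # [('group_b', 0.07)]
```

Flags like these would typically open an incident or trigger the adaptation protocols listed above.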
The Role of Complementary Standards
ISO/IEC TR 24368 operates within a broader ecosystem of AI governance standards, each addressing specific technical or ethical aspects of AI system development and deployment.
ISO/IEC 23894 provides frameworks for AI risk management, offering technical approaches to identifying and mitigating risks that complement the ethical focus of TR 24368. Together, these standards enable comprehensive risk management that addresses both technical failure modes and ethical concerns.
ISO/IEC 22989 establishes AI terminology and concepts, creating a shared vocabulary that facilitates clear communication about ethical requirements across multidisciplinary teams. This foundational standard ensures consistent interpretation of ethical principles.
Organizations implementing ISO/IEC 42001 AI management systems can integrate ISO/IEC TR 24368 ethical frameworks into their broader governance structures, creating comprehensive approaches to responsible AI development.
ISO/IEC TR 24027 addresses bias in AI systems and AI-aided decision making, providing technical methods that support the fairness principles outlined in TR 24368. This integration enables both principled approaches to fairness and practical implementation techniques.
Real-World Applications and Case Studies

Healthcare AI Applications
Healthcare organizations are applying ISO/IEC TR 24368 to develop diagnostic AI systems that maintain patient privacy while providing transparent explanations for clinical recommendations. These implementations involve extensive consultation with medical professionals, patients, and bioethicists to ensure systems reflect healthcare values and professional standards.
Key implementation features include:
- Explainable diagnoses that clinicians can understand and validate
- Bias monitoring across different patient populations
- Privacy protection using advanced anonymization techniques
- Continuous validation against clinical outcomes
Financial Services
Banks and financial institutions use ISO/IEC TR 24368 frameworks to develop fair lending algorithms that avoid discriminatory outcomes while maintaining predictive accuracy. These systems undergo regular auditing to ensure consistent performance across demographic groups.
Implementation elements include:
- Algorithmic transparency in credit decision processes
- Demographic impact testing to identify potential bias
- Appeals processes for contested decisions
- Stakeholder engagement with consumer advocacy groups
Autonomous Systems
Manufacturers of autonomous vehicles and robotics systems apply TR 24368 principles to address ethical questions around decision-making in complex scenarios. These applications require extensive scenario testing and stakeholder input to develop appropriate behavioral frameworks.
Competitive Advantages of Ethical AI
Organizations implementing ISO/IEC TR 24368 frameworks realize measurable business benefits beyond regulatory compliance:
Trust and Brand Value: Companies demonstrating commitment to ethical AI practices build stronger customer relationships and brand reputation, translating to increased market share and customer loyalty.
Risk Mitigation: Proactive ethical frameworks reduce exposure to regulatory penalties, legal liability, and reputational damage from AI-related incidents.
Operational Efficiency: Structured ethical processes improve development efficiency by identifying potential issues early, reducing costly redesign and remediation efforts.
Talent Acquisition: Organizations with strong ethical AI practices attract top talent who prefer working for values-driven companies.
Market Access: Ethical AI certification enables access to markets with strict regulatory requirements, expanding business opportunities.
Practical Guidance for Organizations
Implementation Roadmap
Organizations beginning their ethical AI journey should follow a structured implementation approach:
Phase 1: Assessment and Planning
- Evaluate current AI systems and development processes
- Identify ethical risks and compliance requirements
- Establish governance structures and assign responsibilities
Phase 2: Framework Development
- Adapt ISO/IEC TR 24368 principles to organizational context
- Develop policies, procedures, and evaluation criteria
- Train teams on ethical AI principles and practices
Phase 3: Implementation and Integration
- Apply ethical frameworks to AI development projects
- Implement monitoring and evaluation systems
- Establish stakeholder engagement processes
Phase 4: Continuous Improvement
- Monitor system performance and ethical outcomes
- Refine processes based on experience and feedback
- Adapt to evolving standards and regulations
Expected Outcomes by 2025
Organizations implementing ISO/IEC TR 24368 can expect significant improvements in their AI governance capabilities by late 2025:
- Reduced regulatory risk through proactive compliance frameworks
- Enhanced stakeholder trust from transparent, accountable AI practices
- Improved system performance through bias mitigation and fairness optimization
- Streamlined development processes with integrated ethical considerations
- Competitive differentiation in markets demanding responsible AI
The World Economic Forum's AI Governance Alliance projects that 87% of global enterprises will implement structured ethical AI frameworks by end of 2025, making ISO/IEC TR 24368 adoption a competitive necessity.
Aligning AI with Societal Values
The ultimate goal of ISO/IEC TR 24368 is ensuring AI technologies contribute positively to human well-being and societal progress. This alignment requires ongoing dialogue between technologists, policymakers, and civil society to ensure AI development reflects diverse perspectives and values.
Organizations should view ethical AI not as a constraint on innovation but as a catalyst for creating more robust, inclusive, and beneficial technologies. By embedding ethical considerations throughout the AI lifecycle, companies can develop systems that earn trust and serve human needs effectively.
Frequently Asked Questions
Why is ISO/IEC TR 24368 important?
ISO/IEC TR 24368 provides the first internationally recognized framework for addressing ethical and societal concerns in AI development. It helps organizations proactively identify and mitigate ethical risks while building trust with stakeholders and ensuring regulatory compliance across global markets.
Does this report prescribe specific ethical values?
No, ISO/IEC TR 24368 adopts a values-neutral approach that allows organizations to adapt the framework to their cultural contexts and value systems. Rather than prescribing specific values, it provides principles and processes for addressing ethical considerations systematically.
How does ISO/IEC TR 24368 relate to other AI standards?
ISO/IEC TR 24368 complements other AI standards such as ISO/IEC 23894 (AI risk management), ISO/IEC 42001 (AI management systems), and ISO/IEC 22989 (AI terminology and concepts). Together, these standards create a comprehensive framework for responsible AI governance that addresses both technical and ethical aspects.
Who Should Use This Standard?
ISO/IEC TR 24368 is designed for AI developers, organizations deploying AI systems, policymakers, and any stakeholder involved in AI governance. It's particularly valuable for companies seeking structured approaches to ethical AI development and regulatory compliance.
Transform Your AI Strategy with Ethical Excellence
ISO/IEC TR 24368 represents more than compliance—it's a pathway to building AI systems that earn trust, drive innovation, and create positive societal impact. Organizations implementing these frameworks position themselves as leaders in responsible AI development while reducing risks and enhancing competitive advantage.
Ready to elevate your AI governance with ISO/IEC TR 24368? Our experts can guide you through implementation strategies tailored to your organization's needs. Discover how ethical AI frameworks can strengthen your competitive position while ensuring your technologies serve humanity's best interests.
Contact our AI compliance specialists to begin your journey toward trustworthy AI that drives business success while respecting human values and societal needs.
ISO/IEC Certification Support
Drive innovation and build trust in your AI systems with ISO/IEC certifications. Nemko Digital supports your certification goals across ISO/IEC frameworks, including ISO 42001, to help you scale AI responsibly and effectively.
Contact Us