
ISO/IEC TR 5469:2024
A standard for functional safety within AI systems
ISO/IEC TR 5469:2024 addresses a critical challenge in modern technology: how to safely integrate artificial intelligence into systems where failure could cause harm to people, property, or the environment. Published in January 2024, this technical report bridges the gap between traditional functional safety engineering and the emerging complexities of AI deployment.
Understanding the Scope: Three Core Applications
The technical report explicitly covers three distinct scenarios for AI and functional safety integration:
• AI Within Safety Functions: Using AI components to directly implement safety-critical functionality.
• Safety Controls for AI Equipment: Employing traditional (non-AI) safety mechanisms to ensure AI-controlled systems remain safe.
• AI for Safety Development: Leveraging AI tools to design and develop traditional safety-related functions.
This threefold approach recognizes that AI's role in safety systems isn't monolithic: different applications require different risk management strategies.
Technical Report vs. Standard: What This Means
ISO/IEC TR 5469:2024 is designated as a Technical Report (TR), not a normative standard. This classification is significant:
• Informative Guidance: Provides methods, techniques, and best practices rather than mandatory requirements.
• Industry Learning: Captures current knowledge while the field evolves.
• Foundation for Future Standards: Serves as groundwork for the forthcoming ISO/IEC TS 22440, which will establish formal requirements.
Organizations can use this report to inform their AI safety strategies, but compliance isn't mandatory or certifiable under this document alone.
Key Technical Components
The report structures AI safety considerations around the three-stage realization principle defined in ISO/IEC 22989:2022:
Data Acquisition Stage
- Input validation mechanisms
- Sensor reliability requirements
- Data quality assurance protocols
Knowledge Induction Stage
- Training data verification
- Model validation techniques
- Human knowledge integration methods
Processing and Output Stage
- Inference reliability measures
- Output safety constraints
- Performance monitoring requirements
This framework parallels traditional functional safety's sensor-controller-actuator model, making it accessible to safety engineers familiar with standards like IEC 61508, ISO 26262, and IEC 62061.
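To make the parallel concrete, the three stages can be sketched as a minimal safety wrapper around an inference step. This is an illustrative sketch, not taken from the report; the function names, sensor range, and output limits are all hypothetical:

```python
# Illustrative mapping of the three-stage realization principle onto a
# minimal pipeline. All names, ranges, and thresholds are hypothetical.

def acquire(raw_reading: float) -> float:
    """Data acquisition stage: validate sensor input before use."""
    if not (-40.0 <= raw_reading <= 125.0):  # hypothetical plausible range
        raise ValueError("sensor reading out of plausible range")
    return raw_reading

def infer(value: float) -> float:
    """Knowledge induction stage stand-in: a trained model would go here."""
    return value * 0.8  # placeholder for a learned mapping

def constrain(output: float, low: float = 0.0, high: float = 100.0) -> float:
    """Processing and output stage: clamp the output to safe limits."""
    return max(low, min(high, output))

reading = acquire(50.0)
command = constrain(infer(reading))
```

The structure mirrors sensor (acquire), controller (infer), and actuator command shaping (constrain), which is why the framework feels familiar to engineers working with IEC 61508-style architectures.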
Practical Mitigation Strategies
The report outlines three primary risk mitigation approaches for AI embedded in products:
• Backup Systems: Implementing non-AI fallback functions that activate when AI components fail or produce uncertain outputs.
• Supervisory Controls: Creating boundary systems that constrain AI outputs within safe operational limits.
• Redundant AI Voting: Deploying multiple AI models with voting mechanisms to identify and reject anomalous outputs.
These strategies acknowledge that AI systems often can't achieve the deterministic behavior required by traditional safety standards.
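The redundant voting strategy in particular lends itself to a short sketch. The report does not prescribe a specific voting scheme; the following is one hypothetical approach in which a majority of model outputs must agree within a tolerance before the result is used, with a non-AI fallback otherwise:

```python
# Hypothetical "redundant AI voting" sketch: act on the agreed output of
# several independently trained models, or fall back to a non-AI default.
# Tolerance and fallback values are illustrative.

from statistics import median

def vote(outputs: list[float], tolerance: float = 0.5,
         fallback: float = 0.0) -> float:
    """Return the median output if a strict majority of models agree
    with it within `tolerance`; otherwise return the non-AI fallback."""
    m = median(outputs)
    agreeing = sum(1 for o in outputs if abs(o - m) <= tolerance)
    if agreeing * 2 > len(outputs):  # strict majority agrees
        return m
    return fallback

consensus = vote([1.0, 1.1, 5.0])   # two of three agree -> 1.1
degraded = vote([1.0, 3.0, 5.0])    # no majority agreement -> fallback 0.0
```

Note that the fallback path here embodies the "backup systems" strategy as well: when the redundant models disagree, control reverts to a deterministic non-AI value.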
Integration with AI Lifecycle Management

A critical aspect of the report is its emphasis on aligning functional safety lifecycles with AI lifecycle processes. The document references:
• ISO/IEC 5338 for AI system lifecycle processes.
• IEC 61508 for functional safety lifecycle alignment.
• ISO/IEC 42001:2023 for AI management system requirements.
This integrated approach ensures safety considerations are embedded throughout AI development, deployment, and maintenance phases.
Verification and Validation Framework
The report dedicates substantial attention to V&V activities specific to AI-safety integration:
Verification Methods (Clause 9)
- Statistical performance validation
- Robustness testing against adversarial inputs
- Explainability assessment for safety-critical decisions
Control Measures (Clause 10)
- Runtime monitoring systems
- Graceful degradation protocols
- Human oversight requirements
Process Methodologies (Clause 11)
- Safety case development for AI components
- Risk assessment adaptations for non-deterministic systems
- Documentation requirements for AI training and validation
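As one illustration of statistical performance validation, a failure rate observed on independent test cases can be turned into a one-sided confidence bound and compared against a safety target. The report does not mandate this formula; the sketch below uses a Hoeffding-style bound, and the sample counts and target are hypothetical:

```python
# Hypothetical statistical performance validation sketch: a one-sided
# Hoeffding upper bound on the true failure rate from n independent
# test cases, compared against an illustrative safety target.

import math

def failure_rate_upper_bound(failures: int, n: int,
                             delta: float = 0.01) -> float:
    """Upper confidence bound (level 1 - delta) on the true failure rate."""
    p_hat = failures / n
    return p_hat + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

# 20 failures observed in 100,000 hypothetical test cases
bound = failure_rate_upper_bound(failures=20, n=100_000)
# accept only if the bound meets the (illustrative) safety target
accepted = bound < 1e-2
```

A practical point this makes visible: concentration bounds of this kind require very large test sets to demonstrate low failure rates, which is one reason purely statistical arguments are usually combined with the runtime controls of Clause 10.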
Industry Applications and Limitations
While ISO/IEC TR 5469:2024 is industry-agnostic, its application varies by sector:
Well-Suited Applications:
- Automotive advanced driver assistance systems
- Industrial automation with AI-enhanced controls
- Medical devices incorporating AI diagnostics
- Robotics with machine learning capabilities
Current Limitations:
- Lacks specific quantitative safety integrity level (SIL) mappings for AI
- Doesn't address certification pathways for AI safety
- Provides limited guidance on continuous learning systems
Relationship to Global AI Standards
This technical report represents one component of a broader AI standards ecosystem. According to the International Organization for Standardization, it complements:
- ISO/IEC 23053: Framework for AI systems using machine learning.
- ISO/IEC 23894: AI risk management.
- ISO/IEC 24668: Process reference model for AI systems.
The International Electrotechnical Commission positions this work within its broader functional safety framework, bridging traditional safety engineering with AI innovation.
Implementation Considerations for Organizations
Organizations considering this technical report should:
• Assess Current Safety Processes: Evaluate how existing functional safety procedures can accommodate AI uncertainty.
• Develop AI Safety Competencies: Train safety engineers in AI concepts and risks.
• Document AI Design Decisions: Create traceable records linking AI choices to safety requirements.
• Establish Monitoring Systems: Implement continuous performance tracking for AI safety functions.
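The monitoring point can be sketched as a small runtime component that tracks recent model confidence and triggers graceful degradation to a non-AI path. This is a hypothetical design, not from the report; the window size and threshold are illustrative:

```python
# Hypothetical runtime monitor with graceful degradation: track a rolling
# window of model confidence scores and permit the AI path only while the
# recent average stays above a threshold. All parameters are illustrative.

from collections import deque

class SafetyMonitor:
    def __init__(self, window: int = 100, threshold: float = 0.9):
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def record(self, confidence: float) -> None:
        self.scores.append(confidence)

    def ai_permitted(self) -> bool:
        """True while recent average confidence meets the threshold."""
        if not self.scores:
            return False  # no evidence yet: stay on the fallback path
        return sum(self.scores) / len(self.scores) >= self.threshold

monitor = SafetyMonitor(window=5, threshold=0.9)
for c in (0.95, 0.97, 0.6, 0.5, 0.55):
    monitor.record(c)
use_fallback = not monitor.ai_permitted()  # average 0.714 -> degrade
```

Defaulting to the fallback when no evidence is available yet is a deliberate fail-safe choice: the AI path must earn permission rather than hold it by default.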
Future Developments
The technical report explicitly serves as input for ISO/IEC TS 22440, currently under development. This forthcoming Technical Specification will transform the guidance into more prescriptive requirements, potentially including:
- Quantitative reliability metrics for AI safety functions.
- Certification frameworks for AI-integrated safety systems.
- Detailed test procedures for AI safety validation.
According to research from the IEEE Reliability Society, the progression from TR to TS to eventual International Standard typically spans 3-5 years, suggesting formal AI safety standards may emerge by 2027-2029.
Taking Action: Next Steps for Safety Professionals
ISO/IEC TR 5469:2024 provides essential guidance for organizations navigating AI integration in safety-critical applications. While not mandating specific requirements, it establishes a framework for systematic risk assessment and mitigation.
Safety professionals should view this technical report as a roadmap for responsible AI deployment in safety systems. Begin by mapping your current AI applications against the three scenarios outlined, identifying gaps in your safety assessment processes, and implementing the verification and validation practices detailed in the report.
As AI continues to transform safety-critical industries, this technical report offers a crucial bridge between innovation and established safety principles—ensuring technological advancement doesn't compromise the fundamental goal of protecting human life and well-being.
ISO/IEC Certification Support
Drive innovation and build trust in your AI systems with ISO/IEC certifications. Nemko Digital supports your certification goals across ISO/IEC frameworks, including ISO 42001, to help you scale AI responsibly and effectively.
Contact Us