The U.S. National Institute of Standards and Technology (NIST) has released a draft of its Cybersecurity Framework Profile for Artificial Intelligence, often called the Cyber AI Profile, giving organizations guidance on managing the cybersecurity risks of AI technologies. The profile extends the globally recognized NIST Cybersecurity Framework with a structured approach to securing AI systems, defending with AI, and thwarting AI-enabled attacks. Its release marks a significant step toward standardizing AI system security, moving beyond governance to address the technical dimensions of AI risk across the full AI lifecycle.
Key Focus Areas of the AI Cybersecurity Framework
The draft profile introduces three core focus areas that together give organizations a structured view of AI-related security challenges and a way to strengthen their security posture:
| Focus area | Description |
|---|---|
| Secure | Managing cybersecurity within AI systems themselves, including their data, models, and infrastructure. |
| Defend | Using AI to enhance an organization's cyber defense capabilities, such as threat detection, response, and automation. |
| Thwart | Blocking and responding to AI-powered cyberattacks, including adversarial threats, which are growing in sophistication. |
Barbara Cuthill, one of the profile's authors, noted in a statement, "The three focus areas reflect the fact that AI is entering organizations’ awareness in different ways... But ultimately every organization will have to deal with all three." This multi-faceted approach underscores the dual nature of AI in security: it is both a tool for defense and a vector for attack, and it introduces new risks for teams building and operating ML-powered applications.
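Cuthill's point that every organization will face all three areas can be made concrete with a coverage check. The sketch below is illustrative only: the helper function and example activities are hypothetical, not part of the NIST draft; only the three focus-area names come from the profile.

```python
from enum import Enum

# The three focus areas named in the draft NIST Cyber AI Profile.
class FocusArea(Enum):
    SECURE = "managing cybersecurity of AI systems (data, models, infrastructure)"
    DEFEND = "using AI to strengthen cyber defense"
    THWART = "countering AI-powered attacks"

# Hypothetical helper: tag planned security activities by focus area so
# that gaps across the three areas become visible at a glance.
def coverage_gaps(activities: dict[str, FocusArea]) -> set[FocusArea]:
    """Return the focus areas that have no planned activity."""
    return set(FocusArea) - set(activities.values())

activities = {
    "model supply-chain review": FocusArea.SECURE,
    "AI-assisted alert triage": FocusArea.DEFEND,
}
print(sorted(area.name for area in coverage_gaps(activities)))  # → ['THWART']
```

A roadmap built this way surfaces, for example, that no activity yet addresses AI-powered attacks, prompting planning for the Thwart area.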
Integrating with Existing Standards
This new profile builds upon NIST's extensive body of work in AI and cybersecurity. It is designed to be used in conjunction with two foundational documents:
- The NIST Cybersecurity Framework, whose profile mechanism lets organizations tailor security outcomes to their own risk and operational priorities
- The AI Risk Management Framework (AI RMF), whose four core functions (Govern, Map, Measure, Manage) support trustworthy AI, transparency, and risk-based decision-making
By mapping AI-specific considerations to the established controls of the Cybersecurity Framework, NIST gives organizations a practical way to fold AI security into their existing risk management programs. In practice, this mapping can guide organizations as they assess and improve security controls throughout AI development and deployment, from data sourcing to model monitoring. It also supports collaboration across the security community, including internal defenders, third-party security researchers, and ethical hackers who test systems and validate controls.
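One way such a mapping might be recorded internally is as a simple lookup from AI-specific risks to Cybersecurity Framework categories. This is a minimal sketch under stated assumptions: the risk names and the mapping itself are hypothetical and not taken from the draft profile; only the category codes (e.g., PR.DS for Data Security) come from NIST CSF 2.0.

```python
# Hypothetical mapping of AI-specific risks to NIST CSF 2.0 categories.
# The risks and assignments are illustrative; the codes are real CSF 2.0
# category identifiers (ID.RA = Risk Assessment, PR.DS = Data Security,
# PR.AA = Access Control, DE.CM = Continuous Monitoring, RS.AN = Incident Analysis).
AI_RISK_TO_CSF = {
    "training-data poisoning":  ["ID.RA", "PR.DS"],
    "model theft/exfiltration": ["PR.AA", "PR.DS"],
    "prompt-injection abuse":   ["DE.CM", "RS.AN"],
}

def csf_categories_for(risks: list[str]) -> list[str]:
    """Collect the distinct CSF categories touched by a set of AI risks."""
    return sorted({cat for risk in risks for cat in AI_RISK_TO_CSF[risk]})

print(csf_categories_for(["training-data poisoning", "prompt-injection abuse"]))
# → ['DE.CM', 'ID.RA', 'PR.DS', 'RS.AN']
```

Keeping the mapping as data rather than prose makes it easy to audit which existing controls a new AI risk would fall under.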
This reflects a broader trend toward formalizing AI security, where clear standards are essential for building trust and ensuring responsible adoption of AI.
The release of the NIST profile is timely, as organizations worldwide grapple with new regulations and standards governing AI. The guidance complements international standards such as ISO/IEC 42001, the first international standard for AI management systems, and provides a technical cybersecurity layer that aligns with the principles of the EU AI Act. For organizations navigating this regulatory environment, a robust AI cybersecurity framework is no longer optional. A comprehensive approach, as detailed in resources like the ISO 42001 AI Cybersecurity Complete Implementation Guide, is critical for achieving compliance and demonstrating due diligence. The profile's principles also connect to broader information security practices, such as those defined in ISO/IEC 27001.
Many teams will also compare NIST's approach with AI security frameworks emerging from industry initiatives, such as the Databricks AI Security Framework, to build a consistent view of controls and risk across tools.
A Strategic Imperative for Business Leaders

For business leaders, the NIST profile is a clear signal that AI security must be a core component of any AI strategy. It highlights the need for AI management systems that address not only governance and ethics but also the technical security of AI models and data, from training pipelines to production endpoints. As organizations increasingly rely on AI for critical business functions, the ability to demonstrate that these systems are secure, resilient, and trustworthy will become a key competitive differentiator.
For many organizations, the draft also functions as a practical profile template: a way to document current controls, target outcomes, and measurable improvements in security posture over time. That matters because teams are adopting AI faster than they can standardize controls, even as AI-driven security agents are increasingly used to accelerate detection, response, and remediation.
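A profile used this way boils down to a gap analysis between current and target states. The sketch below is a hypothetical schema in the spirit of a CSF organizational profile: the field names, tier scale, and example outcomes are assumptions for illustration, not NIST's format.

```python
from dataclasses import dataclass

# Hypothetical profile entry: current vs. target state for one security
# outcome. The fields and 1-4 tier scale are illustrative, not NIST's schema.
@dataclass
class ProfileEntry:
    outcome: str        # the security outcome being tracked
    current_tier: int   # self-assessed maturity today (1-4)
    target_tier: int    # desired maturity
    owner: str          # team accountable for closing the gap

def improvement_backlog(profile: list[ProfileEntry]) -> list[str]:
    """Outcomes where the target exceeds the current state, biggest gap first."""
    gaps = [e for e in profile if e.target_tier > e.current_tier]
    gaps.sort(key=lambda e: e.target_tier - e.current_tier, reverse=True)
    return [e.outcome for e in gaps]

profile = [
    ProfileEntry("model access is logged and reviewed", 1, 3, "ML platform"),
    ProfileEntry("training data provenance is recorded", 2, 3, "Data eng"),
    ProfileEntry("incident playbook covers AI endpoints", 3, 3, "SecOps"),
]
print(improvement_backlog(profile))
```

Recording owner and tier alongside each outcome turns the profile from a static checklist into a prioritized, accountable improvement plan.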
The growing complexity of the AI compliance landscape calls for expert guidance, and adopting a robust AI cybersecurity framework is a critical first step. In parallel, organizations can lean on open source security tooling and community-driven benchmarks to support validation, testing, and repeatable assessments, often in partnership with external security researchers and ethical hackers.
In conclusion, the draft NIST Cybersecurity Framework Profile for Artificial Intelligence offers a much-needed, standardized approach to managing the cybersecurity risks of AI. With clear guidance and a practical framework, NIST is helping organizations innovate responsibly and build a more secure AI-powered future. As the public comment period continues, the final version is expected to become an essential resource for any organization serious about securing its AI systems.

