What new challenges does AI present for cybersecurity?
The increasingly rapid evolution and widespread adoption of artificial intelligence (AI) technologies pose new problems for cybersecurity. While AI systems inherit traditional cybersecurity vulnerabilities, they also introduce new risks due to their reliance on large datasets, complex algorithms, and interconnected components. Key challenges include adversarial machine learning attacks, data poisoning, model inversion, and backdoor exploitation.
AI-specific vulnerabilities
AI systems differ fundamentally from traditional software in their reliance on dynamic data and probabilistic outcomes. This makes them susceptible to attacks that target their learning processes and outputs. Common vulnerabilities include:
- Adversarial attacks: deliberately manipulated inputs designed to deceive AI models into producing incorrect outputs (illustrated in the sketch after this list).
- Data poisoning: malicious data introduced during training that corrupts the model.
- Model stealing and inversion: unauthorized extraction of a model's architecture, parameters, or training data, which exposes intellectual property and sensitive data and complicates model governance.
- Embedded backdoors: hidden triggers planted within models that attackers exploit to alter behavior.
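To make the first of these concrete, the short sketch below shows one well-known evasion technique, the fast gradient sign method (FGSM), in which a small gradient-guided perturbation of the input nudges a classifier toward an incorrect prediction. The PyTorch model, the toy input, the label, and the epsilon budget are all illustrative assumptions, not drawn from any cited standard or regulation.

```python
# Minimal FGSM sketch (assumption: a PyTorch classifier with inputs in [0, 1]).
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of x perturbed to increase the model's loss on `label`."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that most increases the loss, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy example: a stand-in linear classifier and a random 28x28 "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)          # benign input
label = torch.tensor([3])             # its true class
x_adv = fgsm_attack(model, x, label)  # adversarially perturbed input
```

Even a perturbation this small can be visually imperceptible yet flip a model's prediction, which is why defenses such as adversarial training and input sanitization feature among AI-specific security controls.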
Evolving standards and regulations
The European Union’s AI Act (2024) underscores the importance of securing high-risk AI systems. Article 15 sets out mandatory cybersecurity requirements, integrating risk assessments and robust protective measures into every high-risk AI use. The Act thus mandates a risk-based approach to cybersecurity that combines established practices with AI-specific controls.
The development of international standards for AI systems plays a pivotal role in building a cybersecurity approach for AI. Efforts such as the ISO/IEC 27000 series on information security management (see ISO/IEC 27001, ISO/IEC 27701, ISO/IEC 27563) and emerging AI-specific standards, like ISO/IEC CD 27090 (Cybersecurity – Artificial Intelligence), are crucial steps toward standardized AI security measures.
Integrated and continuous approaches
To address these challenges, securing AI systems requires:
- Holistic risk assessment: Cybersecurity measures must consider the interactions between AI components and their broader systems.
- Security-in-depth: Layered security controls should protect the system at every stage of its pipeline, from data preprocessing to final outputs (a minimal sketch follows this list).
- Lifecycle security: Security measures must be continuously monitored and updated throughout the AI system's lifecycle.
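As a rough illustration of what layered controls and lifecycle monitoring can look like at inference time, the sketch below wraps a single prediction in input validation, confidence gating, and audit logging. The classifier interface (a scikit-learn-style predict_proba), the [0, 1] input range, the confidence threshold, and the logging setup are all illustrative assumptions, not requirements taken from the AI Act or ISO standards.

```python
# Sketch of layered ("security-in-depth") checks around one inference call.
import logging

import numpy as np

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-security")

def guarded_predict(model, x: np.ndarray, confidence_floor: float = 0.8):
    # Layer 1: validate inputs before any data reaches the model.
    if not np.isfinite(x).all() or x.min() < 0.0 or x.max() > 1.0:
        log.warning("Rejected input outside the expected [0, 1] range.")
        return None
    # Layer 2: the model itself (ideally hardened, e.g. via adversarial training).
    probs = model.predict_proba(x.reshape(1, -1))[0]
    # Layer 3: gate low-confidence outputs and flag them for human review.
    if probs.max() < confidence_floor:
        log.warning("Low-confidence prediction flagged for review.")
        return None
    # Layer 4: audit logging to support continuous lifecycle monitoring.
    log.info("Predicted class %d with confidence %.2f", int(probs.argmax()), probs.max())
    return int(probs.argmax())
```

In a production setting each layer would be backed by further controls (data provenance checks, model integrity verification, anomaly detection on logged predictions), but the principle is the same: no single component is trusted on its own.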
Global collaboration for a safer future
As previously noted, emerging AI technologies pose additional difficulties because their vulnerabilities often outpace current security practices. For instance, large-scale deep learning models may require novel defenses not yet addressed by existing frameworks. The AI Act acknowledges these limitations and emphasizes the need for ongoing research and innovation.
Addressing AI cybersecurity risk is therefore crucial beyond regulatory compliance. International collaboration, research into adversarial machine learning, and the development of AI-specific security controls are essential to staying ahead of malicious actors. Initiatives such as threat modeling for AI and adversarial robustness metrics represent promising areas of development.
A globally aligned approach to AI cybersecurity, using the AI Act and ISO standards as benchmarks for security, can pave the way for faster development of these much-needed AI-specific frameworks. By prioritizing in-depth security in high-risk systems, the AI community can ensure that new technologies remain not just secure, but also trustworthy and future-oriented.
