Mónica Fernández Peñalver · January 27, 2025 · 2 min read

Cybersecurity Landscape in the Field of AI

What new challenges does AI present for cybersecurity? 

The increasingly rapid evolution and widespread adoption of artificial intelligence (AI) technologies pose new problems for cybersecurity. While AI systems inherit traditional cybersecurity vulnerabilities, they also introduce new risks due to their reliance on large datasets, complex algorithms, and interconnected components. Key challenges include adversarial machine learning attacks, data poisoning, model inversion, and backdoor exploitation.

AI-specific vulnerabilities

AI systems differ fundamentally from traditional software in their reliance on dynamic data and probabilistic outcomes. This makes them susceptible to attacks that target their learning processes and outputs. Common vulnerabilities include:

  • Adversarial attacks. Deliberately manipulated input data deceives AI models at inference time (see the sketch after this list). 
  • Data poisoning. Malicious data introduced during training corrupts the model. 
  • Model stealing and inversion. Unauthorized extraction of a model's architecture or reconstruction of its training data exposes intellectual property and sensitive information. 
  • Embedded backdoors. Hidden triggers planted within models are exploited to alter their behavior.
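
To make the first of these concrete, below is a minimal sketch of an adversarial attack using the Fast Gradient Sign Method (FGSM), one widely studied way of manipulating inputs to deceive a model. The model, input, and epsilon budget are illustrative assumptions rather than references to any specific system.

```python
# Minimal FGSM sketch (PyTorch): perturb an input just enough, within an
# epsilon budget, that a trained classifier misclassifies it. The model,
# input tensor, and label below are assumed placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return an adversarially perturbed copy of input x."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)  # loss w.r.t. the true label
    loss.backward()                          # gradient of loss w.r.t. the input
    x_adv = x + epsilon * x.grad.sign()      # step in the direction that increases loss
    return x_adv.clamp(0.0, 1.0).detach()    # keep values in a valid range
```

A perturbation of this size is often imperceptible to humans, which is what makes adversarial attacks difficult to catch through manual inspection.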

Evolving standards and regulations

The European Union’s AI Act (2024) underscores the importance of securing high-risk AI systems. Article 15 sets out mandatory cybersecurity requirements, mandating an integrated, risk-based approach that builds risk assessments and robust protective measures into every high-risk AI application and combines established practices with AI-specific controls.

The development of international standards for AI systems plays a pivotal role in building a cybersecurity approach for AI. Efforts such as the ISO/IEC 27000 series on information security management (see ISO/IEC 27001, ISO/IEC 27701, ISO/IEC 27563) and emerging AI-specific standards, such as ISO/IEC CD 27090 (Cybersecurity – Artificial Intelligence), are crucial steps toward standardized AI security measures.

Integrated and continuous approaches  

To address these challenges, securing AI systems requires:  

  • Holistic risk assessment: Cybersecurity measures must consider the interactions between AI components and the broader systems they operate in. 
  • Security-in-depth: Layered security controls should protect the system at every stage, from data preprocessing to final outputs (a minimal sketch follows this list). 
  • Lifecycle security: Security measures must be continuously monitored and updated throughout the AI system's lifecycle.
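
As a rough illustration of security-in-depth, the sketch below layers independent checks around a model so that no single failure compromises the pipeline. The input shape, value range, and confidence threshold are illustrative assumptions, not prescribed values.

```python
# A layered-defense sketch: validate inputs before the model, check
# outputs after it, and abstain when confidence is low. All shapes,
# ranges, and thresholds here are assumptions for illustration.
import numpy as np

def validate_input(x: np.ndarray) -> np.ndarray:
    """Layer 1: reject malformed or out-of-range data before inference."""
    if x.shape != (28, 28) or not np.isfinite(x).all():
        raise ValueError("input rejected: unexpected shape or non-finite values")
    return np.clip(x, 0.0, 1.0)  # enforce the range the model was trained on

def check_output(probs: np.ndarray, threshold: float = 0.6):
    """Layer 2: abstain on low-confidence predictions instead of acting."""
    if probs.max() < threshold:
        return None  # escalate to human review rather than trust the output
    return int(probs.argmax())

def secure_predict(model, x: np.ndarray):
    """Compose the layers: a failure at any layer stops the pipeline."""
    probs = model(validate_input(x))  # model is assumed to return class probabilities
    return check_output(probs)
```

The point is architectural rather than the specific checks: each layer fails safely on its own, so an attacker must defeat all of them at once.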

Global collaboration for a safer future 

As previously noted, emerging AI technologies pose additional difficulties because their vulnerabilities often outpace current security practices. For instance, large-scale deep learning models may require novel defenses not yet addressed by existing frameworks. The AI Act acknowledges these limitations and emphasizes the need for ongoing research and innovation.

Addressing AI cybersecurity risk thus matters beyond regulatory compliance. International collaboration, research into adversarial machine learning, and the development of AI-specific security controls are essential to staying ahead of malicious actors. Initiatives like threat modeling for AI and adversarial robustness metrics represent promising areas of development.
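
To illustrate what an adversarial robustness metric can look like in practice, the sketch below measures empirical robust accuracy: the fraction of examples a model still classifies correctly after the FGSM perturbation sketched earlier. The data loader, model, and epsilon budget are assumed placeholders.

```python
# Empirical robust-accuracy sketch (PyTorch): evaluate a classifier on
# FGSM-perturbed inputs, reusing fgsm_attack from the earlier sketch.
import torch

def robust_accuracy(model, loader, epsilon=0.03):
    """Fraction of examples still classified correctly under attack."""
    correct, total = 0, 0
    model.eval()
    for x, y in loader:
        x_adv = fgsm_attack(model, x, y, epsilon)  # defined in the earlier sketch
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```

Comparing this figure against clean accuracy across several epsilon budgets yields a simple, reproducible robustness curve, one of many possible metrics.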

A globally aligned approach to AI cybersecurity, using the AI Act and ISO standards as benchmarks, can pave the way for faster development of these much-needed AI-specific frameworks. By prioritizing security-in-depth in high-risk systems, the AI community can ensure that new technologies remain not just secure, but also trustworthy and future-oriented.


Mónica Fernández Peñalver

Mónica has actively been involved in projects that advocate for and advance Responsible AI through research, education, and policy. Before joining Nemko, she dedicated herself to exploring the ethical, legal, and social challenges of AI fairness for the detection and mitigation of bias. She holds a master’s degree in Artificial Intelligence from Radboud University and a bachelor’s degree in Neuroscience from the University of Edinburgh.
