As AI agents move into customer experience workflows (handling queries, processing transactions, and triggering backend actions), they introduce a layer of security risk that goes beyond that of traditional chatbots.
The rapid adoption of autonomous AI systems is transforming enterprise operations, but it also introduces unprecedented security challenges. At the recent RSA Conference in San Francisco, Cisco addressed these concerns by launching new security capabilities designed to mitigate AI agent risks. This initiative highlights a critical shift in how organizations must approach AI governance and security, moving from simple access control to comprehensive action control.
The transition from conversational chatbots to action-taking AI agents represents a significant leap in technological capability. Unlike chatbots, which primarily generate text, AI agents can autonomously execute tasks, interact with systems, and make decisions. This autonomy, while driving productivity, requires robust safeguards. According to a recent Cisco survey, while 85 percent of enterprises have experimented with AI agents, only 5 percent have moved them into production, largely due to a persistent trust deficit.
To bridge this gap, organizations must implement stringent security measures. Cisco's new capabilities focus on establishing trusted identities for AI agents, enforcing strict Zero Trust Access controls, and hardening agents before deployment. This approach aligns with the broader need for comprehensive AI governance frameworks that ensure autonomous systems operate safely and transparently.
As AI agents become integrated into the workforce, traditional security paradigms are no longer sufficient. Tom Gillis, Cisco’s Senior Vice President and General Manager for Infrastructure and Security, emphasized the need to move beyond access control to action control. For instance, an AI agent designed to process expense reports needs access to financial systems but must be restricted from making unauthorized purchases.
This conceptual shift requires organizations to rethink their security architectures. Cisco is addressing this by extending Zero Trust Access to hold AI agents to the same standards of accountability as human employees. New capabilities in Duo IAM integrate with Model Context Protocol (MCP) policy enforcement, ensuring that agents are assigned fine-grained permissions specific to their tasks. This level of control is essential for mitigating AI agent risks and preventing costly errors or security breaches.
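To make the access-control-versus-action-control distinction concrete, the sketch below shows a default-deny, task-scoped permission model for the expense-report agent described above. All names here (`AgentPolicy`, the action strings) are hypothetical illustrations, not Cisco's Duo IAM or MCP API.

```python
# Illustrative sketch of task-scoped action control for an AI agent.
# Names and actions are hypothetical; this is not Cisco's actual API.

from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Fine-grained allow-list of actions a specific agent may perform."""
    agent_id: str
    allowed_actions: set = field(default_factory=set)

    def authorize(self, action: str) -> bool:
        # Default-deny: any action not explicitly granted is refused.
        return action in self.allowed_actions

# The expense-report agent can read reports and submit reimbursements,
# but it is never granted the ability to make purchases.
policy = AgentPolicy(
    agent_id="expense-agent-01",
    allowed_actions={"read_expense_report", "submit_reimbursement"},
)

for action in ("read_expense_report", "make_purchase"):
    verdict = "ALLOW" if policy.authorize(action) else "DENY"
    print(f"{action}: {verdict}")
```

The key design choice is the default-deny posture: the agent's capabilities are defined by what it is granted, not by what it is forbidden, so an unanticipated action (like an unauthorized purchase) fails closed.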
Cisco's strategy for securing agentic AI involves three core aspects: protecting the enterprise from agents, protecting the agents themselves, and detecting incidents at machine speed. To protect agents from manipulation, Cisco has expanded its AI Defense product with tools for stress-testing agents before deployment. Features like dynamic agent red teaming and application security testing help identify vulnerabilities early in the development lifecycle.
Furthermore, Cisco launched an Agent Runtime Software Development Kit (SDK) that embeds policy enforcement directly into agent workflows. This proactive approach is crucial in the evolving AI cybersecurity landscape, where threat actors weaponize vulnerabilities at increasing speed.
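One common pattern for embedding policy enforcement into an agent workflow is to wrap each tool call so the check runs before the action executes. The sketch below illustrates that pattern with a hypothetical decorator; it assumes invented names (`enforce`, `PolicyViolation`, the `ALLOWED` table) and is not Cisco's Agent Runtime SDK.

```python
# Hypothetical sketch of runtime policy enforcement wrapped around an
# agent's tool calls; names are illustrative, not Cisco's SDK.

from functools import wraps

class PolicyViolation(Exception):
    """Raised when an agent attempts an action outside its grants."""

# Per-agent allow-list of permitted actions (assumed for illustration).
ALLOWED = {"expense-agent-01": {"read_expense_report", "submit_reimbursement"}}

def enforce(agent_id: str, action: str):
    """Decorator: deny the wrapped tool call unless policy grants it."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if action not in ALLOWED.get(agent_id, set()):
                raise PolicyViolation(f"{agent_id} may not perform {action}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@enforce("expense-agent-01", "submit_reimbursement")
def submit_reimbursement(report_id: str) -> str:
    return f"reimbursement submitted for {report_id}"

@enforce("expense-agent-01", "make_purchase")
def make_purchase(item: str) -> str:
    return f"purchased {item}"

print(submit_reimbursement("RPT-123"))   # permitted action succeeds
try:
    make_purchase("laptop")              # unpermitted action is blocked
except PolicyViolation as e:
    print("blocked:", e)
```

Because the check lives in the call path itself rather than in a separate gateway, the policy travels with the agent wherever it runs, which is the intuition behind embedding enforcement in the workflow.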
The integration of AI agents into enterprise workflows offers immense potential for productivity and customer experience enhancements. However, realizing this potential depends entirely on trust. Organizations must be confident that their AI systems will act safely and as intended. Initiatives like Cisco's new security capabilities and the NIST AI Agent Standards Initiative are vital steps toward building this trust. The National Institute of Standards and Technology continues to provide foundational guidance for these efforts.
Nemko Digital helps organizations navigate this complex landscape. With deep expertise in AI standards, risk management, and compliance, Nemko Digital offers advisory services that turn emerging regulatory complexity into strategic clarity. From AI governance frameworks to security assessments, we help ensure that your AI deployments are not only innovative but also trusted, accountable, and ready for what comes next.