Agentic AI Is Changing The Security Model For Enterprise Systems

Source: Forbes

Agentic AI challenges a core assumption embedded in most enterprise security systems: that humans make the decisions. As organizations begin deploying AI agents capable of interpreting intent and taking action across systems, security models built around human approval no longer fully apply. Recognizing the implications, the National Institute of Standards and Technology recently launched the AI Agent Standards Initiative and issued a Request for Information, due March 9, to better understand the security risks introduced by agentic AI.

Unlike traditional automation, which executes predefined workflows, agentic systems interpret intent, choose actions and interact with tools and data in real time. They don't just generate output; they act. They can query systems, change code, and trigger workflows. Agentic AI shows promise in tasks as varied as reviewing and paying invoices, tracking and restocking inventory, booking travel and maintaining physical infrastructure. Yet the same characteristics that make agentic AI valuable can also introduce risks, including financial loss, data exposure and safety failures if systems are misused or poorly governed.

As organizations experiment with autonomous systems, questions around agentic AI security and oversight are becoming central to enterprise risk management.

Agentic AI Changes the Security Landscape

Security leaders say the shift to agentic systems fundamentally changes how organizations must think about risk and oversight. Shahar Peled is co-founder and CEO of Terra Security, a penetration testing company that deploys human-in-the-loop agentic AI. He describes the shift in blunt terms: "Agentic AI changes the security equation in three fundamental ways: speed, scale, and autonomy."

The vulnerabilities themselves aren't new. Data leakage, privilege escalation and insecure integrations remain familiar risks. What's changed is velocity. Agents can operate continuously, adapt in real time and coordinate with other agents. That opens the door to incidents like agent-to-agent attacks, interaction between previously separated systems and unauthorized privilege expansion across workflows.

Hanah-Marie Darley, co-founder and chief AI officer of agentic AI security company Geordie AI, argues that "the biggest threat isn't rogue AI. It's a loss of visibility." Organizations often cannot reconstruct who asked an agent to perform a task, how the agent interpreted the request, or which tools and data it used to complete it. Without that chain of events, accountability becomes difficult to establish.

Most enterprise security frameworks assume human approval at critical junctures. With AI agents, that assumption is no longer valid. If software is making decisions independently, responsibility shifts upstream to system design, configuration and oversight. As Peled puts it, "'The system did it' is not an acceptable answer if the system was never properly bound, monitored, or auditable."

Agentic AI Requires Greater Operational Discipline

Ken Johnson, co-founder and CTO of agentic code security company DryRun Security, summarizes the core tension: "autonomy plus authority creates behavioral risk, not just code risk." AI agents can act faster than humans can reason about the consequences.

The best implementations, he notes, deliberately constrain authority. Coding agents, for example, can be authorized to create code changes but not merge them. Security agents can explain why a change is risky and require human approval before production deployment. Incident response agents can gather context but require approval to act.
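
The propose-but-don't-execute pattern Johnson describes can be sketched as a simple permission gate. This is an illustrative sketch, not code from DryRun Security or any specific framework; the role names, action names and `request_action` function are all assumptions for the example.

```python
# Illustrative sketch of deliberately constrained agent authority.
# Each agent role gets an explicit allow-list; anything outside it
# is routed to a human rather than executed.

ALLOWED_ACTIONS = {
    "coding_agent":   {"open_pull_request", "comment"},  # may propose changes, never merge
    "security_agent": {"flag_risk", "comment"},          # may explain risk, never deploy
    "incident_agent": {"gather_context"},                # may investigate, never remediate
}

def request_action(role: str, action: str) -> str:
    """Return 'allowed' if the action is on the role's allow-list,
    otherwise 'needs_human_approval'."""
    if action in ALLOWED_ACTIONS.get(role, set()):
        return "allowed"
    return "needs_human_approval"
```

Under this scheme, `request_action("coding_agent", "open_pull_request")` returns `"allowed"`, while `request_action("coding_agent", "merge")` returns `"needs_human_approval"`: the agent retains useful autonomy, but the consequential step stays with a person.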

Scale without oversight introduces tremendous risk. "Autonomous agents with production access, no kill switch and no audit trail can introduce silent, systemic security failures," Johnson said. In software development environments, Johnson has repeatedly seen "AI coding agents start strong, then steadily degrade as the codebase grows, effectively drifting from established patterns, violating standards, and introducing subtle but dangerous flaws." At scale, code changes can be large and move so fast that changes are in production before humans realize something is wrong.

Peled describes the nightmare scenario as "silent autonomy at scale: agents with broad access, no clear owner, weak audit trails and no escalation path operating across interconnected systems." This environment allows failures to compound quietly. "By the time humans notice, the damage is already done," he added.

Applying Secure by Design to Agentic AI

Security leaders increasingly emphasize that agentic systems must be designed with safeguards built in from the start. The Cybersecurity and Infrastructure Security Agency (CISA) urges technology providers to ensure their products are secure by design, meaning that they "prioritize the security of customers as a core business requirement, rather than merely treating it as a technical feature."

Across interviews, several foundational controls consistently emerged as essential for deploying agentic systems responsibly:

  • Clear authority limits that determine what the agent is allowed to do and where, and enforce least privilege
  • Immutable, transparent and separate audit logs that track intent, interpretation, input and execution
  • Defined human escalation points when risk thresholds are crossed
  • Behavioral observability that detects drift and shows not just what happened, but why
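
Two of these controls, tamper-evident audit logs and defined escalation points, can be sketched together in a few lines. This is a minimal illustration, not a production design: the `AuditLog` class, its field names (intent, interpretation, inputs, execution) and the `RISK_THRESHOLD` value are assumptions made for the example; a real deployment would use write-once storage and a risk model, not a single score.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field

@dataclass
class AuditLog:
    """Append-only log. Each entry is hash-chained to the previous one,
    so altering any past entry breaks the chain and is detectable."""
    entries: list = field(default_factory=list)

    def record(self, intent, interpretation, inputs, execution):
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"intent": intent, "interpretation": interpretation,
                "inputs": inputs, "execution": execution,
                "ts": time.time(), "prev": prev}
        body["hash"] = hashlib.sha256(
            json.dumps({k: v for k, v in body.items() if k != "hash"},
                       sort_keys=True, default=str).encode()).hexdigest()
        self.entries.append(body)
        return body["hash"]

RISK_THRESHOLD = 0.7  # assumed cutoff for the sketch

def act(risk_score: float, log: AuditLog, intent: str) -> str:
    """Log every request; escalate to a human when the risk threshold is crossed."""
    if risk_score >= RISK_THRESHOLD:
        log.record(intent, "high-risk request", risk_score, "escalated")
        return "escalated_to_human"
    log.record(intent, "routine request", risk_score, "executed")
    return "executed"
```

The point of the hash chain is Darley's visibility concern: every action, including the escalated ones, leaves a reconstructable record of who asked, how the request was interpreted, and what was done.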

Experts disagree on kill switches. While Johnson promotes kill switches that provide instant, global shutdown, Darley advises caution. "A kill switch sounds comforting," she says, "but abruptly terminating agents mid-task often creates more risk because cutting off processes mid-task can destroy auditability and trigger cascading failures." Instead, she argues for guardrails embedded through system prompts, and for contextual governance that varies permissions by user, location and scenario rather than applying static rules.

The Governance Challenge of Agentic AI

While many leaders focus on whether and how to deploy agentic systems, a more fundamental question is operational ownership. Clear stewardship and governance are necessary for accountability and security. Leaders approving first deployments should insist on clear definitions and authorities to prevent unmanaged exposure created by unconstrained (or unknown) autonomy.

Darley warns of "agent sprawl - a fragmented mix of SaaS-embedded agents, custom code and citizen-built agents created by non-technical teams." A unified, strategic view is necessary not only for security but to maximize impact.

Ultimately, the rise of agentic AI is as much a governance challenge as a technological one. Leaders must be able to answer where escalation occurs, who has authority to intervene, how decisions are logged, and how agent behavior is monitored over time. Without those structures in place, organizations risk introducing autonomy faster than they can manage it.