The Security Paradigm Is Shifting Under Our Feet
AI agents1 are breaking the software security model we have trusted for decades. Traditional applications execute deterministic code paths; AI agents do not. They make autonomous decisions, orchestrate multi-step workflows, and interact with external systems without human oversight. Agents reason through problems in real time and take actions based on natural language instructions, creating an entirely new category of privileged users that existing security frameworks were not designed to manage. As a result, securing agentic systems requires rethinking everything from authentication to observability. A new playbook is emerging: security for agents.
At the same time, AI agents are also transforming how security tools are built, ushering in a new generation of companies leveraging agents for security. The cybersecurity labor shortage has been a top-of-mind problem for years: with 3.5 million unfilled positions worldwide, security teams have been stretched increasingly thin. AI agents will increase the data accessed and potential threat volumes by 100x—preventing human teams from scaling adequately. On the flipside, AI agents provide a welcome reprieve from this persistent challenge, enabling smaller security teams to achieve previously impossible coverage and response times while also improving detection accuracy and operational efficiency. Newer companies like Crogl* are building toward an autonomous SOC, while existing security powerhouses like Abnormal AI* are quickly releasing agents to provide superhuman security awareness training. We may still be in the early stages of fully autonomous security agents, but the trajectory is clear: AI systems are becoming more capable of independent security decision-making and response, moving us closer to the promise of fully agentic security infrastructure.
We are witnessing an evolution of the security for AI and AI for security framework that needs to be revisited for the agentic world: security for agents and agents for security. How do we harness agents to revolutionize our defensive capabilities, turning them into autonomous threat hunters and incident responders? How do we secure a new class of software that operates with unprecedented autonomy and system access? Unlike traditional applications with predictable attack surfaces, agents make autonomous decisions about which systems to access and how to use them. When these agents go rogue, they can traverse systems and access sensitive data at machine speed, rapidly penetrating deeper into critical infrastructure than any human attacker could achieve.
Security for AI Agents
Unlike traditional software with established security guardrails and protocols, newer agent architectures lack fundamental protections. AI agents introduce unprecedented security challenges because they possess unique behavioral patterns and access privileges that fundamentally differ from conventional applications:
- Broad, indiscriminate access. AI agents capable of interacting with a wide range of tools and data sources pose non-trivial security challenges: Unlike humans who deliberately choose specific tools, AI agents can autonomously reason through available tools without human oversight, chain multiple tools together in unpredictable ways, and, importantly, access everything available versus what’s minimally necessary. A maliciously crafted prompt or compromised server can result in unauthorized data exfiltration, destructive operations, or other exploitative behaviors.
- Blind instruction following (and actioning). Black-box reasoning makes it impossible to predict or audit tool usage patterns, making post-incident forensics very difficult. Traditional logging and monitoring methods simply aren’t built for the complex chains of reasoning that occur when an AI agent autonomously orchestrates multiple tools. Security researchers have found significant vulnerabilities in MCP server implementations, with concerns about unsafe shell calls that could allow attackers to execute arbitrary code.
- Scale amplification. One compromised agent can affect thousands of operations instantly. Unlike traditional attacks that require manual effort to spread, a single malicious prompt can trigger cascading effects across an organization’s entire infrastructure. The math is sobering: If an agent can invoke 10 tools per minute and each tool can access multiple backend systems, a compromised agent could potentially touch hundreds of systems within an hour.
- Identity attribution. As agents increasingly act on behalf of humans, security teams face a dual identity challenge: Who authorized the action, and who executed it? Traditional identity and access management (IAM) tools were not built to answer this in real time, creating blind spots where a single compromised agent identity can cascade across multiple systems with unclear attribution chains. The result is a new category of identity orchestration challenges that pre-agent enterprise security frameworks struggle to address.
Threats to AI agents are not just theoretical—they are happening. Agent architectures make known, existing attacks more complex as they are weaponized by autonomous agents that can operate without human oversight. At the same time, these agent architectures have also brought about a new class of attacks that exploit the unique characteristics of AI decision-making and execution.

These agentic security threats are becoming very real, very quickly. Security researchers have already observed live attacks where malicious MCP servers hijack AI agents in development tools like Cursor to steal SSH keys. Attackers are manipulating GitHub issues to coerce agents into leaking private repo data. Even major enterprise platforms have disclosed MCP bugs that exposed customer data across accounts, as seen in this recent Asana incident:

AI agents face both evolved versions of traditional attacks and entirely new threat vectors. Command injection becomes exponentially more dangerous when agents can autonomously execute cascading commands across multiple systems. Data exfiltration scales massively when agents can systematically catalog and correlate information across tools without human supervision.
Security experts are already observing AI-powered reconnaissance (finding targets), payload generation (creating attacks), and agent-to-agent attacks (coordinated execution) in production. With 60% of security leaders reporting their organizations aren’t prepared for AI-powered threats, the window for reactive measures has closed.
What Enterprises Are Missing: Opportunities for New Entrants
Most enterprises currently oversimplify security for AI agents, focusing predominantly on safeguarding core AI infrastructure (e.g., LLMs, training data). However, true security must encompass three distinct layers:

The Emerging Market for Security for Agents
The shift toward agentic AI systems has exposed critical gaps in conventional security models, as these systems introduce unprecedented attack vectors through autonomous tool selection and natural language-driven decision-making. This has driven both new entrants and established security vendors to develop agent-specific capabilities that address these novel risks.

The market for AI agent security is rapidly organizing around four critical categories that map to the full lifecycle of secure agent deployment:
1. Build: Traditional application security testing was designed for deterministic code with predictable execution paths and known input/output patterns. Legacy security scanners struggle to analyze systems where the same prompt might trigger completely different tool combinations or access patterns depending on the agent’s reasoning process.
Companies like Semgrep* and Invariant Labs (acquired by Snyk) are extending code analysis capabilities to identify security flaws in agent code and detect dangerous behaviors during development, providing security-focused testing frameworks.
2. Access/Auth: Traditional identity and access management systems assume predictable authentication patterns and static role assignments, but AI agents break this mold. Agents may need different permissions for the same task depending on their reasoning path, require dynamic credential delegation across service combinations, or establish trust relationships that change based on real-time decision-making.
Authentication companies like customer-identity players (Clerk, Descope), identity orchestration platforms (Strata*), and agent-native startups (Arcade, Composio) are well positioned to build systems that can securely manage agent identities across unpredictable interaction patterns and dynamic service discovery. These solutions focus on linking human and agent identities via unified control planes, ensuring each action maintains clear attribution chains back to human auth. Access management platforms (Astrix*) are also developing specialized agent directories and policy-driven access control planes (Pomerium) that adapts permissions based on real-time context rather than static credentials.
3. Run: Traditional runtime security tools monitor (and intervene in) predictable traffic patterns and known attack signatures within established network perimeters, but AI agents generate highly variable, context-driven request patterns that make conventional security detection nearly impossible. The same agent might legitimately access dozens of different services in unpredictable sequences, making it difficult to distinguish between normal exploration and sophisticated attacks.
Gateway companies like Cequence* and ProtectAI and open-source solutions like MCP Guardian deploy AI-native security controls that can analyze agent intent and decision-making processes in real time, rather than relying on predetermined traffic patterns. Sandboxing companies like MCP Run create isolated execution environments that prevent data exfiltration by restricting agents to explicitly granted resources.
4. Observe and Govern: Typical monitoring and governance systems were built around deterministic app behaviors and static security perimeters, but this creates unprecedented governance challenges: How do you write security policies when you can’t predict which tools an agent will use? How do you detect anomalous behavior when “normal” agent behavior is inherently variable?
Agent observability platforms must build governance frameworks specifically designed for non-deterministic systems. Companies like Astrix*, Obsidian*, Relyance AI*, and Zenity develop platforms that track every touchpoint across an agent’s journey—from spanning unified data graphs and organizational topologies (Relyance AI*) to non-human identity access patterns (Astrix*) to agent behavior analytics (Obsidian*, Zenity). These systems maintain comprehensive maps of agent decision trees and implement dynamic governance that can reason about agent intent and risk in real time rather than relying on static security perimeters.
Evals and red-teaming companies build specialized testing frameworks for non-deterministic agent systems. Lakera and Straiker, for example, pentest agentic workflows—testing attack vectors like tool positioning or agent manipulation, or creating automated attack simulations against AI agents.
What We’re Investing In
At Menlo, we’re actively investing in startups building the security infrastructure for an agentic future. We’re looking for founders with deep technical expertise in both AI capabilities and adversarial techniques, who are rapidly iterating on novel security paradigms designed specifically for autonomous systems.
The security paradigm has already shifted. Now we need founders who will secure the next generation of intelligent systems—before adversaries exploit them. If you’re building at the frontier of AI and cybersecurity, let’s talk.
- “AI agents” can be an ambiguous term. We define AI agents as applications that autonomously reason and execute on decisions. They make independent choices, orchestrate multi-step workflows, and interact with external systems without human oversight. In their ability to reason and execute, they effectively become AI employees—but without the security guardrails we’ve built for human workers. ↩︎
*Backed by Menlo Ventures
Venky is a partner at Menlo Ventures focused on investments in both the consumer and enterprise sectors. He currently serves on the boards of Abnormal Security, Aisera, Appdome, BitSight, Breather, Dedrone, MealPal, Rover, Sonrai Security, StackRox, and Unravel Data. Prior to joining Menlo, he was a managing partner at Globespan…
Rama is a partner at Menlo Ventures, focused on investments in cybersecurity, AI, and cloud infrastructure. He is passionate about partnering with founders to build the next generation of cybersecurity, infrastructure, and observability companies for the new AI stack. Rama joined Menlo after 15 years at Norwest Venture Partners, where…
As a principal at Menlo Ventures, Feyza focuses on soon-to-break-out cybersecurity, supply chain, and enterprise SaaS companies. Prior to Menlo, Feyza worked as an engineer at various companies in different growth stages, including Nucleus Scientific (now Indigotech), Fitbit, and Analog Devices. Her exposure to a wide range of tasks and…
As an investor at Menlo Ventures, Sam focuses on SaaS, AI/ML, and cloud infrastructure opportunities. She is passionate about supporting strong founders with a vision to transform an industry. Sam joined Menlo from the Boston Consulting Group, where she was a core member of the firm’s Principal Investors and Private…