AI for Security: Eight Areas of Opportunity

February 8, 2024

Generative AI is a powerful technology in the hands of both good and bad actors. While cybercriminals can use GenAI to complicate and expand existing threats, it’s also an incredible defensive technology.  

As Menlo Ventures seeks to invest in the sector, we’ve identified eight areas where generative AI will have an outsized impact:

  1. Vendor risk management and compliance automation
  2. Security training
  3. Penetration testing (“pen testing”)
  4. Anomaly detection and prevention
  5. Synthetic content detection and verification
  6. Code review
  7. Dependency management
  8. Defense automation and SOAR capabilities 

1. Vendor risk management and compliance automation tools

As companies increasingly communicate, collaborate, and integrate with third-party vendors and customers, cybersecurity is no longer about protecting your environment—operators must ensure that their entire third-party application stack is secure.

However, today’s vendor security processes are time-intensive, manual, and vulnerable to human error. We believe GenAI will automate and strengthen this process. We’re excited by companies like Dialect, which provides an AI assistant that auto-fills responses to security questionnaires (among other types of questionnaires) based on your existing data for fast and accurate responses.
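
To make the mechanics concrete, here is a minimal sketch of how an assistant might auto-fill a questionnaire answer by retrieving relevant passages from existing policy documents and drafting a response with a language model. It illustrates the general pattern, not Dialect’s implementation; call_llm is a hypothetical stand-in for whichever model API you use.

  # Minimal sketch: retrieval-augmented drafting of security questionnaire answers.
  # `call_llm` is a hypothetical placeholder for any chat/completions API.
  from typing import Dict, List

  def call_llm(prompt: str) -> str:
      return "[drafted answer would come back from the model here]"  # placeholder

  def retrieve(question: str, policy_docs: Dict[str, str], k: int = 2) -> List[str]:
      """Naive keyword-overlap retrieval; a production system would use embeddings."""
      terms = set(question.lower().split())
      ranked = sorted(
          policy_docs.items(),
          key=lambda kv: len(terms & set(kv[1].lower().split())),
          reverse=True,
      )
      return [f"{name}: {text}" for name, text in ranked[:k]]

  def draft_answer(question: str, policy_docs: Dict[str, str]) -> str:
      """Ground the drafted response in the retrieved policy excerpts."""
      context = "\n".join(retrieve(question, policy_docs))
      prompt = (
          "Answer the vendor security questionnaire item below using only the "
          "policy excerpts provided. Flag anything you cannot support.\n\n"
          f"Excerpts:\n{context}\n\nQuestion: {question}\nAnswer:"
      )
      return call_llm(prompt)

  docs = {
      "encryption_policy": "Customer data is encrypted at rest with AES-256 ...",
      "access_policy": "Production access requires SSO plus hardware MFA ...",
  }
  print(draft_answer("Do you encrypt customer data at rest?", docs))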

2. Security training

With any new technology, proper training is paramount. While security training has existed for some time, companies are still vulnerable to breaches. After all, you’re only as strong as your weakest link. In September 2023, casino operators MGM Resorts and Caesars Entertainment suffered breaches stemming from targeted social engineering attacks.

We expect GenAI will be used to develop tailored, engaging, and dynamic training content for employees that more accurately replicates real-world scenarios and risks. For example, Immersive Labs* uses GenAI to create attack scenarios and incidents in training simulations for security teams. Riot provides a security co-pilot that leads employees through chat-based, interactive security awareness training through Slack or the web.
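
As a toy illustration of what “tailored” can mean in practice, the sketch below builds a role-specific phishing simulation prompt and hands it to a model. It is not how Immersive Labs or Riot work under the hood; call_llm is a hypothetical placeholder.

  # Minimal sketch: prompting a model for a role-tailored phishing simulation.
  # Illustrative only; `call_llm` is a hypothetical stand-in for a model API.
  def call_llm(prompt: str) -> str:
      return "[simulated phishing email would be generated here]"  # placeholder

  def build_simulation_prompt(role: str, attack_pattern: str) -> str:
      """Tailor the lure to the employee's role and a real-world attack pattern."""
      return (
          "Write a realistic phishing simulation email for internal training.\n"
          f"Target role: {role}\n"
          f"Model it on this attack pattern: {attack_pattern}\n"
          "List the social-engineering cues a trainer should debrief afterward."
      )

  prompt = build_simulation_prompt(
      role="IT help desk technician",
      attack_pattern="vishing call impersonating an employee to reset MFA",
  )
  print(call_llm(prompt))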

3. Penetration testing

With GenAI transforming the offense, penetration testing must similarly adapt to mimic these attacks properly. GenAI can improve many parts of the pen testing process:

  • Searching public databases and/or private data stores for the latest known threats and attacker techniques
  • Scanning customers’ IT environments
  • Identifying and exploring potential exploits
  • Suggesting remediation steps
  • Summarizing findings in an auto-generated report 

We’re excited about players in the space using GenAI to revolutionize pen testing-as-a-service (PTaaS). Run Sybil aims to automate “hacker intuition” and offers active defense capabilities in response. BugBase’s Pentest Copilot offers an “ethical hacking assistant”: an LLM agent that sniffs out the latest malware methods, scans a target’s environment, detects potential exploits, and suggests remediations.
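
For a sense of how the steps above might be stitched together, here is a minimal sketch that takes raw scanner findings, asks a model to reason about exploit paths and remediations, and writes the auto-generated report. It is illustrative only and not how Run Sybil or BugBase’s Pentest Copilot are built; call_llm is a hypothetical placeholder.

  # Minimal sketch of a GenAI-assisted pen-test loop: triage scanner findings
  # with a model, then emit the summary report.
  # Illustrative only; `call_llm` is a hypothetical stand-in for a model API.
  import json
  from typing import Dict, List

  def call_llm(prompt: str) -> str:
      return "[ranked exploit paths and remediation steps would appear here]"

  def triage_findings(findings: List[Dict]) -> str:
      """Summarize scanner output and ask the model for likely exploit chains."""
      prompt = (
          "You are assisting an authorized penetration test. For each finding, "
          "explain how an attacker might chain it and give a concrete remediation.\n\n"
          + json.dumps(findings, indent=2)
      )
      return call_llm(prompt)

  def write_report(analysis: str, path: str = "pentest_report.md") -> None:
      """Auto-generate the summary report the engagement ends with."""
      with open(path, "w") as f:
          f.write("# Pen Test Summary\n\n" + analysis)

  findings = [
      {"host": "10.0.0.12", "service": "ssh", "issue": "password auth enabled"},
      {"host": "10.0.0.30", "service": "http", "issue": "outdated CMS plugin"},
  ]
  write_report(triage_findings(findings))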

4. Anomaly detection and prevention

We think GenAI will vastly improve anomaly detection and prevention capabilities. Event logs can be monitored by agents fine-tuned to flag unusual events as possible attacks. These agents can be deployed across susceptible telemetry points, such as endpoints, networks, APIs, and data repositories.

“GenAI can tremendously help with detection and response capabilities, but their false-positive rate is still very high. Assuming it gets better over time, it’s a tremendous opportunity. Humans can’t process all the signals.”

—CISO, Enterprise Media Company

Already, we’re seeing strong outcomes from our portfolio companies. For example, Abnormal Security’s* AbnormalGPT fine-tunes LLMs with their internally labeled data sets to better distinguish attacks vs. spam vs. safe emails and can even detect when emails are AI-generated. Abnormal will roll out LLM-based classifiers for email attachments and non-email-based attacks.
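
The general shape of this kind of classifier is simple to sketch: normalize an event, ask a model for a label, and escalate anything that is not clearly safe. The snippet below illustrates that pattern under our own assumptions, not Abnormal’s pipeline; call_llm is a hypothetical placeholder.

  # Minimal sketch: an LLM-backed triage step that labels an event as
  # attack / spam / safe and escalates only the suspicious ones.
  # Illustrative only; `call_llm` is a hypothetical stand-in for a model API.
  from dataclasses import dataclass
  from typing import List

  def call_llm(prompt: str) -> str:
      return "attack"  # placeholder; a real model call returns the label

  @dataclass
  class Event:
      source: str   # e.g., "email", "endpoint", "api-gateway"
      summary: str  # normalized description of the event

  def classify(event: Event) -> str:
      prompt = (
          "Classify the following security event as exactly one of: "
          "attack, spam, safe.\n"
          f"Source: {event.source}\nEvent: {event.summary}\nLabel:"
      )
      label = call_llm(prompt).strip().lower()
      return label if label in {"attack", "spam", "safe"} else "attack"  # fail closed

  def triage(events: List[Event]) -> List[Event]:
      """Return only the events a human analyst should look at."""
      return [e for e in events if classify(e) != "safe"]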

5. Synthetic content detection and verification

Cybercriminals weaponize GenAI to create believable, high-fidelity digital identities capable of bypassing existing fraud detection systems, including ID verification software, document verification software, and manual reviews. Bad actors also use synthetic data to fabricate entire companies for larger-scale schemes. The results are damaging; cybercriminals can take out lines of credit, make large purchases, or leak sensitive information. The FTC puts the average cost of a single fraud event at upwards of $15,000. Survey results from Wakefield and Deduce reveal the scale of the problem: 76% of companies believe they have extended credit to synthetic customers, and this problem is only growing; the amount of fraud involving AI-generated identities grew 17% over the last two years.

It’s impressive to see how players are tackling the issue of synthetic content with next-gen verification. For instance, Deduce built a multi-context, activity-backed identity graph of 840 million identified U.S. profiles to baseline authentic behavior and identify potentially malicious actors. Meanwhile, DeepTrust built a toolkit of API-accessible detection models across various media—detecting voice clones, verifying articles and transcripts, and identifying synthetic images or videos. 
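
The underlying idea of baselining authentic behavior can be sketched with a toy risk score. The signals and weights below are invented for illustration; real systems like the ones above baseline against large identity and activity graphs rather than a handful of hand-tuned rules.

  # Minimal sketch: scoring an identity against a few behavioral baselines to
  # flag likely synthetic profiles. Signals and weights are invented examples.
  from dataclasses import dataclass

  @dataclass
  class IdentitySignals:
      account_age_days: int       # how long the identity has existed
      distinct_devices: int       # devices historically tied to this identity
      linked_profiles: int        # other identities sharing device/IP/address
      activity_events_90d: int    # observed real-world activity in 90 days

  def synthetic_risk_score(s: IdentitySignals) -> float:
      """Return a 0-1 risk score; higher means more likely synthetic."""
      score = 0.0
      if s.account_age_days < 30:
          score += 0.3   # brand-new identities are higher risk
      if s.activity_events_90d < 5:
          score += 0.3   # authentic identities leave an activity trail
      if s.linked_profiles > 3:
          score += 0.3   # one device/address fanning out to many identities
      if s.distinct_devices == 0:
          score += 0.1   # no device history at all
      return min(score, 1.0)

  print(synthetic_risk_score(IdentitySignals(7, 0, 6, 1)))  # -> 1.0 (flag for review)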

6. Code review

In software development, the “shift left” approach emphasizes moving testing activities earlier in the development process to improve software quality, test coverage, and time to market. To shift left effectively, companies must be able to scan their code and identify exploits quickly. Too often, automated security scans and static application security testing (SAST) tools fall short: SAST frameworks are generic and hard to customize, resulting in high false-positive rates, and writing and validating custom rules is time-consuming, while the rules themselves are difficult to maintain. Ultimately, we’re left with a constant backlog of vulnerabilities, dissatisfied developers, and slower production releases.

Startups in the space are tackling the issue head-on. Semgrep provides customizable rules that help security engineers and developers identify vulnerabilities and suggests specific remedial actions tailored to their organization’s needs. Mobb.ai integrates with existing SAST solutions and analyzes their feed to provide recommended fixes; developers can then approve and commit the code directly. 
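
For a flavor of what a tailored static check does, here is a minimal sketch in plain Python that walks a file’s syntax tree and flags calls an organization has decided are risky. Real SAST rules, including Semgrep’s, are far more expressive; the risky-call list here is a hypothetical example.

  # Minimal sketch of a custom static check: walk a Python file's AST and flag
  # calls that this (hypothetical) organization has decided are risky.
  import ast
  import sys
  from typing import List

  RISKY_CALLS = {"eval", "exec", "pickle.loads", "yaml.load"}  # org-specific list

  def qualified_name(node: ast.AST) -> str:
      """Best-effort dotted name for a call target (e.g., 'pickle.loads')."""
      if isinstance(node, ast.Name):
          return node.id
      if isinstance(node, ast.Attribute):
          return f"{qualified_name(node.value)}.{node.attr}"
      return ""

  def scan(path: str) -> List[str]:
      tree = ast.parse(open(path).read(), filename=path)
      findings = []
      for node in ast.walk(tree):
          if isinstance(node, ast.Call):
              name = qualified_name(node.func)
              if name in RISKY_CALLS:
                  findings.append(f"{path}:{node.lineno}: risky call to {name}()")
      return findings

  if __name__ == "__main__":
      for finding in scan(sys.argv[1]):
          print(finding)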

7. Dependency management

Software dependencies are pervasive. Synopsys’ 2023 OSSRA Report notes that 96% of codebases contain open-source code, and projects often involve hundreds of third-party vendors. A large chunk of a company’s codebase therefore rests on external dependencies, which are harder to control than internally written code. Unpatched vulnerabilities in those dependencies can be detrimental, as they are tedious to trace and isolate.

Socket works to secure supply chains by proactively detecting and blocking over 70 signals of supply chain risk in open-source code, flagging suspicious package updates, and building a security feedback loop back into the dev process.
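
To make the idea of a risk “signal” concrete, the sketch below checks an npm package’s metadata for a few illustrative red flags, such as install scripts and lookalike names. These are hypothetical examples, not Socket’s actual detectors, which cover far more ground.

  # Minimal sketch: check an npm package's metadata for a few supply-chain risk
  # signals. The signals below are illustrative, not a vendor's real detectors.
  import json
  from typing import List

  POPULAR = {"react", "lodash", "express", "axios"}  # tiny stand-in allowlist

  def risk_signals(package_json: str) -> List[str]:
      meta = json.loads(package_json)
      signals = []
      scripts = meta.get("scripts", {})
      if any(k in scripts for k in ("preinstall", "install", "postinstall")):
          signals.append("runs install scripts (arbitrary code at install time)")
      if not meta.get("repository"):
          signals.append("no linked source repository")
      name = meta.get("name", "")
      for known in POPULAR:
          if name != known and abs(len(name) - len(known)) <= 1 and known in name:
              signals.append(f"name resembles popular package '{known}'")
      return signals

  sample = '{"name": "lodashs", "scripts": {"postinstall": "node x.js"}}'
  for s in risk_signals(sample):
      print("-", s)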

8. Defense automation and SOAR (security orchestration, automation, and response) capabilities

Over at the security operations center (SOC), security teams face an endless stream of alerts, and analysts must painstakingly investigate each one to determine whether the threat is material. False positives are time- and resource-consuming, and false negatives can leave harmful data breaches undetected. Companies like Dropzone AI provide a pre-trained AI agent to autonomously investigate alerts across security tools and data stacks, using a security-trained reasoning system to understand context, run tailored investigations, and generate summarizing reports on the incident.
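
In broad strokes, that investigation loop looks like the sketch below: enrich an alert with context from other tools, ask a model for a verdict, and write up the findings. It is an illustration under our own assumptions, not Dropzone AI’s implementation; call_llm and the lookup helpers are hypothetical stubs.

  # Minimal sketch of autonomous alert triage: enrich an alert with context from
  # other tools, ask a model for a verdict, and write up the investigation.
  # Illustrative only; `call_llm` and the lookup helpers are hypothetical stubs.
  from typing import Dict

  def call_llm(prompt: str) -> str:
      return "verdict: benign\nreasoning: [model reasoning would appear here]"

  def lookup_user_activity(user: str) -> str:
      return f"recent logins for {user}: [identity-provider data would go here]"

  def lookup_endpoint_events(host: str) -> str:
      return f"process events on {host}: [EDR data would go here]"

  def investigate(alert: Dict[str, str]) -> str:
      context = "\n".join(
          [
              f"Alert: {alert['title']}",
              lookup_user_activity(alert["user"]),
              lookup_endpoint_events(alert["host"]),
          ]
      )
      prompt = (
          "You are a SOC analyst. Decide whether this alert is malicious, "
          "suspicious, or benign, and justify your verdict.\n\n" + context
      )
      return f"# Investigation: {alert['title']}\n\n{call_llm(prompt)}"

  print(investigate({"title": "Impossible travel login", "user": "jdoe", "host": "laptop-42"}))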

It’s too early to measure GenAI’s impact on cybersecurity, but it will be profound. Evolution is a constant: Attacks will increase in sophistication, cybercrime will adapt and expand, and the need to protect IT environments will remain critical. The team at Menlo is actively investing in cybersecurity companies, specifically those building solutions that leverage GenAI tooling. We’d love to hear from you if you’re building in this space!


We thank the experts who participated in our research panel and shared their valuable insights and expertise.

*Backed by Menlo Ventures