AI Agents: Augmenting Vulnerability Analysis and Remediation
2025-06-21, Track 1 (UC Conf. Rm. A, 2nd Floor)

This talk will explore the tangible impact of LLMs in cybersecurity, focusing on how agentic patterns can be used to automate proactive security workflows at scale.

We’ll analyze real-world case studies to show where AI agents excel and where they fall short. Specifically, we’ll discuss how AI agents can augment traditional human-driven processes to expedite vulnerability identification, assessment, and remediation.


The AI hype cycle is in full swing, making it increasingly difficult to separate reality from marketing. To help cut through the noise, we’ll walk through real-world examples of agentic workflows that augment vulnerability analysis and remediation, showing where they excel and where they fall short.

Topics We'll Cover:
1. AI Agent Architecture Overview
The LLM landscape continues to change at incredible speed, which makes integrating LLMs difficult. Which model should we use for which workloads? Which agentic framework is "best"? How should we architect AI agents so they are easy to modify when a new and improved model is released? We’ll answer these questions and propose solutions.
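
To make the architecture discussion concrete, here is a minimal sketch (an illustration for this abstract, not the talk’s actual implementation) of one common pattern: hiding each vendor’s SDK behind a small shared interface so that swapping in a newer model becomes a configuration change rather than an agent rewrite. All class names, method signatures, and model names below are assumptions.

```python
# A minimal sketch of a provider-agnostic LLM layer; every class,
# method, and model name here is an illustrative assumption, not a
# real SDK.
from abc import ABC, abstractmethod


class LLMClient(ABC):
    """Common interface each model provider must satisfy."""

    @abstractmethod
    def complete(self, system: str, prompt: str) -> str:
        ...


class OpenAIClient(LLMClient):
    def __init__(self, model: str = "gpt-4o-mini"):  # placeholder name
        self.model = model

    def complete(self, system: str, prompt: str) -> str:
        raise NotImplementedError("would call the vendor SDK here")


class AnthropicClient(LLMClient):
    def __init__(self, model: str = "claude-sonnet"):  # placeholder name
        self.model = model

    def complete(self, system: str, prompt: str) -> str:
        raise NotImplementedError("would call the vendor SDK here")


# Agents depend only on LLMClient, so adopting a new and improved
# model is a one-line registry change rather than an agent rewrite.
MODEL_REGISTRY: dict[str, LLMClient] = {
    "triage": OpenAIClient(),           # cheap, high-volume workload
    "deep_analysis": AnthropicClient(), # slower, higher-quality workload
}
```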

2. Using LLMs to Augment CVE Analysis
AI isn't replacing cybersecurity professionals anytime soon, but it can make you more efficient by automating certain tasks. We’ll discuss automating specific security workflows with AI agents, including:

Triage: Given a result from a vulnerability scanner or tool, how can an AI agent help determine whether it’s a false positive or a true positive?

Analysis: Is there public proof-of-concept code related to this CVE? We’ll review how LLMs with Retrieval-Augmented Generation ("RAG") can be used as a super-powered Google to identify and analyze the information you need to quickly make a security decision.

Remediation: We’ll discuss how to integrate CMDB and patch-management tools into RAG systems so AI agents can generate customized remediation plans, as sketched below.
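
As a rough illustration of these workflows (a sketch under assumptions, not the speaker’s implementation), a triage agent might combine a scanner finding with retrieved CVE and CMDB context and ask the model for a structured verdict. `retrieve_context` is a hypothetical stand-in for a RAG lookup, and `llm` is assumed to implement the `LLMClient` interface sketched earlier.

```python
import json
from dataclasses import dataclass


@dataclass
class TriageVerdict:
    verdict: str      # "true_positive" | "false_positive" | "needs_human"
    rationale: str
    remediation: str


def triage_finding(llm, retrieve_context, finding: dict) -> TriageVerdict:
    # retrieve_context is a hypothetical RAG lookup over CVE feeds,
    # public PoC repositories, and CMDB/patch-management records.
    context = retrieve_context(finding["cve_id"], finding["asset_id"])
    prompt = (
        "Scanner finding:\n" + json.dumps(finding, indent=2) +
        "\n\nRetrieved context (CVE details, PoC status, asset record):\n" +
        context +
        "\n\nUsing ONLY the context above, reply with JSON containing the "
        "keys verdict, rationale, and remediation. If the context is "
        "insufficient, set verdict to needs_human."
    )
    raw = llm.complete(system="You are a vulnerability triage assistant.",
                       prompt=prompt)
    data = json.loads(raw)  # production code would validate this schema
    return TriageVerdict(data["verdict"], data["rationale"],
                         data["remediation"])
```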

3. Challenges in AI-Driven Security Operations
While AI has already transformed industries like customer support and sales, cybersecurity presents unique challenges that make teams hesitant to fully trust AI-driven analysis. Unlike other fields where minor AI mistakes may be tolerable, cybersecurity has limited room for error: a misstep can lead to a catastrophic outage or a major security breach.

Strategically speaking, how do we ensure that AI-driven security decisions remain explainable, verifiable, and reliable? Technically speaking, how can we limit LLM hallucinations to build trust in scenarios where even a small error could have serious consequences?

We’ll discuss strategies to minimize hallucinations, including using RAG to ground LLM outputs in authoritative sources and implementing strict validation layers before acting on AI-generated recommendations.
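
One way such a validation layer could look, continuing the hypothetical `TriageVerdict` from the sketch above: before anything acts on the model’s output, check that the verdict is an allowed value, that every CVE ID it cites actually appears in the grounding context, and that the remediation plan is non-empty.

```python
import re

ALLOWED_VERDICTS = {"true_positive", "false_positive", "needs_human"}


def validate_verdict(verdict: "TriageVerdict",
                     grounding_context: str) -> list[str]:
    """Return a list of violations; an empty list means the output
    passed every check and may move to the next stage."""
    problems = []
    if verdict.verdict not in ALLOWED_VERDICTS:
        problems.append(f"unknown verdict: {verdict.verdict!r}")
    # Every CVE the model mentions must exist in the retrieved sources;
    # this catches one common class of hallucination.
    for cve in set(re.findall(r"CVE-\d{4}-\d{4,7}", verdict.rationale)):
        if cve not in grounding_context:
            problems.append(f"{cve} not found in grounding context")
    if not verdict.remediation.strip():
        problems.append("empty remediation plan")
    return problems
```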

We’ll also discuss human-in-the-loop systems that allow security analysts to validate AI outputs before execution, confidence-scoring techniques that clarify why the AI reached a certain conclusion, and audit trails that ensure AI-driven decisions can be reviewed and justified.
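
A minimal human-in-the-loop gate with an audit trail might look like the following sketch; the confidence value is assumed to come from the model’s structured output or a separate scorer, and the threshold is an arbitrary placeholder.

```python
import json
import time

CONFIDENCE_THRESHOLD = 0.9  # arbitrary placeholder; tune to risk tolerance


def route_decision(verdict: "TriageVerdict", confidence: float,
                   audit_log_path: str) -> str:
    """Append an auditable record, then decide whether the action
    auto-executes or waits for an analyst."""
    record = {
        "ts": time.time(),
        "verdict": verdict.verdict,
        "rationale": verdict.rationale,
        "confidence": confidence,
    }
    with open(audit_log_path, "a") as log:  # append-only audit trail
        log.write(json.dumps(record) + "\n")

    # Only the lowest-risk action is automated; everything else is
    # queued for human review before execution.
    if verdict.verdict == "false_positive" and confidence >= CONFIDENCE_THRESHOLD:
        return "auto_close"
    return "queue_for_analyst"
```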

Peyton has spent 10+ years in cybersecurity with an emphasis on Red Team, Incident Response, and Threat Intelligence. He was a member of CrowdStrike Services from 2018 to 2023, where he split time between Incident Response and Red Team. He was a first responder to many of the most sophisticated nation-state and e-crime cyber intrusions in the world. He also performed numerous red team exercises across a range of industry verticals and breached 20+ Fortune 1000 organizations.

Today, Peyton is the founder and CEO of Specular, where he's focused on combining cybersecurity and AI to augment the identification, assessment, and remediation of security vulnerabilities and misconfigurations.