Black Hat Europe 2025: Inside the Defenders AI Advantage

AI is reshaping both offense and defense in cybersecurity, but defenders’ deep experience and knowledge gives them the edge against their cyberadversaries

  • Attackers are using AI primarily to enhance tactics including social engineering and recon.
  • Defenders currently hold the advantage with mature AI-driven behavioral analytics and predictive models that enable defenders to stay ahead of the threat.
  • While an AI-powered zero-day apocalypse has not happened yet, all of the ingredients are there for it to become a reality.

AI is no longer a hypothetical in cybersecurity. Yet as we shared at Black Hat Europe 2025 during our session, “AI Unleashed: Witness the Next Generation of Cyber Defense and Offense,” attackers and defenders are using it with vastly different levels of maturity and effectiveness. The discussion, which took place in front of a standing room only crowd, underscored a key reality: while AI-driven attacks are emerging, defenders still have a critical edge.

How adversaries are using AI today

AI has quickly become a powerful tool for threat actors, even if it has not yet delivered the “zero‑day apocalypse” some fear. Rather than inventing entirely new attack classes, adversaries are using AI to craft and accelerate what already works. According to our research, the most common adversary AI use cases include:

  • Technology-assisted social engineering. The most common use case of LLMs today for attackers. AI enables attackers to craft convincing, phishing emails and messages, increasing the likelihood of successful attacks
  • Automated attack mechanics. Attackers are using AI to partially automate their attack chains, using it to author straightforward pieces of code such as PowerShell or batch scripts. 
  • Dataset poisoning. Attackers are experimenting with corrupting training data and using prompt injection to nudge AI systems into exfiltrating data or bypassing safety policies.

During our Black Hat Europe 2025 session, we shared that prompt injection attacks have become a major concern in adversarial AI, manipulating models into actions like data theft or policy evasion. Proof-of-concept malware like PromptLock shows where this is heading: ransomware that drops its own embedded LLM  to write Lua scripts on the fly and autonomously decide on which files to exfiltrate and encrypt.

More worrying is an AI-powered espionage campaign recently discovered by Anthropic. The attackers used Anthropic’s Claude AI to assist with reconnaissance, vulnerability discovery and exploitation, lateral movement, credential theft, and data exfiltration. The attackers managed to automate 80 to 90% of the work involved in this attack campaign. Although the attacks only succeeded against a handful of the 30 organizations targeted, the campaign demonstrates how advanced threat actors may begin leveraging AI in future. Success rates could increase as AI agents become more powerful and attackers develop their knowledge on how best to exploit them and bypass safeguards. 

Why defenders still have the edge

While adversaries have made gains with AI, defenders are much farther ahead. AI’s ability to process massive amounts of data and generate natural language responses has made it an indispensable tool for threat detection, incident response, and vulnerability management. ​ AI-powered systems can analyze complex attack patterns, predict potential threats, and automate responses, enabling security teams to stay ahead of attackers. While AI has only garnered headlines the last few years, it’s not a new technology. In fact, Symantec and Carbon Black together have done almost 30 years of work in machine learning and AI.  

Reducing cognitive load on SOC analysts is one of our primary goals. We do not want analysts that are burned out constantly working thousands a shift. AI in modern security platforms is already doing work that humans could never do at scale, including:

  • Correlating hundreds of thousands of attack chains to predict the next intrusion and block them before they execute.
  • Summarizing complex endpoint or SOC incidents into analyst-ready narratives, cutting triage time and reducing cognitive load.
  • Powering assistants that unify telemetry, threat intelligence, and investigation tooling into a single, guided interface for analysts.

In fact, last year we introduced the industry’s first incident prediction capability that extends Adaptive Protection, a unique feature of Symantec Endpoint Security Complete (SES-C), to help stop Living Off the Land attacks. Using AI, we were able to create a predictive model that is able to take any attack data that we start to receive,  predict the attackers’ next four steps, and share that information with the analysts. We saw, in our real world deployment of the technology, accurate predictions available in 80% of the incidents. So instead of just isolating endpoints and reimaging them, you're able to halt the next stage of the attack, surgically stopping the attacker, who will move on to the next target that's not you.

What’s next

The nightmare scenario - AI-powered attacks of hitherto unseen levels of sophistication - has yet to arrive. When it does,  defenders will need new detection models capable of spotting subtle, emergent behaviors rather than known signatures or straightforward anomaly spikes.

Two fronts will define whether defenders maintain their edge:

  • Hardening public AI platforms: Many current AI-enabled attacks piggyback on public models whose safeguards can be bypassed with relatively simple prompt engineering, making them tempting tools for threat actors. Stronger security controls and robust abuse monitoring at the provider level could raise attacker costs and shrink the practical value of these platforms for offensive use
  • Operationalizing AI in the SOC: Security teams that successfully embed AI into workflows—alert triage, investigation, containment, and hunting—will be better positioned to withstand higher alert volumes and more automated adversaries. That includes using AI assistants to strip away toil, applying predictive models to prioritize response, and continuously training systems on fresh, real-world attacks. 

To learn more about how AI can benefit your SOC operations, register for our upcoming global webinar, Reality vs. Hype: How To Put AI To Work in Your SOC Today, scheduled for Wednesday January 28 and Thursday January 29 (depending on your time zone.) Attendees will get early access to our upcoming eBook for AI in the SOC.

You might also enjoy

Explore Upcoming Events

Find experts in the wild

See what's next