Arms Race: AI's Impact on Cybersecurity

New whitepaper explores how both attackers and defenders are using the latest AI technologies to achieve their goals.

  • Attackers are already using AI, particularly to create phishing materials and write code.
  • The arrival of Agentic AI is likely to affect the quantity of attacks more than their quality.
  • Defenders have longer experience in using AI.

The emergence of artificial intelligence (AI) has shaken up the world of cybersecurity for both defenders and cybercriminals, presenting new challenges as well as powerful defensive opportunities, as the Symantec and Carbon Black Threat Hunter Team explores in a new whitepaper.

The rapid adoption of generative AI (Gen AI) by malicious actors has accelerated this arms race between attackers and defenders, with all the evidence pointing to AI-assisted attacks becoming increasingly sophisticated and widespread. However, the same technology is simultaneously empowering defenders with advanced threat detection and response capabilities.

In this new whitepaper, we explore the variety of ways threat actors have begun exploiting Gen AI to enhance their malicious activities. We look at these developments under three main headings: AI and phishing, AI and malware development, and the emergence of Agentic AI. We also explore how defenders have used, and are continuing to use, AI to enhance cybersecurity.

AI and phishing

One of the ways we have seen attackers use Large Language Models (LLMs) most effectively is in the creation of phishing materials: emails, lure documents, and the like. LLMs help many attackers overcome one of their key weaknesses, which is that they are often non-native English speakers trying to target native English speakers. LLMs can address this by offering natural language translation, drafting emails, correcting grammar, adjusting tone, and more.

Most LLMs are now built with safety features that try to stop them being used for malicious purposes, but cybercriminals continue to look for ways to abuse the software for their own ends. While an LLM won’t simply “write a phishing email” if asked, prompts can be crafted in such a way as to get it to produce an email that could be used for phishing.

Figure 1: Using a prompt to get ChatGPT to write a phishing-style email

LLMs have also further lowered the barrier to entry for phishing attacks by making phishing-as-a-service (PaaS) offerings even more straightforward to use and tailor, widening the pool of potential attackers to include more lower-skilled individuals.

In this whitepaper, we demonstrate how we were able to get Gemini and ChatGPT to help us write phishing-style emails using simple prompts, as well as how the translation capabilities of LLMs can be leveraged by malicious actors to help them make their phishing campaigns more effective.

AI and malware development

Attackers have also attempted to leverage the capabilities of Gen AI to develop malware, with varying degrees of success.

Researchers from Symantec and Carbon Black published a blog in July 2024 detailing an observed increase in attacks that appeared to use LLM-generated malicious code to download payloads. The campaign involved phishing emails containing code that downloaded various payloads, including Rhadamanthys, NetSupport, CleanUpLoader (Broomstick, Oyster), ModiLoader (DBatLoader), LokiBot, and Dunihi (H-Worm). Analysis of the scripts used to deliver the malware in these attacks suggested they were generated using LLMs.

While it can often be difficult to determine whether something was written by a human or produced by an LLM, certain characteristics can point towards it being machine generated. The scripts’ structure, the comments after each line of code, and the choice of function and variable names can all be strong clues that the threat actor used Gen AI to create the malware.

Figure 2: Clues in the malware’s code can indicate that AI was used to help write it
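
How far such stylistic clues can be turned into an automated signal is an open question, but the rough Python sketch below illustrates the general idea: scoring a script on how comment-heavy it is and how verbose its identifier names are. It is purely illustrative, with arbitrary weights and a made-up sample, and is not the detection logic used by the Threat Hunter Team.

    # Illustrative heuristic only (not Symantec's detection logic): score a script
    # for characteristics often seen in LLM-generated code, such as a comment on
    # nearly every line and long, descriptive identifier names.
    import re

    def llm_style_score(source: str) -> float:
        """Return a rough 0..1 score; higher means more 'LLM-like' in style."""
        lines = [l for l in source.splitlines() if l.strip()]
        if not lines:
            return 0.0

        # Share of non-empty lines that carry a comment.
        commented = sum(1 for l in lines if "#" in l or "//" in l)
        comment_ratio = commented / len(lines)

        # Average identifier length; LLM output tends to favour verbose, descriptive names.
        identifiers = re.findall(r"[A-Za-z_][A-Za-z0-9_]{2,}", source)
        avg_len = sum(map(len, identifiers)) / len(identifiers) if identifiers else 0.0
        verbosity = min(avg_len / 20.0, 1.0)

        # Blend the two signals; the weights are arbitrary and for illustration only.
        return round(0.6 * comment_ratio + 0.4 * verbosity, 2)

    sample = (
        "# Download the payload from the remote server\n"
        "download_payload_from_remote_server(url)  # fetch the file\n"
        "# Execute the downloaded payload\n"
        "execute_downloaded_payload(path)\n"
    )
    print(llm_style_score(sample))  # the comment-heavy, verbose sample scores well above a terse script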

In March 2025, researchers at Tenable investigated whether DeepSeek R1 could help them develop malware such as keyloggers and ransomware. DeepSeek initially refused, due to guardrails aimed at preventing it from being used for malicious purposes, but the researchers were able to overcome these qualms relatively easily by telling DeepSeek they were creating the malware for “educational purposes only.” The researchers did manage to convince DeepSeek to help them develop a keylogger and ransomware; however, the code for both had to be manually edited by a human before it worked successfully.

Another notable development, documented in the 2025 Cato CTRL Threat Report, was a technique the researchers dubbed “Immersive World,” which used narrative engineering to bypass LLMs’ security controls. Using this technique, a researcher who had no coding experience was able to develop a fully functional infostealer targeting Google Chrome. In that case too, however, the LLM required some feedback and guidance from humans to successfully develop the infostealer. While LLMs can assist with and speed up the process of developing code and writing malware, they are not yet sophisticated enough for humans to be removed from the development process entirely.

While this whitepaper primarily focuses on attackers attempting to exploit legitimate AI technology for their own ends, we also discuss some of the attacker-developed LLMs that have emerged, such as Xanthorox AI, which promises its users an “unmonitored, and highly customizable AI experience.” Powerful, attacker-controlled tools like this could prove invaluable to cybercriminals, lowering the barriers to entry for carrying out attacks and allowing them to conduct more malicious activity in shorter timeframes.

Agentic AI: New avenues of attack?

The arrival of AI agents, or Agentic AI, in 2025 heralded new possibilities for abuse by attackers. An agent is built on top of an LLM and can reason and autonomously perform tasks with minimal user involvement. Their introduction creates the possibility that attackers could leverage AI to execute malicious actions rather than simply using it to advise or assist. The Threat Hunter Team carried out a research project using OpenAI’s ChatGPT Agent (formerly Operator) shortly after it was introduced earlier this year, aiming to establish whether the agent could be used to carry out an attack end-to-end with minimal human intervention. For the purposes of the exercise, we asked Operator to: identify who performed a specific role in our organization; find out their email address; create a PowerShell script designed to gather system information; and email it to them using a convincing lure. With a small amount of tweaking and guidance, Operator was able to carry out the task relatively autonomously.

Agents such as Operator demonstrate both the potential of AI and some of its possible risks. While agents may ultimately enhance productivity, they also present new avenues for attackers to exploit. The technology is still in its infancy, and the malicious tasks it can perform are relatively straightforward compared to what a skilled attacker could do.

However, the pace of advancement in this field means it may not be long before agents become far more powerful. It is easy to imagine a scenario in which an attacker simply instructs one to “breach Acme Corp” and the agent determines the optimal steps before carrying them out. This could include writing and compiling executables, setting up command-and-control infrastructure, and maintaining active, multi-day persistence on the targeted network. Such functionality would massively reduce the barriers to entry for attackers.

Gen AI is not without its own risks and vulnerabilities and, like almost all online technologies, is open to abuse. In the whitepaper, we discuss some of the ways we have seen AI being manipulated, as well as some of the malicious attacks we have seen leveraging AI.

Leveraging AI in defense: Decades of experience

Importantly, we also look at how AI is used to help defenders guard against malicious activity, whether AI-powered or not. From the introduction of Bloodhound heuristic technology to Incident Prediction, Symantec and Carbon Black have used AI technology to stay ahead of attackers for many years. Incident Prediction does this quite literally, leveraging AI in a unique way to identify and disrupt sophisticated attacks before they happen. Trained on a catalog of more than 500,000 attack chains built by the Threat Hunter Team, Incident Prediction puts the advantage back in defenders’ hands by predicting attackers’ behaviors, preventing their next move in the attack chain (even when they are using living-off-the-land techniques), and quickly returning the organization to its normal state.
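
To illustrate the general principle, rather than how Incident Prediction itself is implemented, the minimal Python sketch below learns from a handful of made-up attack chains (expressed as MITRE ATT&CK technique IDs) which step tends to follow which, and then suggests the likely next move once part of a chain has been observed. The chains, counts, and probabilities are all hypothetical assumptions for the sake of the example.

    # Minimal sketch of next-step prediction over attack chains. This is an
    # illustrative toy, not Incident Prediction's actual model or training data.
    from collections import Counter, defaultdict

    # Hypothetical training data: each list is one observed attack chain of
    # MITRE ATT&CK technique IDs.
    attack_chains = [
        ["T1566", "T1059", "T1047", "T1021"],  # phishing -> scripting -> WMI -> lateral movement
        ["T1566", "T1059", "T1003", "T1041"],  # phishing -> scripting -> credential dumping -> exfiltration
        ["T1190", "T1059", "T1047", "T1486"],  # exploit -> scripting -> WMI -> ransomware
    ]

    # Count first-order transitions between consecutive techniques.
    transitions = defaultdict(Counter)
    for chain in attack_chains:
        for current, nxt in zip(chain, chain[1:]):
            transitions[current][nxt] += 1

    def predict_next(technique: str):
        """Return candidate next techniques with their observed probabilities."""
        counts = transitions[technique]
        total = sum(counts.values())
        return [(t, round(c / total, 3)) for t, c in counts.most_common()] if total else []

    # After seeing command/script execution (T1059), the most probable follow-on
    # steps can be blocked before the attacker takes them.
    print(predict_next("T1059"))  # [('T1047', 0.667), ('T1003', 0.333)]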

Learn about Incident Prediction and the myriad other ways Symantec and Carbon Black are using AI to help protect our customers by reading our whitepaper.
