AI Alert
AI-Generated Phishing: The Cost Collapse of Spearphishing
threat-brief


Crafting a convincing, personalized phishing email once required hours of research per target. Large language models have reduced that cost to seconds. This post examines the economics of AI-assisted phishing, 2024-2025 incident data, and what defenders can measure.

By Theo Voss · 8 min read

Spearphishing — personalized, targeted phishing — has always had a cost asymmetry in the defender’s favor. Crafting a convincing email that references the target’s recent activities, uses their colleagues’ names correctly, matches their organization’s communication style, and arrives at the right moment required hours of open-source intelligence (OSINT) research per target. Threat actors running spearphishing campaigns at scale faced a real bottleneck in the human labor required to personalize attacks.

Large language models have effectively eliminated that bottleneck. The economics of spearphishing have changed fundamentally, and the 2024-2025 incident data is beginning to reflect this change.

The Old Economics vs. The New

Before LLMs: A financially motivated threat actor building a business email compromise (BEC) campaign would spend hours per high-value target — researching the target’s role, their manager’s name, recent company events, typical email formatting, active vendor relationships. The per-target cost of a high-quality spearphish might be $50-200 in labor equivalent. This constrained campaigns to the highest-value targets.

With LLMs: The same research can be automated: scrape LinkedIn, public company filings, press releases, and social media. Feed the results to an LLM with a prompt like “Write a convincing email from [CEO name] to [CFO name] requesting an urgent wire transfer for [recent acquisition that appeared in news], using a style that matches these example emails from the company’s public communications.” The LLM produces a grammatically polished, contextually accurate email in seconds. The per-target cost has dropped to under $1 in API costs.
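The sub-$1 per-target figure can be sanity-checked with back-of-envelope token arithmetic. The token counts and per-token prices below are illustrative assumptions, not quoted rates from any provider:

```python
# Rough per-target cost estimate for LLM-generated spearphishing text.
# All numbers are illustrative assumptions for a sanity check.
INPUT_TOKENS = 3000       # scraped OSINT context plus style examples
OUTPUT_TOKENS = 500       # the generated email itself
PRICE_IN_PER_1K = 0.003   # USD per 1K input tokens (assumed)
PRICE_OUT_PER_1K = 0.015  # USD per 1K output tokens (assumed)

cost = (INPUT_TOKENS / 1000) * PRICE_IN_PER_1K + (OUTPUT_TOKENS / 1000) * PRICE_OUT_PER_1K
print(f"per-target generation cost: ${cost:.4f}")  # a few cents, not $50-200
```

Even if these assumed prices are off by an order of magnitude, the result stays comfortably under $1 per target, which is the asymmetry the rest of this post is about.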

This is not hypothetical. The UK’s National Cyber Security Centre documented this threat vector explicitly in their January 2024 AI threat report, noting that AI is “enabling relatively unskilled threat actors to carry out more effective access and information gathering operations, and will enhance the impact of social engineering attacks.”

2024-2025 Incident Data

Security vendors have begun reporting measurable changes in phishing campaign characteristics that align with LLM-assisted generation:

Grammar and spelling quality. Traditionally, non-native English speakers conducting phishing campaigns produced emails with characteristic grammatical errors — a useful (if imperfect) detection signal. Abnormal Security and Proofpoint both reported in 2024 that the proportion of phishing emails containing grammatical or spelling errors dropped significantly. Campaigns originating from threat actors previously identifiable by poor English quality began producing fluent, idiomatic text.

Personalization at volume. IBM X-Force’s 2025 Threat Intelligence Index documented BEC campaigns in 2024 that combined bulk volume with high degrees of personalization — a combination that was economically impossible before LLM-assisted generation. Targets received emails referencing specific recent transactions, named colleagues, and situationally appropriate context at volumes consistent with automated generation.

Multilingual expansion. Threat actors previously limited to their native language have expanded into other languages. Groups historically operating exclusively in Russian have been observed conducting high-quality English and German-language phishing campaigns. Groups previously limited to English have expanded into Japanese and Korean markets. LLM translation and generation enables language expansion without human translators.

Synthetic persona development. Researchers at multiple security vendors documented campaigns using AI-generated personas for the first time at scale in 2024: fake LinkedIn profiles with AI-generated photos, synthetic email histories, and LLM-maintained conversational backstories used for multi-step social engineering attacks (building rapport before the phishing ask).

Business Email Compromise: The Highest-Impact Vector

BEC — where attackers impersonate executives or trusted vendors to redirect payments — remains the highest-dollar-value cybercrime category reported to the FBI. The FBI’s 2024 IC3 report recorded nearly $3 billion in BEC losses in the United States alone.

LLM assistance amplifies BEC at both ends:

Volume and personalization. BEC attacks targeting mid-market companies (which may not have sophisticated email security) can now be produced at the volume previously reserved for generic phishing, with the personalization quality previously reserved for targeted attacks on high-value individuals.

Real-time adaptation. LLM-assisted BEC operators can maintain multi-turn email conversations that adapt to the target’s responses in real time, sustaining convincing impersonation across a thread rather than relying on a single email.

What Changed for Defenders

The quality improvement in AI-generated phishing degrades several traditional detection signals. Grammatical and spelling errors, stilted phrasing, and generic one-size-fits-all templates were never reliable indicators on their own, but they fed both automated filters and user suspicion; fluent, personalized LLM output removes all three at once.

Detection signals that remain meaningful are the ones content quality cannot fake: sender authentication failures (SPF, DKIM, and DMARC alignment), newly registered or look-alike sending domains, mismatches between display name and reply-to address, and — for BEC specifically — any request to change payment details, which should be verified out of band no matter how legitimate the email reads.
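Sender authentication is the most durable of these signals. A minimal sketch of surfacing it from message headers, using only the standard library — real filters would use a dedicated parser and evaluate DMARC alignment properly, so treat this as illustrative:

```python
# Minimal sketch: extract pass/fail sender-authentication signals from
# an email's Authentication-Results header. Parsing is deliberately
# simplified; production systems should use a purpose-built library.
from email import message_from_string

def auth_signals(raw_email: str) -> dict:
    msg = message_from_string(raw_email)
    results = msg.get("Authentication-Results", "").lower()
    return {
        "spf_pass": "spf=pass" in results,
        "dkim_pass": "dkim=pass" in results,
        "dmarc_pass": "dmarc=pass" in results,
    }

# A fluently written BEC lure can still fail DKIM and DMARC.
sample = (
    "Authentication-Results: mx.example.com; spf=pass; dkim=fail; dmarc=fail\n"
    "From: ceo@example.com\n"
    "\n"
    "Urgent wire transfer needed before close of business.\n"
)
print(auth_signals(sample))
```

The point of the sketch: no matter how idiomatic the prose, an attacker who does not control the impersonated domain generally cannot produce passing DKIM signatures or DMARC alignment for it.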

For organizations following offensive AI capabilities, the AI Incidents database catalogs incidents where AI tools were used offensively, and the adversarialml.dev research digest covers academic work on AI-enabled attack capabilities.

Sources

  1. NCSC: The Near-Term Impact of AI on the Cyber Threat (2024)
  2. IBM X-Force Threat Intelligence Index 2025
  3. Abnormal Security: AI-Generated BEC Attack Report 2024
#phishing #spearphishing #social-engineering #llm-abuse #threat-intelligence #bec #enterprise-security