AI Alert

Deepfake Cybersecurity Incidents: Five Confirmed Cases and the Patterns They Expose

A working catalog of confirmed deepfake cybersecurity incidents from 2024 and 2025 — from Arup's $25M loss to Ferrari's averted scam — and the controls that decided each outcome.

By AI Alert Desk · 8 min read

Deepfake cybersecurity moved from theoretical risk to a documented loss category in early 2024, when a single sequence of transactions at engineering firm Arup moved $25 million out of corporate accounts after a synthesized video call. The pattern has not stopped. Confirmed cases now span luxury goods, advertising holding companies, and engineering, and an organized state-affiliated workforce in North Korea is using deepfake interviews to obtain remote IT roles inside US companies. This post catalogs five publicly confirmed deepfake cybersecurity incidents from 2024 and 2025, identifies what each attack actually targeted, and isolates the controls that decided the outcome.

For broader context on AI-enabled fraud and adversarial AI incidents, AI Incidents Org maintains an open record of confirmed cases.

Case 1: Arup — $25 million wire fraud via multi-party deepfake video call

In January 2024, a finance employee in Arup’s Hong Kong office completed 15 wire transfers totaling HK$200 million (approximately $25.6 million) to five Hong Kong bank accounts after participating in a video conference that featured deepfake representations of the company’s CFO and several colleagues. The transfers were not flagged until the employee later raised the meeting with a senior UK contact who had no knowledge of it. The case was confirmed publicly by Arup in May 2024 and reported by CNN Business. No funds have been recovered. The initial access vector was a spear-phishing email; the deepfake call functioned as the confirmation channel that defeated the employee’s initial suspicion. A World Economic Forum retrospective identified the core procedural failure: no mandatory out-of-band verification existed for transfers in the amount range actually moved.

Case 2: Ferrari — averted CEO deepfake voice scam

In July 2024, a Ferrari executive received WhatsApp messages and a call featuring what appeared to be CEO Benedetto Vigna’s voice, urging confidentiality on an acquisition that needed immediate action. The executive picked up on slight intonation artifacts in the cloned voice and asked the caller the title of a book the real Vigna had recently recommended. The attacker could not answer. The call ended. As reported by Fortune, no funds moved and no information was disclosed. The control that worked was a shared knowledge challenge that did not exist in any digital record an attacker could obtain through open-source collection.

Case 3: WPP — unsuccessful deepfake video call targeting an executive

In May 2024, attackers built a WhatsApp account using a publicly available image of WPP CEO Mark Read and arranged a Microsoft Teams meeting with another senior WPP executive. The meeting included an AI-cloned voice of Read and used YouTube footage to construct a video presence. The attackers reinforced the impersonation through Teams chat while soliciting personal details and asking the target to set up a new business entity. The target identified the inconsistencies and disengaged. The case is documented as Incident 983 in the AI Incident Database. WPP later confirmed no money or information was lost.

Case 4: North Korean IT workers — deepfake interviews for credential access

The FBI’s 2025 alert on North Korean IT Worker Threats to U.S. Businesses describes an organized state-sponsored pattern. Operatives use AI face-swap and voice tools to pass remote video interviews for legitimate IT roles at US companies, then operate those roles using laptop farms run by US-based facilitators. The Department of Justice in December 2024 indicted 14 North Koreans connected to schemes that generated at least $88 million across six years. A coordinated June 2025 enforcement action included searches of 29 laptop farms across 16 states. The FBI estimates more than 300 US firms have unknowingly hired operatives. This case differs from the others in scope: the deepfake is not the entire attack, but the credentialing step that opens the network. A countermeasure that has worked in practice, reported by The Register, is a directly political interview question, such as asking the candidate to say something critical of Kim Jong Un: operatives typically refuse or drop the call, a reaction observably distinct from a Western applicant’s.

Case 5: Cross-vendor voice cloning attempts on finance functions

Through 2024 and 2025, security teams at financial services firms recorded a steady volume of attempted attacks following the Arup template at smaller scales. Few of these reach public disclosure because they are intercepted before money moves. The aggregate pattern is consistent: an inbound email or messaging-app contact from a supposed senior executive, followed by a voice or short video confirmation that overcomes the receiver’s initial channel suspicion. The January 2025 NSA/CISA/FBI joint advisory titled Strengthening Multimedia Integrity in the Generative AI Era explicitly recommends C2PA Content Credentials adoption for government and critical infrastructure media pipelines as a structural counter to this class of attack. Cryptographic provenance is the only signal in the chain that does not depend on the receiver detecting synthetic media manually.
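The mechanism the advisory points to can be shown with a toy sketch. This is not C2PA itself (which embeds manifests signed with X.509 certificate chains); it only illustrates the underlying idea: a signature bound to the media bytes at capture time lets a receiver verify integrity mechanically, without judging realism. The shared-key HMAC stands in for a public-key signature and the key name is purely hypothetical.

```python
import hashlib
import hmac

# Illustrative stand-in: real C2PA uses public-key signatures and
# certificate chains, not a shared secret.
SIGNING_KEY = b"capture-device-signing-key"  # hypothetical key

def sign_media(media_bytes: bytes) -> bytes:
    """Produce a provenance tag over the exact media bytes at capture."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).digest()

def verify_media(media_bytes: bytes, tag: bytes) -> bool:
    """Receiver-side check: does the tag match these exact bytes?"""
    return hmac.compare_digest(sign_media(media_bytes), tag)

original = b"\x00\x01 raw video frames \x02\x03"
tag = sign_media(original)

print(verify_media(original, tag))         # True: untouched media
print(verify_media(original + b"x", tag))  # False: any alteration breaks it
```

The structural point is that the check succeeds or fails on the bytes themselves, so the outcome does not depend on a human spotting synthetic artifacts in a convincing call.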

Patterns across the five cases

A few operational signals emerge from the outcomes:

- Out-of-band confirmation for financial workflows. The WEF retrospective identified its absence as Arup’s core procedural failure, and the intercepted Case 5 attempts were stopped before money moved.
- Shared-knowledge challenges for high-stakes executive communications. Ferrari’s book-recommendation question is the clearest confirmed win; the WPP target’s recognition of inconsistencies served the same function without a formal challenge.
- Identity proofing with liveness checks in hiring pipelines. This is the control category for the DPRK pattern, where the deepfake is the credentialing step rather than the payload.

The five-incident pattern points in one direction: the controls that decide deepfake cybersecurity outcomes are procedural and verification-based, not detection-based. Those three categories are where the publicly confirmed wins are concentrated. Organizations without them in place are running the Arup configuration.
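The first of those controls is simple enough to express as policy code. The sketch below is a hypothetical pre-transfer gate of the kind the WEF retrospective implies Arup lacked: any transfer above a threshold that was requested over a spoofable channel must be confirmed through an independently sourced contact before it can execute. The threshold, channel names, and field names are illustrative, not drawn from any real system.

```python
from dataclasses import dataclass

# Channels an attacker can spoof end to end. The Arup pattern used
# email plus a deepfake video call -- both are on this list.
SPOOFABLE_CHANNELS = {"email", "messaging_app", "video_call"}
OOB_THRESHOLD = 10_000  # illustrative amount, in account currency

@dataclass
class TransferRequest:
    amount: float
    request_channel: str          # how the instruction arrived
    oob_confirmed: bool = False   # confirmed via a contact taken from
                                  # the directory, never from the request

def may_execute(req: TransferRequest) -> bool:
    """Gate: large transfers requested over spoofable channels require
    out-of-band confirmation before funds move."""
    needs_oob = (req.amount >= OOB_THRESHOLD
                 and req.request_channel in SPOOFABLE_CHANNELS)
    return req.oob_confirmed or not needs_oob

# The Arup configuration: large amount, video-call instruction, no
# independent callback -- the gate refuses it.
print(may_execute(TransferRequest(25_600_000, "video_call")))        # False
print(may_execute(TransferRequest(25_600_000, "video_call", True)))  # True
```

The design choice worth noting is that the confirmation contact must come from a source independent of the request; a callback number supplied in the attacker’s own email defeats the control.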


Sources


  1. Fortune: Ferrari executive foils deepfake CEO voice scam
  2. CNN: Arup revealed as victim of $25 million deepfake scam
  3. AI Incident Database: Incident 983 (WPP CEO deepfake)
  4. FBI: North Korean IT Worker Threats to U.S. Businesses
  5. NSA/CISA/FBI: Strengthening Multimedia Integrity in the Generative AI Era
#deepfakes #incident-catalog #social-engineering #bec #dprk-it-workers #ai-security
