AI Alert
AI Alert

AI incidents and vulnerabilities — tracked, sourced, dated.

An incident and vulnerability tracker for AI/ML systems. Model leaks, training-data exposures, jailbreak disclosures, ML library CVEs, vendor breaches, and confirmed prompt-injection-in-the-wild — each entry linked to a primary source, dated, and tagged for filtering.

Posts
36
Topics
15
Updated
May 9
RAG Poisoning: Retrieval-Augmented Generation Attack Techniques
This week's headliner

RAG Poisoning: How Retrieval-Augmented Generation Systems Get Compromised

RAG systems inherit all the vulnerabilities of LLMs and add a new one: the retrieval corpus. Injecting malicious content into retrieved sources can hijack model behavior in ways users and operators don't see coming.

May 9, 2026
analysis

AI Agent Security Incidents: What Happened When Autonomous AI Went Wrong

A documented review of security incidents involving autonomous AI agents in 2024-2025, covering tool misuse, privilege escalation via injection, and the architectural patterns that created the exposure.

threat-brief

AI-Generated Phishing and the Collapse of Spearphishing Cost

Crafting a convincing, personalized phishing email once required hours of research per target. Large language models have reduced that cost to seconds. This post examines the economics of AI-assisted phishing, 2024-2025 incident data, and what defenders can measure.

analysis

What Red Teamers Are Finding in 2026: LLM Defense Gaps and Recurring Failure Modes

Enterprise LLM deployments are being red-teamed at scale for the first time. Security practitioners find consistent failure patterns — misconfigured system prompts, inadequate output filtering, and agentic privilege escalation paths operators didn't anticipate.

Almanac

CISA's Known Exploited Vulnerabilities Catalog: What It Tells Us About AI/ML Security Compromised Models on Hugging Face: Pickle Exploits in the Model Hub CVE Roundup: AI/ML Infrastructure Vulnerabilities — Q1 2026 Hugging Face Security Incidents: Malicious Models, Stolen Tokens, and Hub Exposure Major Jailbreak Techniques of 2025: Disclosures, Patches, and What Persists Model File Format Vulnerabilities: Pickle, ONNX, and the SafeTensors Migration OWASP LLM Top 10 2025: What Changed and Why It Matters Prompt Injection via Email: How AI Agents Get Hijacked Through Your Inbox
Subscribe

AI Alert — in your inbox

AI incidents and vulnerabilities — tracked, sourced, dated. — delivered when there's something worth your inbox.

No spam. Unsubscribe anytime.