An incident and vulnerability tracker for AI/ML systems: model leaks, training-data exposures, jailbreak disclosures, ML library CVEs, vendor breaches, and confirmed in-the-wild prompt injection. Each entry is linked to a primary source, dated, and tagged for filtering.
From the FBI's May 2025 warning on AI voice-cloning attacks targeting US officials to NIST's synthetic-content framework, here is what detection technology actually delivers, and where the gaps remain.
A practitioner-focused breakdown of generative AI risks mapped against NIST AI 600-1 and the OWASP Top 10 for LLMs: prompt injection, data poisoning, and supply-chain compromise, plus mitigation priorities.
A technical overview of machine learning security threats in 2026: NIST's adversarial ML taxonomy, MITRE ATLAS attack classes, the CVE-2025-62164 vLLM deserialization flaw, and an actionable defense posture for security teams.
A practitioner's overview of OpenAI security in 2026: its bug bounty program, its CVE Numbering Authority (CNA) status, the November 2025 Mixpanel breach, and what security teams building on OpenAI's platform need to monitor.
AI Alert is part of a 26-site editorial network covering adversarial ML, AI governance, defensive tooling, and ops engineering — all open access.
AI incidents and vulnerabilities: tracked, sourced, dated, and delivered when there's something worth your inbox.
No spam. Unsubscribe anytime.