Tools
A curated directory of 14 tools we use, evaluate, and recommend across the AI security landscape — with our take on each.
Incident Trackers
AI Incident Database (Partnership on AI)
Our take
The reference incident database. Submit reports; cite entries; cross-reference with our coverage.
OECD AI Incidents Monitor
Our take
Use as a credibility signal: the monitor aggregates incidents from multiple independent news sources, so an OECD entry means the incident has surfaced in more than one outlet.
MITRE ATLAS Case Studies
Our take
Fewer case studies than other databases, but each goes deeper on technical detail.
AVID (AI Vulnerability Database)
Our take
Complements MITRE ATLAS: AVID catalogs vulnerabilities and failure modes under a taxonomy, where ATLAS maps adversary tactics and techniques. Use both.
Newsletters & Aggregators
tldrsec
Our take
Required reading. Curation quality is high; filters noise effectively.
Risky.Biz
Our take
The podcast is one of the few infosec shows that gets AI right. The newsletter is solid.
Embedded.ai
Our take
Less day-to-day operational; more policy and capability landscape.
Vendor Advisory Pages
OpenAI Trust Portal
Our take
Subscribe to RSS. Vendor-tier disclosures appear here before press coverage.
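The RSS subscription can be automated. A minimal polling sketch, assuming the portal exposes a standard RSS 2.0 feed (the feed URL below is a placeholder, not a real endpoint; check the portal itself for the actual one):

```python
# Minimal RSS poller: parse a feed and surface entry titles, dates, and links.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/trust/rss.xml"  # placeholder URL, not real

def parse_rss(xml_text: str) -> list[dict]:
    """Extract title/pubDate/link from each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [
        {
            "title": item.findtext("title", default=""),
            "date": item.findtext("pubDate", default=""),
            "link": item.findtext("link", default=""),
        }
        for item in root.iter("item")
    ]

def fetch_advisories(url: str = FEED_URL) -> list[dict]:
    """Fetch the feed over HTTP and parse it."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_rss(resp.read().decode("utf-8"))

# Offline demo on a small sample feed:
SAMPLE = """<rss version="2.0"><channel>
  <item><title>Advisory: example disclosure</title>
    <pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
    <link>https://example.com/advisory/1</link></item>
</channel></rss>"""

if __name__ == "__main__":
    for entry in parse_rss(SAMPLE):
        print(entry["title"], "-", entry["date"])
```

Run it on a schedule (cron, CI job) and diff against the last seen entries to get notified before press coverage lands.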
Anthropic Trust Center
Our take
Same playbook as OpenAI. Subscribe to updates.
Hugging Face Security
Our take
Most ML supply-chain CVEs land here first.
Regulatory Trackers
EU AI Act compliance tracker
Our take
Better than the official EUR-Lex source for following implementation pace.
Stanford HAI AI Index
Our take
The single best annual snapshot of the state of AI. Cite it freely.
Detection & Forensics
GPTZero
Our take
Detection accuracy is unreliable enough that we don't recommend using it for high-stakes decisions. Inform readers; don't gate on it.
DetectGPT
Our take
Better methodology than commercial tools, but it suffers from the same fundamental problem: detection is brittle to paraphrasing and editing.
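For context on why the methodology is better: DetectGPT's core observation is that model-generated text tends to sit near a local maximum of the model's log-probability, so small perturbations of it score lower on average, while human text shows no such curvature. A toy sketch of that criterion, with stubbed scoring and perturbation functions standing in for a real language model:

```python
# DetectGPT-style curvature criterion (toy sketch; a real detector would
# plug in an actual LM log-probability and a paraphrasing perturber).

def detectgpt_score(text, log_prob, perturb, n=20):
    """Original log-prob minus mean log-prob of n perturbed variants.
    Large positive values suggest model-generated text."""
    orig = log_prob(text)
    perturbed = [log_prob(perturb(text, i)) for i in range(n)]
    return orig - sum(perturbed) / len(perturbed)

# Deterministic stubs, purely for illustration:
def toy_log_prob(text):
    return -len(text)  # stand-in for an LM's log p(text)

def toy_perturb(text, i):
    return text + "!"  # stand-in for a mask-and-refill paraphrase

if __name__ == "__main__":
    print(detectgpt_score("abc", toy_log_prob, toy_perturb))
```

The brittleness noted above follows directly: a human paraphrase moves the text off the model's probability ridge, collapsing the score gap the criterion relies on.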