The timing is not subtle. In February, SaaS stocks took what the NCSC called "a billion-dollar wobble" as investors realised that an engineer with Claude or Cursor could replace a $50K/year subscription in two hours. The NCSC is not warning about a future risk. It is responding to something already happening inside your organisation — whether your security team knows about it or not.

This edition unpacks what the NCSC actually said, why it matters more for European enterprises than anyone else, and the five-layer governance framework that turns vibe coding from a shadow IT problem into a managed capability.

TL;DR
  • The NCSC says banning vibe coding is futile — the business case is too strong. Security must engage early or spend a decade playing catch-up, exactly like cloud adoption in 2005.

  • AI-generated code is "unreliable, difficult to maintain, and prone to security flaws" according to the NCSC — but the answer is governance, not prohibition.

  • The EU AI Act Article 4 already requires AI literacy for all staff operating AI systems. Vibe coding without governance is not just risky — in Europe, it may be non-compliant.

  • Anthropic shipped Claude Code "auto mode" this week — autonomous coding with fewer human approvals. The tools are accelerating faster than the governance frameworks around them.

Attio is the AI CRM for modern teams.

Connect your email and calendar and Attio instantly builds your CRM. Every contact, every company, every conversation — organized in one place. Then ask it anything. No more digging, no more data entry. Just answers.

The Brief

The NCSC's Real Message: You Already Lost the Ban Fight

NCSC CEO Richard Horne told RSAC attendees: "The attractions of vibe coding are clear. Disrupting the status quo of manually produced software that is consistently vulnerable is a huge opportunity, but not without risk of its own." The key word is "disrupting." The NCSC is not framing vibe coding as a threat to manage. It is framing it as an inevitability to govern.

The agency cited anecdotal evidence of developers building SaaS replacements in hours rather than renewing subscriptions. One startup received a renewal quote at double the price; an engineer vibe-coded a replacement the same afternoon. Multiply that across every enterprise with developers who have API access to Claude, GPT-4, or Gemini, and you have a shadow development problem that no acceptable-use policy will contain.

The NCSC's recommendation: stop writing policies that say "don't" and start building platforms that say "do, but safely."

Do now: Ask your CISO this week: "Do we know how many developers are using AI code generation tools? Do we have telemetry on it?" If the answer is no, you have a shadow IT problem you cannot see.

AI Code Tools Are Getting More Autonomous, Not Less

Anthropic shipped "auto mode" for Claude Code this week. The update allows the AI to execute multi-step coding tasks with fewer human approval gates. The framing from Anthropic: "more control, but kept on a leash." The practical reality: the leash is getting longer with every release.

This follows a pattern across every major AI lab. OpenAI's Codex, Google's Jules, and now Anthropic's Claude Code are all moving toward autonomous agents that write, test, and deploy code with minimal human intervention. Each update reduces the friction between "I have an idea" and "it is running in production."

For security teams, this is the core tension. Every reduction in friction that makes developers faster also reduces the surface area where human review catches vulnerabilities. The NCSC's companion blog made this explicit: within five years, AI-generated code will run in production that no human has ever reviewed.

Do now: Map your current CI/CD pipeline. Identify every point where a human currently reviews code before it reaches production. Then ask: which of these gates would survive if a developer used an AI agent to generate, test, and submit a pull request autonomously?

The Cloud Adoption Parallel the NCSC Wants You to Take Seriously

The NCSC drew a direct line between vibe coding today and cloud adoption in 2005. The parallel is instructive — and uncomfortable. Twenty years ago, security professionals who dismissed cloud as a fad spent the next decade dealing with misconfigurations, shared responsibility confusion, and shadow IT. The ones who engaged early shaped the shared responsibility model, influenced provider security standards, and built cloud security into architecture rather than bolting it on afterwards.

The NCSC's warning: "The landscape will evolve without this crucial input, as was arguably the case in the early years of cloud adoption." Translation: if security does not engage with vibe coding now, the defaults will be set without security input. And defaults, once established, are extremely difficult to change.

The European advantage here is real. DORA, NIS2, and the EU AI Act already create regulatory expectations around AI system governance, code quality, and operational resilience. European security teams have a framework to work from. The rest of the world is improvising.

Do now: Pull your organisation's cloud security policy from 2015. Note how many of those controls were reactive rather than proactive. Now ask: are we about to repeat the same pattern with AI-generated code?

Supply Chain Attacks Are Getting Worse — And Vibe-Coded Dependencies Make Them Harder to Trace

TeamPCP, a threat group with links to Lapsus$, compromised GitHub Action tags this week, then pivoted to NPM, Docker Hub, VS Code extensions, and PyPI. The attack chain exploited the trust relationships between open source packages — exactly the kind of dependencies that vibe-coded applications pull in without human review.

This is the intersection the NCSC did not say out loud but implied heavily: vibe coding does not just generate code. It generates dependency trees. An AI agent building a web application will pull in dozens of NPM packages, Docker base images, and third-party libraries. If the developer never reviewed the code, they certainly never reviewed the dependency manifest.

For enterprises running software composition analysis (SCA) tools, this is manageable — assuming the tools are in the pipeline. For teams where vibe-coded prototypes bypass the standard CI/CD process and go straight to production, it is a supply chain attack waiting to happen.

Do now: Verify that your SCA tooling covers every deployment path — not just the official CI/CD pipeline. If developers can deploy from local machines or AI agent outputs, those paths need the same scanning.

Agentic AI Governance Is No Longer Theoretical

SecurityWeek published a deep analysis this week on why agentic AI systems — including OpenClaw, the open-source agent framework backed by NVIDIA — need governance frameworks before they reach production. The argument: these platforms are shifting from passive recommendation tools to autonomous action-takers with real system access. An AI agent that can write code, execute commands, and modify infrastructure is not an assistant. It is an operator.

The EU AI Act's risk classification framework was designed before agentic AI went mainstream, but the principles apply directly. An autonomous coding agent that deploys to production without human review could qualify as high-risk under Annex III if it operates in critical infrastructure contexts. At minimum, Article 4's AI literacy requirements mean every developer using these tools needs to understand what they are deploying and the risks involved.

The governance gap is real: most enterprises have acceptable-use policies for AI chatbots but no governance framework for AI agents that act autonomously. The tools are shipping faster than the policies.

Do now: Check whether your AI governance policy covers autonomous agents — not just chatbots and copilots. If it only addresses "AI-assisted" workflows, it does not cover the agent paradigm. Update it.

LiteLLM Backdoored on PyPI — The Vibe Coding Supply Chain Attack Is Already Here

On March 24, TeamPCP — the same threat group behind this week's broader open-source campaign — published two backdoored versions of LiteLLM (1.82.7 and 1.82.8) to PyPI. LiteLLM is the Python library that lets developers route LLM API calls across providers. It sits in 36% of cloud environments. It has three million daily downloads. And for approximately three hours, anyone who installed or updated it got a credential harvester, a Kubernetes lateral movement toolkit, and a persistent backdoor.

The attack chain is a case study in cascading supply chain compromise. TeamPCP first compromised the Trivy GitHub Action (a security scanner, of all things). LiteLLM's CI/CD pipeline ran Trivy without a pinned version. The compromised action exfiltrated LiteLLM's PyPI publishing token from the GitHub Actions runner. TeamPCP then used that token to publish the backdoored packages. Version 1.82.8 was particularly insidious: it used Python's .pth file mechanism to execute malicious code on every Python invocation system-wide — not just when LiteLLM was imported.
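The unpinned-action failure that started this chain is cheap to close. A hypothetical workflow fragment showing the difference (the SHA below is a placeholder, not Trivy's actual release commit — resolve the real one from the action's repository before pinning):

```yaml
# Vulnerable: a mutable tag or branch the attacker can repoint after compromise
- uses: aquasecurity/trivy-action@master

# Safer: pin to a full commit SHA (placeholder shown), with the human-readable
# version kept as a comment so updates remain auditable
- uses: aquasecurity/trivy-action@0123456789abcdef0123456789abcdef01234567 # v0.28.0
```

A pinned SHA is immutable, so a later compromise of the tag cannot silently change what your pipeline runs; the trade-off is that you must update the pin deliberately, ideally via an automated dependency-update bot that opens a reviewable pull request.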

The payload targeted everything: AWS, GCP, and Azure credentials, SSH keys, Kubernetes configs, CI/CD secrets, database credentials, and cryptocurrency wallets. All exfiltrated over AES-256 encrypted channels to attacker-controlled domains disguised as legitimate infrastructure (models.litellm.cloud).

This is exactly the scenario the NCSC warned about. A developer vibe-codes an application that calls multiple LLM APIs. They install LiteLLM because that is what every tutorial recommends. They do not pin the version. They do not review the dependency. Three hours later, their cloud credentials are in someone else's hands.

Do now: Check whether LiteLLM versions 1.82.7 or 1.82.8 were ever installed in any environment you control. Run pip show litellm across your development, staging, and production environments. If either version was present, treat it as a credential compromise — rotate all secrets that were accessible from that machine.

The NCSC's "Full Court Press" Doctrine and What It Means for Enterprise Security

In his second RSAC address, Horne laid out a three-tiered cyber defence model: near (your own systems), mid (shared infrastructure like cloud), and far (disrupting attackers on their own networks). He argued that "no one action will solve it" and called for "sustained, collective pressure across multiple fronts."

For enterprise CISOs, the "mid" tier is the one that matters most right now. Shared infrastructure increasingly includes AI model providers, code generation platforms, and the open-source ecosystem that both depend on. Your attack surface is no longer just your own code and your own cloud — it includes the AI tools your developers use and the models those tools run on.

The practical implication: third-party risk management must now include AI tooling vendors. If your developers use GitHub Copilot, Claude Code, or Cursor, those vendors are in your supply chain. Their model training data, their security practices, and their update cadence all affect your risk posture.

Do now: Add AI code generation tools to your next third-party risk assessment cycle. Treat them as you would any other software vendor in your supply chain — because that is exactly what they are.

Hiring in 8 countries shouldn't require 8 different processes

This guide from Deel breaks down how to build one global hiring system. You’ll learn about assessment frameworks that scale, how to do headcount planning across regions, and even intake processes that work everywhere. As HR pros know, hiring in one country is hard enough. So let this free global hiring guide give you the tools you need to avoid global hiring headaches.

Builder Spotlight

EVERVAULT — Secure by Default, Not by Afterthought

Profiling teams building for the European AI reality.

The company: Evervault, Dublin, Ireland
What they do: Developer-first encryption platform — sensitive data is encrypted before it touches your infrastructure, eliminating an entire class of exposure.
Why now: When vibe-coded applications handle sensitive data without human review, "secure by default" stops being a design principle and becomes a survival requirement. Evervault bakes encryption into the platform layer so the code does not need to get it right.

Founded by Shane Curran — who won Ireland's BT Young Scientist prize at 17 for a quantum-resistant encryption project — Evervault just closed a $25M Series B led by Ribbit Capital with participation from Index Ventures, Sequoia, and Kleiner Perkins. The company processes over $5B in transaction volume annually, generates 100M+ encrypted tokens monthly, and integrates with 7,000+ banks and financial institutions. Customers report cutting PCI DSS compliance costs by $100K and achieving compliance 95% faster.

The connection to this week's theme is direct. The NCSC called for "guardrails baked into the platform layer, not bolted on after the fact." Evervault is exactly that pattern applied to data security. When an AI agent generates a payment flow or a data processing pipeline, Evervault ensures the sensitive data is encrypted by default — regardless of whether the code was written by a human, Claude, or Cursor. The developer does not need to implement encryption correctly. The platform handles it.

For enterprise teams implementing Layer 2 of the governance framework we described in the Deep Dive — platform-level guardrails that make secure the default, not the exception — Evervault is the model to study.

Deep Dive

The Five-Layer Governance Framework for Secure Vibe Coding

The NCSC's guidance is clear on the destination — govern vibe coding, do not ban it — but deliberately vague on the how. That is not a criticism. Prescriptive guidance from a national security agency ages poorly. But it leaves European enterprises in a familiar position: regulatory intent without operational specificity.

Here is a five-layer framework that translates the NCSC's principles into something a CISO can operationalise this quarter.

Layer 1: Model Provenance and Selection

The NCSC stated that "AI tools we use to develop code must be designed and trained from the outset so that they do not introduce or propagate unintended vulnerabilities." This is a model selection problem before it is a code review problem.

Not all code generation models are equal. Models trained on datasets that include vulnerable code will reproduce vulnerable patterns. Models fine-tuned on security-reviewed codebases produce measurably safer output. The difference is not marginal — research from Stanford and ETH Zurich has shown that developers using AI assistants produce less secure code than those writing manually, precisely because the models were not optimised for security.

Enterprises need a model approval process — a shortlist of sanctioned code generation models, evaluated against security benchmarks, with documented provenance. This is no different from approving which cloud providers or SaaS tools your organisation uses. The EU AI Act's transparency requirements (Article 13) and the requirement for technical documentation (Article 11) provide the regulatory backbone.

So what? If you do not choose which models your developers use, they will choose for themselves. And they will choose on speed, not security. Build the approved model list now, before shadow adoption makes it irrelevant.

Layer 2: Platform-Level Guardrails

The NCSC's recommendation to bake "guardrails into the platform layer, not bolted on after the fact" is the most operationally important sentence in either publication. It means security controls must live in the development environment, not in a policy document.

Concretely: if your organisation sanctions Claude Code or GitHub Copilot, configure them with organisation-level policies. Restrict which repositories agents can access. Enforce that generated code passes static analysis before it can be committed. Require dependency scanning on every AI-generated pull request. Block direct-to-production deployment paths that bypass review.

These are not new capabilities. Most enterprise development platforms already support policy-as-code, branch protection rules, and automated scanning gates. The difference is applying them explicitly to AI-generated code paths — which, in many organisations, are not yet distinguished from human-written code in the pipeline.
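What that distinction might look like as policy logic — a hypothetical sketch, not any platform's actual API. The idea of inferring AI authorship from a commit trailer (e.g. a "Generated-by:" line) is an assumed convention, not a standard:

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    ai_generated: bool    # e.g. inferred from a "Generated-by:" commit trailer (assumed convention)
    sast_passed: bool     # static analysis gate result
    sca_passed: bool      # dependency scan gate result
    human_approvals: int  # count of distinct reviewer approvals

def may_merge(pr: PullRequest) -> bool:
    """AI-generated changes face stricter gates than human-written ones:
    dependency scanning becomes mandatory, not optional."""
    if pr.ai_generated:
        return pr.sast_passed and pr.sca_passed and pr.human_approvals >= 1
    return pr.sast_passed and pr.human_approvals >= 1
```

In practice this logic lives in branch protection rules and merge-queue policy rather than custom code; the sketch just makes the asymmetry explicit — the AI-generated path gets every gate the human path gets, plus the ones humans were trusted to handle implicitly.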

So what? The cheapest security intervention is the one that happens before the code is written. Platform-level guardrails catch vulnerabilities at the point of generation, not at the point of deployment. Invest here first.

Layer 3: AI-Augmented Code Review

The NCSC acknowledged that human code review of all AI-generated output is already impractical and will become impossible. The alternative is not "no review" — it is AI-powered review. Use AI to check AI.

This is where the tooling market is moving fastest. GitHub's code scanning, Snyk's AI-powered SAST, and Semgrep's pattern matching are all evolving to handle AI-generated code patterns. The key difference from traditional SAST: AI-generated code tends to be syntactically correct but semantically vulnerable. It compiles. It passes unit tests. It introduces a subtle authentication bypass that a pattern-matching scanner misses.

The next generation of code review tools uses LLMs to understand intent, not just syntax. They can ask: "This function handles user authentication but does not validate the session token — is that intentional?" That is the kind of review that catches what static analysis cannot.

So what? Budget for AI-augmented code review tooling in your next security tooling refresh. The tools that catch AI-generated vulnerabilities are not the same tools that catch human-written ones. Your current SAST investment is necessary but not sufficient.

Layer 4: Dependency and Supply Chain Governance

This week's TeamPCP attack — compromising GitHub Actions, NPM, Docker Hub, VS Code, and PyPI in a single campaign — is a preview of what supply chain attacks look like in a vibe-coded world. When an AI agent generates an application, it makes dependency choices that a human developer might question but a non-technical product manager will not.

The governance layer here is software composition analysis (SCA) applied to every output path, not just the main CI/CD pipeline. If a developer can vibe-code a prototype on their laptop and deploy it to a staging environment, that path needs the same dependency scanning as the production pipeline. If an AI agent pulls in a Docker base image, that image needs the same vulnerability assessment as any other container in your registry.

DORA Article 28 already requires financial entities to assess concentration risk in ICT service providers. Extend that logic to AI-generated dependency trees. If every vibe-coded application in your organisation pulls from the same NPM ecosystem, you have concentration risk in your open-source supply chain.
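A toy illustration of that concentration check — counting how many of your applications depend on each package. The app names and dependency lists are invented; real SCA platforms do this from parsed manifests at scale:

```python
from collections import Counter

# Hypothetical flattened dependency lists, one per internal application
app_dependencies = {
    "billing-prototype": ["express", "lodash", "axios"],
    "ops-dashboard":     ["react", "lodash", "axios"],
    "intake-form":       ["express", "axios"],
}

def concentration(apps: dict[str, list[str]]) -> Counter:
    """Count how many distinct apps depend on each package."""
    counts: Counter = Counter()
    for deps in apps.values():
        counts.update(set(deps))  # de-duplicate within each app
    return counts

# Packages present in every app are single points of failure in your
# open-source supply chain — the concentration risk DORA asks about
counts = concentration(app_dependencies)
shared_everywhere = [pkg for pkg, n in counts.items() if n == len(app_dependencies)]
```

A package that appears in every vibe-coded application — as LiteLLM does in LLM tutorials — is exactly the target an attacker like TeamPCP optimises for.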

So what? SCA coverage gaps are the most likely vector for a supply chain compromise via vibe-coded applications. Audit your coverage this quarter. If there are deployment paths that bypass scanning, close them.

Layer 5: Audit Trail and Regulatory Compliance

The EU AI Act Article 12 requires logging for high-risk AI systems. Even if your vibe coding tools do not individually qualify as high-risk, the composite system — AI generating code that runs in production — may. At minimum, Article 4's AI literacy requirement means your developers need to understand what these tools are doing.

Build the audit trail now: which model generated which code, when, with what prompt, and through which review process. This is not paranoia — it is the evidence your compliance team will need when regulators ask how AI-generated code entered your production environment. And they will ask.

The organisations that build this logging infrastructure now will treat regulatory inquiries as a reporting exercise. The ones that do not will treat them as an incident.

So what? Implement generation metadata logging for all AI-assisted code. At minimum: model name and version, timestamp, developer identity, review status, and deployment path. This is cheap to build now and expensive to reconstruct later.
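A minimal sketch of that record as structured log data. The field names and the model identifier are illustrative, not a standard schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class GenerationRecord:
    model: str            # model name and version that generated the code
    developer: str        # identity of the developer who drove the generation
    review_status: str    # e.g. "human-approved", "ai-reviewed", "unreviewed"
    deployment_path: str  # e.g. "ci-cd", "local-deploy"
    timestamp: str        # UTC, ISO 8601

def log_generation(model: str, developer: str,
                   review_status: str, deployment_path: str) -> str:
    """Serialise one generation event as a JSON line for the audit log sink."""
    record = GenerationRecord(
        model=model,
        developer=developer,
        review_status=review_status,
        deployment_path=deployment_path,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))

entry = log_generation("example-model-v1", "j.doe", "human-approved", "ci-cd")
```

One JSON line per generation event, appended to an immutable sink, is enough to answer a regulator's "how did this code get here" — the reconstruction problem only exists if you never captured the metadata.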

Next Steps

  1. This week: Ask your CISO whether the organisation has telemetry on AI code generation tool usage. If the answer is "we don't know," that is your first finding.

  2. This month: Run a tabletop exercise: "A vibe-coded internal tool deployed to production introduces a vulnerability that leads to a data breach. Walk through the incident response, the regulatory notification, and the evidence trail." Identify the gaps.

  3. This quarter: Establish a model approval process for code generation tools, extend SCA coverage to all deployment paths, and implement generation metadata logging. These three actions close the largest governance gaps before the tools outrun the policies.

This Week in Tech

OpenAI Shuts Down Sora App After Failing to Build a Social Platform

OpenAI's TikTok-inspired video generation app, launched in October 2025, is shutting down. The company acknowledged "there was not sustained interest in an AI-only social feed." The underlying Sora 2 model survives as infrastructure, but the consumer product is dead. It lasted five months.

Why it matters: The lesson is not about video generation. It is about distribution. Building AI capabilities is table stakes. Building products that people return to daily is still hard. European enterprises evaluating AI vendors should watch for this pattern: impressive demos that do not survive contact with real user behaviour.

TeamPCP Campaign Expands Beyond LiteLLM — Trivy, Checkmarx, Docker Hub, and VS Code Hit

The LiteLLM backdoor (covered in The Brief) was not an isolated incident. TeamPCP's campaign compromised the Trivy security scanner's GitHub Action, pivoted to NPM, Docker Hub, VS Code extensions, and PyPI in a single coordinated operation. The irony: a security scanning tool became the entry point for a supply chain attack. Snyk and Datadog Security Labs have published detailed technical analyses of the full campaign.

Why it matters: The attack targeted AI infrastructure specifically — LiteLLM is the most popular LLM API routing library. As enterprises adopt AI tooling, the open-source packages that underpin that tooling become high-value targets. Your AI supply chain is now an attack surface.

FCC Bans Foreign-Made Consumer Routers Over National Security Concerns

The US Federal Communications Commission prohibited the import of consumer routers manufactured outside the United States, citing "unacceptable risk" to national security. The ban targets vulnerabilities that enable network surveillance and botnet recruitment.

Why it matters: This is hardware-level supply chain governance — the physical equivalent of the software supply chain attacks described above. European enterprises should watch for similar moves from ENISA or national regulators. The NIS2 directive already creates the regulatory basis for hardware security requirements in critical infrastructure.

EU Digital Omnibus Readiness Scorecard


15-question self-assessment covering the AI Act, GDPR, NIS2, and DORA changes from the EU Digital Omnibus simplification package. Score your organisation's readiness in 20 minutes. Includes...

$0.00 USD

What to read now?

  1. "Vibe Coding Could Reshape SaaS Industry and Add Security Risks" — The Record The most complete reporting on the NCSC's dual publications. Read this for the direct quotes from Richard Horne and the SaaS market context.

  2. "Why Agentic AI Systems Need Better Governance" — SecurityWeek Deep analysis of OpenClaw and the governance gap between AI agents and the policies meant to govern them. Essential reading if your organisation is evaluating autonomous coding tools.

  3. "UK Cyber Chief Urges Full Court Press" — The Record Horne's full RSAC speech on the three-tiered defence model. The "mid" tier — shared infrastructure including AI tools — is the part enterprise CISOs should focus on.

That’s it for this week.

The NCSC gave us the diagnosis: vibe coding is coming, and security must shape it or chase it. The EU AI Act gives European enterprises the regulatory mandate to act first. The question is whether your organisation treats this as a compliance exercise or a competitive advantage. We believe it is the latter.

Until next Thursday, João

OnAbout.AI delivers strategic AI analysis to enterprise technology leaders. European governance lens. Vendor-agnostic. Actionable.

If this landed in your inbox from a forward — subscribe here to get the full picture every week.
