
The European Parliament adopted its negotiating position on March 26 (569 votes in favour), the Council adopted its mandate on March 13, and trilogues on the Digital Omnibus are now running against a hard wall: August 2, 2026 — the original high-risk application date. Unless an amended Regulation enters into force before that date, the original AI Act timeline bites exactly as drafted in 2024. The Commission's proposal (COM(2025) 836) introduces long-stop dates of December 2, 2027 for Annex III high-risk systems and August 2, 2028 for product-embedded high-risk systems — but only if harmonised standards, common specifications, or Commission guidelines actually ship. France and Germany started pushing this path at the Berlin digital sovereignty summit in November 2025; the political pressure has only intensified since.

If you read the coverage, you might conclude the AI Act got delayed.

If you read the text, you conclude the opposite: the Commission just made compliance effective dates dependent on whether harmonised standards exist. That is a very different animal from a straight postponement. It means organisations waiting on a delay have been handed homework — and a deadline that moves only if they don't do it.

This edition unpacks what the Omnibus actually changes, why two new zero-days matter more than they look, and — in the Deep Dive — the planning move that turns a 15-month runway into competitive advantage rather than deferred panic.

And on Friday, we ship a Build Lab special edition: the complete walkthrough for building your own EU AI Act compliance knowledge base on plain markdown with optional Qdrant, with a downloadable starter bundle. Subscriber-only download link. [Details at the bottom.]

TL;DR
  • Trilogues on the Digital Omnibus (COM(2025) 836) are running against Aug 2, 2026 — the original high-risk application date. The proposal ties compliance effective dates to harmonised standards availability, with long-stop dates of Dec 2, 2027 (Annex III high-risk) and Aug 2, 2028 (product-embedded). Parliament adopted its mandate Mar 26 (569 in favour); Council mandate adopted Mar 13. If no amended Regulation enters into force before Aug 2, the original 2024 timeline holds.

  • AWS Interconnect — multicloud hit general availability April 13. Private cross-cloud links to Google Cloud are shipping now; Azure follows later in 2026. This turns multicloud from a bespoke engineering project into a purchasing decision — directly relevant to regulator conversations about lock-in and resilience.

  • Meta's Muse Spark, released April 8 by Meta Superintelligence Labs, is natively multimodal and proprietary — Meta's first closed frontier model, breaking with the Llama open-source strategy. For EU enterprises evaluating vendors, this adds a fourth serious API option and raises Article 13 transparency questions the open models didn't trigger as hard.

  • Two active exploits remain loose: Docker Engine CVE-2026-34040 (AuthZ plugin bypass, CVSS 8.8, patched Mar 25–27 in Engine 29.3.1 / Desktop 4.66.1) and Chrome CVE-2026-5281 (use-after-free in WebGPU/Dawn, patched Apr 1 in 146.0.7680.177). If either is unpatched in your estate, stop reading this and go patch.

EU Digital Omnibus Readiness Scorecard
15-question self-assessment covering the AI Act, GDPR, NIS2, and DORA changes from the EU Digital Omnibus simplification package. Score your organisation's readiness in 20 minutes. Includes...
$0.00 USD


The Brief

The Digital Omnibus: Why "Delayed" Is the Wrong Word

The Digital Omnibus (COM(2025) 836) was published by the Commission on November 19, 2025. It is not a delay. It is a conditional trigger: compliance effective dates become dependent on whether harmonised standards, common specifications, or Commission guidelines actually ship. The long-stop dates are firm — December 2, 2027 for Annex III high-risk systems, August 2, 2028 for high-risk AI embedded in products under Annex I sectoral legislation. Parliament adopted its mandate on March 26 (569 votes in favour); the Council adopted its mandate on March 13. Trilogues are running against the only hard deadline that matters: August 2, 2026, when the original high-risk obligations kick in unless an amended Regulation is already in force.

Any CTO with an August 2026 compliance deadline in the roadmap now plans for both scenarios. Scenario A: trilogues conclude late, no amended Regulation in force by Aug 2 → original timeline bites exactly as drafted. Scenario B: Omnibus passes → you have a 15-month runway, but only if the standards you depend on actually ship on time.

What this does NOT change: the prohibited practices in Article 5 are already enforceable. GPAI obligations under Articles 51–55 are already active. National competent authority designations are already binding. The Omnibus touches the high-risk operational obligations, not the foundational framework.

Why it matters: The organisations that treat this as "delayed, relax" will face Scenario A if trilogues stall, or Scenario B with standards arriving six months before the long-stop — months, not years, to close compliance gaps. The ones that use the runway properly turn governance into a selling point while competitors are still arguing about the deadline.

AWS Interconnect Multicloud: Resilience Moves from Architecture to Purchase Order

On April 13, AWS announced general availability of AWS Interconnect — multicloud, establishing private high-speed dedicated connections between Amazon VPCs and other cloud providers. Google Cloud is the launch partner. Microsoft Azure is scheduled later in 2026. The service prices by capacity and region, positions against private leased lines and SD-WAN overlays, and — critically — does not require customer-owned network infrastructure between the clouds.

For regulated EU enterprises, this is more significant than it reads. Multicloud has been the dominant architecture pattern for firms that cannot place all workloads with a single US hyperscaler — whether for DORA financial-sector resilience requirements, for sovereign cloud commitments with national governments, or for the simple business case of not being locked to one vendor's pricing. Until now, engineering that multicloud setup required either a dedicated network engineering team or a bespoke interconnect deal. AWS just made it a purchase order.

Why it matters: The conversation with your regulator about resilience just shifted from "we have a plan to architect multicloud" to "we have multicloud links in production with one provider and two more available." The argument about systemic dependency on one hyperscaler is now harder to make — and harder to defend against when your board asks why you haven't used it.

Meta Muse Spark: The Proprietary Pivot That Changes Vendor Math

On April 8, Meta Superintelligence Labs (under Alexandr Wang) released Muse Spark — the division's first frontier model. It is natively multimodal (text, voice, image input; text output), supports multi-agent orchestration and visual chain-of-thought reasoning, and is proprietary. This is a deliberate break from Meta's Llama open-source strategy. API access is in private preview for select partners; US consumer rollout is underway.

For European enterprise procurement, this restructures the vendor landscape. The established short list — OpenAI, Anthropic, Google — now gets a fourth serious contender with deep capital, massive compute, and an obvious edge in multimodal workloads. But Muse Spark being closed has downstream AI Act implications: Article 13 transparency obligations bite differently for closed systems, and the documentation and logging requirements that Llama customers could partially satisfy through inspection of weights and training data now shift fully onto vendor-supplied documentation.

Why it matters: If your governance framework was built around "open models where possible, closed models where necessary" — a defensible default under Article 13 — Muse Spark forces a rewrite. You now evaluate Meta on the same closed-vendor documentation criteria you apply to OpenAI, not on the more permissive framework you used for Llama.

Two Active Exploits You Cannot Afford to Track Passively

Docker CVE-2026-34040 (CVSS 8.8) allows a single oversized HTTP request to silently bypass AuthZ plugins and create a privileged container with full host filesystem access. Patched March 25–27 in Docker Engine 29.3.1 and Docker Desktop 4.66.1. Any unpatched CI/CD pipeline, Kubernetes cluster using Docker as runtime, or developer workstation is a full host-compromise vector. Cloud credentials, SSH keys, and kubeconfig files are directly exposed.
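
The "go patch" advice can be enforced mechanically: compare the versions actually running in your estate against the patched thresholds above. A minimal sketch in Python; the patched version numbers come from this edition, while the inventory dict is a hypothetical stand-in for whatever your CMDB or agent tooling really reports:

```python
# Patched thresholds from this edition; anything below these is exposed.
PATCHED = {
    "docker-engine": (29, 3, 1),   # CVE-2026-34040 fixed in Engine 29.3.1
    "docker-desktop": (4, 66, 1),  # ...and Desktop 4.66.1
}

def parse(version: str) -> tuple:
    """'29.2.4' -> (29, 2, 4), so versions compare correctly as tuples."""
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(component: str, installed: str) -> bool:
    """True if the installed version predates the patched release."""
    return parse(installed) < PATCHED[component]

# Hypothetical estate inventory; feed this from your real asset data.
estate = {
    "ci-runner-01": ("docker-engine", "29.2.4"),
    "dev-laptop-17": ("docker-desktop", "4.66.1"),
}

for host, (component, version) in estate.items():
    status = "PATCH NOW" if is_vulnerable(component, version) else "ok"
    print(f"{host}: {component} {version} -> {status}")
```

The same three-line check generalises to the Chrome CVE below: add the browser build threshold to the dict and run it over the fleet.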

Chrome CVE-2026-5281 is a use-after-free in the WebGPU/Dawn layer enabling arbitrary code execution, confirmed exploited in the wild. Patched April 1 in Chrome 146.0.7680.177/178. CISA added it to the Known Exploited Vulnerabilities catalog with a federal agency patch deadline of April 15 — today at the time of this edition.

In the context of Project Glasswing — Anthropic's withheld Claude Mythos model that found thousands of zero-days across every major OS and browser — these two are a reminder that the offensive capability gap is already narrowing in production. You do not have a 2027 problem. You have an April 2026 problem.

Why it matters: Patch cadence and enforcement are a governance question now, not an IT one. Article 15 of the AI Act on cybersecurity, the upcoming NIST COSAiS control overlays, and every major SOC 2 and ISO 27001 audit cycle will ask: "Do you patch exploited-in-the-wild vulnerabilities within days or within quarters?" If your answer is "quarters," that becomes a control finding, not a timeline preference.

Deep Dive

When the Regulator Blinks, Build the Knowledge Base

This week's Deep Dive is a planning argument, not reporting. It sets up Friday's Build Lab bundle.

The political economy of the Digital Omnibus is simple. France and Germany have national AI industries — Mistral, Aleph Alpha, Light&Wonder adjacents — that would be disproportionately burdened by the original high-risk timeline relative to US hyperscaler incumbents. The push to freeze the timeline is protectionism in governance clothing. The Commission, caught between the protectionist push and the credibility cost of moving its own deadline, produced the Digital Omnibus: a conditional mechanism that looks like compromise but shifts the burden onto CEN-CENELEC JTC 21 and the member states.

This matters because the political story — "EU blinked, deadlines slipped" — will dominate boardroom conversations for the next month. And it will be wrong in the way that gets companies caught in Scenario B.

The Planning Move: Don't Plan Around the Deadline. Plan Around the Readiness.

Every organisation with meaningful AI exposure has been running a two-phase compliance programme: gap assessment (what do we have, what's missing) and remediation (close the gap before the deadline). The received wisdom under the Omnibus is to slow the remediation phase. That is exactly backwards.

The actual move is to decouple gap assessment from remediation entirely, and to accelerate a third phase that most programmes haven't started yet: compliance infrastructure. Compliance infrastructure is the set of repeatable, auditable, queryable systems that produce compliance artefacts continuously, not as one-off deliverables for a deadline. It is the difference between a risk management report (Article 9) written once and a risk management system that generates the report automatically when the underlying AI system changes.

Three compliance infrastructure pieces matter more than anything else right now:

1. A regulatory knowledge base. You cannot make sound classification decisions under Article 6 — which categorises your systems by risk tier — without a structured, queryable corpus of the regulation, its Annexes, EC guidance, AI Office circulars, and the sector-specific overlays (DORA for financial services, NIS2 for critical infrastructure, MDR for medical devices). Most teams are still grepping PDFs. That is the bottleneck. (This is exactly what Friday's Build Lab ships — a complete walkthrough to build one in a weekend using Karpathy's knowledge base architecture, zero vector database required.)

2. A system inventory that talks to the knowledge base. For each AI system in production or pilot, you want a living record with its purpose, training data provenance, risk classification, human oversight model, documentation state, and change history. When Article 6 classification guidance shifts — and it will, as the AI Office publishes circulars — you want to re-classify your portfolio in hours, not weeks.

3. A logging and reporting layer that produces Article 12–13 artefacts on demand. Article 12 requires automatic event logs for high-risk systems. Article 13 requires transparency documentation. Most organisations will implement these as discrete deliverables against a deadline. The ones who build them as continuous systems ship faster, audit cleaner, and handle amendments with a change order rather than a project.
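
Pieces 2 and 3 above are easiest to see as one data model: a living record per system, a classification rule you can re-run across the whole portfolio, and an artefact generator that reads the record rather than a human writing a document. A minimal Python sketch; the schema field names and the toy classification rule are illustrative assumptions, nothing here is mandated by the AI Act:

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AISystemRecord:
    """Hypothetical inventory record (piece 2); fields are illustrative."""
    name: str
    purpose: str
    risk_tier: str          # e.g. "high-risk" per an Article 6 classification
    oversight_model: str
    data_provenance: str
    last_classified: str

def reclassify(records, rule):
    """Re-run a classification rule over the portfolio in one pass --
    the 'hours, not weeks' property piece 2 is meant to buy you."""
    for r in records:
        r.risk_tier = rule(r)
        r.last_classified = date.today().isoformat()
    return records

def article_9_stub(record: AISystemRecord) -> str:
    """Piece 3: generate a compliance artefact from the live record
    on demand, instead of writing it once against a deadline."""
    return json.dumps({"article": "9", "system": asdict(record)}, indent=2)

# One hypothetical system, classified by a toy rule:
portfolio = [AISystemRecord(
    name="credit-scoring-v3", purpose="loan decisioning",
    risk_tier="unclassified", oversight_model="human-in-the-loop",
    data_provenance="internal 2019-2025 loan book", last_classified="")]

reclassify(portfolio,
           lambda r: "high-risk" if "decisioning" in r.purpose else "minimal")
print(article_9_stub(portfolio[0]))
```

When Article 6 guidance shifts, the change is one new rule function re-run over the portfolio; the artefacts regenerate from the updated records.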

Why This Turns Delay Into Advantage

Here is the strategic shape of the 15-month runway. Start it in April 2026 with the three infrastructure pieces above. By end of 2026, you have a working knowledge base, a system inventory, and automated compliance artefact generation. Through 2027 you operate and refine those systems while the rest of the market is still arguing about the deadline. When compliance effective dates hit — whether Dec 2, 2027 or the original Aug 2026 if the Omnibus fails — you have twelve months of operational evidence, not a last-quarter scramble.

And here is the part the compliance consultancies won't tell you: this infrastructure is a product. The classification engine you build internally is the MVP of a consulting offering. The regulatory knowledge base is a service you can sell to your supply chain partners. The logging layer is a SaaS wedge. Organisations building this well in 2026 will be selling it in 2028.

The question is not whether to use the runway. It is whether to spend it building deliverables or building systems.

Source: This is thesis, not reporting. The Build Lab bundle shipping Friday operationalises the "regulatory knowledge base" piece end-to-end.

The One Call to Make

Before next Thursday, email your head of engineering and your head of compliance on the same thread with one question: "If the EU AI Act deadline moves to December 2027, do we slow our AI governance work — or do we use the runway to build infrastructure we couldn't justify under the original timeline?"

Why this: The Digital Omnibus is the first time both of these leaders have a reason to talk about AI governance as a strategic programme rather than a compliance checkbox. Most organisations will have this conversation in silos — engineering plans to slow, compliance plans to catch its breath — and the infrastructure opportunity gets missed entirely. A single shared thread forces the trade-off into the open.

If the conversation gets deferred: That is the answer. Log the date. Your organisation has just decided, by default, to run Scenario B.

If you skip it: Twelve months from now, the CEN-CENELEC standards will publish, the Omnibus will pass or fail, and whoever is in the room on that day will discover they have four to fifteen months to do work that competitors started planning in April 2026.

Builder Spotlight

Holistic AI — The Governance Platform Taking the Omnibus Seriously

Profiling teams building for the European AI reality.

The company: Holistic AI, London, UK
What they do: Enterprise AI governance platform — identify, protect, enforce.
Why now: When the Omnibus trilogues conclude, the organisations that win are the ones who have already wired governance into their AI lifecycle — not the ones who start shopping for tools the week standards get published.

Founded in 2020 at University College London by Adriano Soares Koshiyama (CS PhD) and Emre Kazim (philosophy postdoc), Holistic AI emerged from an interdisciplinary research thesis that is now, six years later, exactly the right product for the AI Act moment. The platform is organised around three integrated modules. Identify auto-discovers models, agents, APIs, pipelines, and workflows across the estate and tracks their lifecycle. Protect runs continuous testing for bias, hallucination, toxicity, privacy leaks, drift, and adversarial attack, plus runtime observability through log analysis and workflow tracing. Enforce translates policy into code — deployment gates, approvals, kill switches, guardian agents — aligned to internal standards and the regulations you actually operate under. Risk is mapped natively to NIST AI RMF, ISO 42001, and the EU AI Act.

Backers include Mozilla Ventures, Premji Invest, Dallas Venture Capital, Grow London, and Kickstart Innovation. Gartner named Holistic AI a Representative Vendor in its March 2026 Market Guide for Guardian Agents — the category that, six months from now, your procurement team will be required to fill.

The differentiator against US competitors like Credo AI is not feature depth. It is regulatory posture: Holistic AI was built on AI Act risk taxonomy from the start, not retrofitted to it. For EU enterprises building toward Scenario A or B, that matters.

Why this profile, not Anthropic: Anthropic is already a main character in our coverage — Glasswing, Mythos, and a dedicated Project Glasswing follow-up in the pipeline for April 23. Builder Spotlight exists to surface operators European CTOs haven't heard of yet. Anthropic doesn't need the amplification; a London-based governance platform absolutely does.

Learn more: Holistic AI

This Week in Tech

OpenAI IPO Signals Strengthen — No S-1 Yet

OpenAI closed a $122B funding round on March 31 at an $852B post-money valuation. No S-1 has been filed as of April 15. A pre-IPO investor document flagging Microsoft dependency and TSMC supply chain risk has circulated among institutional investors but is not a public filing. Target window remains H2 2026 to Q1 2027.

Why it matters: An $852B valuation pre-IPO anchors the public comparable set for every enterprise AI vendor you negotiate with. Your procurement team should refresh pricing benchmarks now — the anchor is going to pull prices up before it corrects.

Anthropic "Mythos" Stays Internal

A leaked dataset in late March revealed Anthropic's internal model "Claude Mythos" — described as a step-change capability jump. Access as of April 15 remains restricted to approximately 50 partner companies. No public release, no API. The stance is consistent with the Project Glasswing posture: the capability exists, the deployment discipline is deliberate.

Why it matters: Enterprise buyers evaluating Anthropic are buying a governance posture, not just a model. That is a competitive moat that regulators, auditors, and insurers will start pricing into vendor selection.

NIST COSAiS — The Control Overlay Project You Should Already Be Tracking

NIST's COSAiS (Control Overlays for Securing AI Systems) — the work most people are still incorrectly calling "the SP 800-53 agent overlay" — shipped a concept paper in August 2025 and an annotated outline discussion draft in January 2026 for the "Using and Fine-Tuning Predictive AI" use case (feedback deadline was February 13). The project scope includes agentic AI (single-agent and multi-agent), but the initial public draft timing is not yet announced on csrc.nist.gov/projects/cosais.

Why it matters: COSAiS will flow directly into US federal procurement criteria and indirectly into most enterprise security frameworks worldwide. Organisations deploying AI agents should have a named commenter ready — when the initial public draft lands, the comment window is short and the output shapes what your auditor will be checking for in 2027. If your security team hasn't read the January 2026 annotated outline yet, that's this week's homework.

Build Lab — New Series

Friday Drop: Build Your Own EU AI Act Compliance Knowledge Base

Tomorrow — Friday, April 17 at 09:00 CET — a special Build Lab edition ships with the complete walkthrough to build an EU AI Act compliance knowledge base on plain markdown + optional Qdrant. No vector database required. No RAG pipeline hell. Karpathy's three-folder architecture applied to regulation work.

The edition includes a downloadable starter bundle:

  • Empty scaffold (one-minute setup)

  • All seven compilation prompts (copy-paste into Claude)

  • Five pre-compiled wiki articles (Articles 6, 9, 10, 14, and 55 — the anchors)

  • n8n workflow export for automated refresh on EUR-Lex changes

  • Obsidian-compatible structure for human review

Subscribers get the download link directly in the edition.

If you are on the fence about whether to upgrade your compliance workflow from "grep PDFs" to "structured queryable knowledge base" — the Omnibus just gave you the runway. Friday's drop shows you how to use it.
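
As a taste of what the walkthrough covers: the entire "structured queryable knowledge base" baseline fits in a few lines of Python. The folder names below are a hypothetical three-folder layout for illustration, not necessarily the structure Friday's bundle ships:

```python
from pathlib import Path

# Hypothetical layout (illustrative, not the bundle's actual scaffold):
#   kb/sources/  raw regulation text as markdown, one file per article
#   kb/wiki/     compiled explainer articles
#   kb/log/      change notes when EUR-Lex updates land
def build_scaffold(root: str = "kb") -> None:
    for sub in ("sources", "wiki", "log"):
        Path(root, sub).mkdir(parents=True, exist_ok=True)

def query(root: str, term: str):
    """Keyword lookup over the whole corpus -- the queryable baseline you
    get before layering any vector search on top."""
    hits = []
    for path in Path(root).rglob("*.md"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if term.lower() in line.lower():
                hits.append((str(path), lineno, line.strip()))
    return hits

build_scaffold()
Path("kb/sources/article-6.md").write_text(
    "# Article 6\nClassification rules for high-risk AI systems.\n")
print(query("kb", "high-risk"))
```

Qdrant, if you add it later, layers semantic search over exactly this corpus; the markdown files stay the source of truth that humans review in Obsidian.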

Next Steps

What to read now?

  1. EUR-Lex — COM(2025) 836 Digital Omnibus on AI The primary source on the proposed mechanism. Read the Article 113 amendments specifically — that's where the standards-dependency trigger lives.

  2. European Parliament — Legislative Train: Digital Omnibus on AI The best single tracker of where trilogues stand. Updated weekly; bookmark it through August.

  3. AWS Interconnect — Multicloud GA Read for the pricing model and the service limits. The decision about whether to buy is now a capacity-planning decision, not an architecture decision.

  4. Meta — Introducing Muse Spark Read for the documentation commitments. They will set the precedent for how Article 13 gets evaluated against closed proprietary models.

That’s it for this week.

This week's edition had a regulator blinking, a hyperscaler rewriting multicloud procurement, a fourth frontier model entering the enterprise buy cycle, and two live exploits that most teams still haven't patched. The through-line is the same as every edition of OnAbout.AI: governance is infrastructure now. The organisations that build it in 2026 will be selling it in 2028.

Until next Thursday,
João

OnAbout.AI delivers strategic AI analysis to enterprise technology leaders. European governance lens. Vendor-agnostic. Actionable.

If this landed in your inbox from a forward — subscribe here to get the full picture every week.
