While enterprise AI teams were focused on Anthropic's plugin launch and Nvidia's record earnings this week, the European Commission's AI Office did something far more consequential with far less fanfare: it made the AI Act enforceable from the inside out. The AI Act Whistleblower Tool—live since November but largely unnoticed by enterprise teams—gives anyone professionally connected to your AI deployment a direct, anonymous, encrypted line to Brussels. They can report violations, upload evidence, track progress, and answer follow-up questions. All without revealing their identity.
The same week, Microsoft shipped a Security Dashboard for AI that surfaces the shadow AI most enterprises can't see. Cyberhaven's latest data confirmed that 39.7% of all AI interactions involve sensitive data and one-third of employees use AI through personal accounts. And OpenAI's own COO admitted what many CIOs already know: AI hasn't actually penetrated enterprise business processes yet—even as Anthropic shipped the plugin marketplace designed to change that.
The thread connecting all of it: the governance infrastructure is being built—by regulators, by vendors, by your own employees—whether your organisation is participating or not.
TL;DR
Anthropic launches enterprise agents — OpenAI's COO admits the gap: Claude Cowork ships 13 MCP connectors and private agent marketplaces. The same day, OpenAI's Brad Lightcap concedes AI hasn't penetrated enterprise processes. The race shifts from model capability to workflow integration
Nvidia Blackwell Ultra delivers 50x performance for agentic AI: GB300 NVL72 benchmarks show 35x lower cost per token vs. Hopper. Record $68.1B quarterly revenue. Meta separately commits $60B to AMD chips. Hyperscaler infrastructure spend tops $650B for 2026
Microsoft Security Dashboard for AI enters public preview: Unified governance console surfaces shadow AI, correlates identity/threat/data signals, and tracks posture drift across agents, models, and MCP servers. No additional licensing required
Cyberhaven: 82% of top GenAI tools classified medium-to-critical risk: 2026 AI Adoption & Risk Report reveals shadow AI governance gap widening. 39.7% of AI interactions involve sensitive data. One-third of employees access AI via personal accounts
EU Commission shifts to active GPAI enforcement monitoring: AI Office moving from legislative drafting to compliance monitoring of general-purpose AI providers. Article 6 high-risk guidelines delayed to March/April 2026
Google DeepMind ships Project Genie — the first consumer world model: Genie 3 generates real-time navigable 3D environments from text prompts. Not enterprise-ready yet, but the underlying capability signals where physical-world AI is heading
Want to get the most out of ChatGPT?
ChatGPT is a superpower if you know how to use it correctly.
Discover how HubSpot's guide to AI can elevate both your productivity and creativity to get more things done.
Learn to automate tasks, enhance decision-making, and foster innovation with the power of AI.
The Brief
1. Anthropic Launches Enterprise Agents — While OpenAI's COO Admits the Gap
On Tuesday, Anthropic unveiled its most aggressive enterprise push yet. Claude Cowork now ships 13 new MCP connectors—Google Workspace (Drive, Calendar, Gmail), DocuSign, Apollo, Clay, Outreach, SimilarWeb, MSCI, LegalZoom, FactSet, WordPress, and Harvey—alongside department-specific plugins for finance, legal, HR, and engineering.
The real strategic move: private plugin marketplaces. Enterprises can now build, host, and distribute custom AI agents through their own internal marketplace, with admin controls over which plugins teams can access. Anthropic is positioning Claude not as a chatbot but as what William Blair called "a platform-level intelligence layer across enterprise workflows."
The market reaction told the story. IBM shares fell 13.2% on February 23—the sharpest single-day decline since October 2000—after Anthropic published a post about using Claude Code to modernise COBOL. Thomson Reuters surged 11% after the enterprise briefing. DocuSign, LegalZoom, and FactSet also rallied on what analysts called an "integration premium."
The timing makes the contrast stark. The same day, OpenAI COO Brad Lightcap told TechCrunch: "We have not yet really seen AI penetrate enterprise business processes." His solution: partnerships with BCG, McKinsey, Accenture, and Capgemini to bridge the deployment gap. One company is shipping infrastructure; the other is hiring consultants. Both approaches acknowledge the same problem—the bottleneck isn't model intelligence, it's workflow integration—but the strategic bets are fundamentally different.
Kate Jensen, leading Anthropic's enterprise effort, framed it directly: "2025 was meant to be the year agents transformed the enterprise, but the hype turned out to be mostly premature. It wasn't a failure of effort. It was a failure of approach."
Gartner projects 40% of enterprise applications will embed AI agents by end of 2026, up from less than 5% in 2025—an eightfold jump in a single year. The gap between that projection and Lightcap's admission is where enterprise risk concentrates.
Do now: The plugin marketplace model matters more than the individual plugins. It shifts AI governance from "which tools do we allow?" to "how do we govern the distribution and access of AI agents across the organisation?" That's the architecture question your AI governance framework needs to answer—before your employees answer it for you with shadow agents.
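To make the architecture question concrete, here is a minimal sketch of what marketplace-level agent governance might look like—deciding which teams can install which agents based on publisher and data scope, rather than maintaining a flat tool allowlist. All names (`Plugin`, `MarketplacePolicy`, the scope and tier labels) are illustrative assumptions, not Anthropic's actual admin API.

```python
from dataclasses import dataclass, field

@dataclass
class Plugin:
    name: str
    publisher: str          # "internal" or a third-party vendor
    data_scopes: set[str]   # e.g. {"crm", "email", "finance"}
    risk_tier: str          # "low" | "medium" | "high"

@dataclass
class MarketplacePolicy:
    allowed_publishers: set[str]
    # Per-team deny list of data scopes, e.g. HR may not install finance-scoped agents
    blocked_scopes_by_team: dict[str, set[str]] = field(default_factory=dict)

    def can_install(self, plugin: Plugin, team: str) -> bool:
        if plugin.publisher not in self.allowed_publishers:
            return False
        blocked = self.blocked_scopes_by_team.get(team, set())
        return not (plugin.data_scopes & blocked)

policy = MarketplacePolicy(
    allowed_publishers={"internal", "docusign"},
    blocked_scopes_by_team={"hr": {"finance"}},
)
crm_agent = Plugin("outreach-helper", "internal", {"crm"}, "medium")
payroll_agent = Plugin("payroll-bot", "internal", {"finance"}, "high")
print(policy.can_install(crm_agent, "hr"))      # installable
print(policy.can_install(payroll_agent, "hr"))  # blocked by scope
```

The point of the sketch: the policy object governs distribution, not usage—exactly the shift the marketplace model forces.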
2. Nvidia Blackwell Ultra: 50x Performance, 35x Lower Cost — The Agentic AI Economics Shift
Nvidia released SemiAnalysis InferenceX benchmark data showing its GB300 NVL72 systems with Blackwell Ultra GPUs deliver up to 50x higher throughput per megawatt and 35x lower cost per token compared to the previous Hopper platform for low-latency agentic AI workloads. The same day, Nvidia posted record Q4 fiscal 2026 revenue of $68.1 billion—up 73% year-over-year—with full-year revenue hitting $215.9 billion.
The hardware specifics: Blackwell Ultra delivers 1.5x higher NVFP4 compute performance and 2x faster attention processing over the original Blackwell. The GB300 NVL72 rack-scale solution—priced around $3 million per rack—is already in production at Microsoft (which deployed the first large-scale GB300 cluster), CoreWeave, and Oracle. AWS, Google Cloud, and Azure will offer Blackwell Ultra instances. Shipment projections suggest up to 60,000 racks in 2026, with a Q3 pull-forward to meet surging demand.
The software layer matters as much as the silicon. Nvidia simultaneously launched Dynamo, an open-source inference framework designed to maximise token throughput for reasoning AI models. Software optimisations from Nvidia's TensorRT-LLM and Dynamo teams have delivered up to 5x better performance on existing GB200 systems in just four months—gains that compound with the hardware improvements in GB300.
The broader infrastructure picture reinforces the scale: Meta separately committed $60 billion to AMD for AI chips over five years—the largest single AI hardware deal ever—on top of its existing ~$50 billion Nvidia arrangement. Combined hyperscaler infrastructure spending for 2026 now exceeds $650 billion, nearly double last year's total. The energy constraint is becoming the real bottleneck—Meta's AMD deal alone specifies up to 6 gigawatts of deployment capacity, enough to power 4.5 million homes.
The agentic AI angle is the enterprise story. AI coding assistants and agents now account for roughly 50% of all software-programming-related AI queries, up from 11% a year ago. When agentic AI becomes 35x cheaper to run, the economics shift from "can we afford to deploy agents?" to "can we afford not to?"
Do now: The Blackwell Ultra benchmarks reset enterprise cost modelling. The 35x cost reduction for agentic workloads means on-premise and sovereign cloud deployments that were previously uneconomical may now pencil out. For European enterprises evaluating sovereign AI infrastructure—where data residency requirements add cost—the efficiency gains are strategically significant. Factor GB300 availability into your procurement timeline; supply is constrained but ramping. And watch the energy dimension: European data centre capacity, already under sovereignty pressure, faces new competition as hyperscalers expand.
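To see why the 35x figure resets cost modelling, a back-of-envelope calculation helps. All dollar figures and usage volumes below are illustrative assumptions, not vendor pricing—only the 35x ratio comes from the benchmarks above.

```python
# Assumed unit price on prior-generation (Hopper) hardware, in $/1M tokens
hopper_cost_per_m_tokens = 3.50
# The benchmark claim: 35x lower cost per token on GB300
blackwell_cost_per_m_tokens = hopper_cost_per_m_tokens / 35

# Assumed workload: multi-step agentic tasks are token-hungry
tokens_per_agent_task = 200_000
tasks_per_day = 10_000

def daily_cost(usd_per_m_tokens: float) -> float:
    """Daily spend given a per-million-token unit price."""
    return usd_per_m_tokens * tokens_per_agent_task / 1_000_000 * tasks_per_day

print(f"Hopper-era:  ${daily_cost(hopper_cost_per_m_tokens):,.0f}/day")
print(f"Blackwell:   ${daily_cost(blackwell_cost_per_m_tokens):,.0f}/day")
```

Under these assumptions a workload that cost $7,000 a day drops to $200—roughly the difference between a board-level budget line and a rounding error, which is why previously uneconomical on-premise and sovereign deployments may now pencil out.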
Sources: HPCwire, Nvidia Blog, Nvidia Newsroom, GlobeNewsWire - Nvidia Q4 Earnings, Yahoo Finance - Meta-AMD Deal
3. Microsoft Security Dashboard for AI: The Shadow AI Problem Gets a Control Plane
On February 16, Microsoft released its Security Dashboard for AI in public preview—a centralised console that consolidates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview into a single governance view. It's the first major platform vendor to ship a unified AI governance dashboard as a standard feature rather than an add-on.
The dashboard surfaces what most enterprises can't currently see: a comprehensive inventory of AI agents, models, MCP servers, and applications across Microsoft and third-party stacks. It detects shadow AI—unmanaged AI applications used by employees without IT oversight—and tracks posture drift when previously compliant agents change behaviour or data-access patterns.
The risk correlation engine connects signals that typically live in separate tools: linking an agent's access to sensitive data (Purview) with anomalous network flows (Defender) and misconfigured service principals (Entra). Microsoft's own data backs the urgency: 53% of security professionals say their current AI risk management needs improvement, and 32% of data security incidents now involve generative AI tools.
The licensing model is notable: no additional cost beyond existing Microsoft Security products. Microsoft is positioning AI governance as a retention play—if you're already in the Microsoft security ecosystem, governance comes included.
Do now: If you run Microsoft Defender, Entra, and Purview, activate the preview immediately. It won't solve your entire AI governance challenge—it's scoped to the Microsoft telemetry ecosystem—but it gives you the AI asset inventory that 75% of CIOs reported lacking last week. For multi-cloud environments, treat it as one layer in a broader governance stack, not the complete answer.
4. Cyberhaven Report: The Shadow AI Governance Gap Is Widening
Cyberhaven Labs' 2026 AI Adoption & Risk Report, based on billions of real-world data movements across 222 companies, confirms what last week's CIO data implied: enterprise AI adoption is fragmenting faster than governance can follow.
The headline numbers are stark. The top 1% of early-adopter organisations now use over 300 GenAI tools, while conservative enterprises typically use fewer than 15. In frontier companies, 71.4% of employees use GenAI daily; in cautious enterprises, just 2.5%. The gap isn't closing—it's accelerating.
The governance problem: 82% of the top 100 most-used GenAI SaaS applications are classified as medium-to-critical risk. One-third of employees access AI tools through personal accounts—58% of Claude users and 60% of Perplexity users operate outside corporate SSO, bypassing centralised logging, retention policies, and training-data controls. And 39.7% of all AI interactions involve sensitive data, meaning the average employee inputs proprietary information into AI tools once every three days.
The emerging risk frontier: Chinese open-weight models (DeepSeek, Qwen) now account for 50% of all endpoint-based AI usage. Nearly half of developers use coding assistants. And 23% of enterprises have adopted agent-building platforms—creating custom AI workflows that sit entirely outside traditional IT governance.
Do now: Three actions. (1) Audit personal-account AI usage—if a third of your employees are using AI outside corporate controls, your data governance framework has a hole you can't see. (2) Classify your GenAI tool inventory against Cyberhaven's risk framework—82% medium-to-critical risk means most of your tools haven't been properly assessed. (3) Extend governance to coding assistants and agent-building platforms—they're the next shadow AI wave, and 77% of enterprises haven't addressed them yet.
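Action (1) can start as a simple query over the access logs you already have. The sketch below flags AI-tool traffic that bypassed corporate SSO; the record fields, domain list, and function name are hypothetical—in practice this would run over your proxy or CASB logs.

```python
# Known AI-tool domains to watch (illustrative, not exhaustive)
AI_DOMAINS = {"chat.openai.com", "claude.ai", "www.perplexity.ai"}

def flag_personal_account_usage(records: list[dict]) -> list[dict]:
    """Return AI-tool accesses where the session was not authenticated via corporate SSO."""
    return [
        r for r in records
        if r["domain"] in AI_DOMAINS and not r.get("corporate_sso", False)
    ]

# Assumed log shape: one record per session with a corporate_sso flag
logs = [
    {"user": "a.silva", "domain": "claude.ai", "corporate_sso": False},
    {"user": "b.chen",  "domain": "claude.ai", "corporate_sso": True},
    {"user": "a.silva", "domain": "github.com", "corporate_sso": False},
]
for hit in flag_personal_account_usage(logs):
    print(f"{hit['user']} used {hit['domain']} outside corporate SSO")
```

Even this crude pass surfaces the population Cyberhaven describes—the 58% of Claude users operating outside corporate SSO—and gives you a denominator to track as you close the gap.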
5. EU Commission Shifts From Drafting to Enforcing — But the Gaps Persist
The European Commission's AI Office is entering a new phase: active compliance monitoring of general-purpose AI (GPAI) model providers. With GPAI obligations in effect since August 2025, the shift from legislative drafting to enforcement signals that the regulatory apparatus is operational—even if incomplete.
The complications persist. The Commission missed its own February 2 deadline for Article 6 guidelines on high-risk AI system classification, with final adoption now expected in March or April 2026. The Digital Omnibus proposal may push some high-risk deadlines by up to 16 months for sectors where harmonised standards aren't ready—but it's still only a proposal moving through Council and Parliament.
The practical impact: enforcement uncertainty at the moment governance matters most. Member states must establish at least one AI regulatory sandbox by August 2026. Penalties for non-compliance reach up to 7% of global annual turnover or €35 million. Yet the implementation guidance that enterprises need to achieve compliance keeps slipping.
For European enterprise leaders, the message is paradoxical but clear: the enforcement date is fixed, even if the implementation details aren't. Build for the law as written, not for the simplification you hope arrives.
Do now: Track two timelines simultaneously. The first: August 2, 2026—full applicability of the AI Act, including high-risk system obligations. The second: the Digital Omnibus legislative process, which may provide extensions for specific sectors. Don't assume the Omnibus will cover you. Plan for August compliance; treat any extension as a bonus, not a baseline.
Sources: EU Commission AI Act Implementation, IAPP, Cooley
6. Google DeepMind Ships Project Genie — A Glimpse of Where Physical AI Is Heading
Google DeepMind released Project Genie to AI Ultra subscribers in the U.S.—the first time a world model has been made available as a consumer product. Built on Genie 3, an 11-billion-parameter autoregressive transformer, the system generates real-time navigable 3D environments at 720p resolution and 24 frames per second from text prompts. Describe an environment, specify how you want to move through it—walking, flying, driving—and Genie generates the world ahead of you in real time as you navigate.
The system works in three modes: World Sketching (generate a source image that becomes a navigable environment), exploration (move through generated worlds), and remixing (modify environments mid-session). Sessions are currently limited to 60 seconds due to compute constraints—auto-regressive world generation is extraordinarily expensive to run. Access requires Google's $250/month AI Ultra plan and U.S. residency.
The strategic significance isn't the consumer product—it's what world models represent for AI's trajectory. Google frames Genie as a step toward AGI-grade environmental understanding: building agents that navigate complex real-world scenarios requires AI that can model and predict physical environments, not just generate text. Siemens' Digital Twin Composer, announced at CES 2026, is pursuing the same thesis from the industrial side. When you connect world models to Nvidia's 35x agentic cost reduction (story 2), a trajectory emerges: physical-world AI simulation—for manufacturing, logistics, autonomous systems, drug discovery—moves from research to deployment as the economics improve.
Do now: Project Genie isn't enterprise-ready—60-second sessions at $250/month aren't a procurement decision. But if your AI strategy includes physical-world applications—digital twins, autonomous systems testing, scenario planning—track world model development as a category. The compute costs will follow the same deflationary curve as language model inference. The question is whether your use cases are ready when the economics arrive.
Sources: Google Blog, TechCrunch, The Decoder, Engadget
Builder Spotlight
Holistic AI — From UCL Research to Enterprise Governance Platform
Profiling teams building for the European AI reality.
Last week we looked at Zurich-based Modulos AG and their ISO 42001 first-mover position. This week: London-based Holistic AI, which started at University College London and is building the governance platform for the compliance wave about to hit.
Founded in 2020 by Dr. Adriano Koshiyama and Dr. Emre Kazim—both UCL researchers—Holistic AI built an end-to-end AI governance platform that covers the full lifecycle: discovery, risk assessment, testing, monitoring, and compliance. The platform maps risk to NIST AI RMF, ISO 42001, and the EU AI Act simultaneously, which matters because enterprises operating across jurisdictions need governance that works against multiple frameworks, not just one.
The company reached $8 million in revenue in 2025 with a 73-person team—a signal that AI governance tooling is generating real enterprise demand, not just pilot interest. Backers include Dallas Venture Capital, Mozilla Ventures, and Premji Invest. The timing is deliberate: with the EU AI Act's full applicability hitting August 2026 and penalties reaching 7% of global turnover, the company is positioning its platform as the bridge between where enterprises are today (fragmented governance, limited visibility) and where the regulation expects them to be in five months.
The industry recognition supports the positioning. Gartner named Holistic AI a Cool Vendor for AI Security. IDC included it in their ProductScape for Worldwide Generative AI Governance Platforms. Avasant rated it a Leader in their 2025 Responsible AI Platforms RadarView. The analyst validation matters because it signals to enterprise procurement teams that AI governance platforms are an established category, not an experiment.
What distinguishes Holistic AI in the current landscape: the emphasis on third-party AI evaluation. Most enterprises don't just need to govern their own AI—they need to assess AI systems from vendors, partners, and suppliers. When 74% of CIOs regret vendor decisions and 82% of top GenAI tools are classified medium-to-critical risk, the ability to evaluate third-party AI risk isn't a nice-to-have. It's the governance capability most enterprises are missing.
The broader signal, reinforcing last week's thesis: AI governance is consolidating from a consulting engagement into a platform category. Holistic AI (London), Modulos (Zurich), Credo AI (US), and others are building the infrastructure layer that will sit between enterprise AI deployments and regulatory requirements. The question isn't whether organisations will need this tooling. It's whether they'll have it deployed before August.
Deep Dive
The Governance Infrastructure Is Being Built — Whether You Participate Or Not
Last November, the European Commission's AI Office launched something most enterprise AI teams still haven't noticed: an AI Act Whistleblower Tool. Anonymous. Encrypted. Available in any EU language. A direct, secure reporting line to Brussels.
This week, I want to explain why this matters more than any single news story above.
What the Tool Actually Does
Any individual with a professional connection to an AI deployment—employees, contractors, suppliers, shareholders, former staff—can now report suspected AI Act violations directly to the EU AI Office. The reports are anonymised through certified encryption. Whistleblowers receive a secure inbox for follow-up communication, can upload supporting evidence, track the progress of their report, and respond to questions from investigators. All without ever revealing their identity.
The scope of reportable violations is broad: risks to health, safety, or fundamental rights. Manipulation or discrimination. Non-compliance with transparency requirements. Use of prohibited AI practices. Essentially, any breach of the AI Act as written.
The Protection Gap
Here's the catch—and the opportunity. The tool is live now, but the full legal protection framework doesn't activate until August 2, 2026. That's when the EU Whistleblower Directive officially extends to cover AI Act violation reports, providing legal protection against employer retaliation.
Until August, whistleblowers rely on the tool's technical confidentiality—the encryption, anonymisation, and secure communication infrastructure. They're not yet protected by law if their identity is somehow discovered. Some protections exist under adjacent frameworks—product safety, consumer protection, data privacy directives—but the explicit AI Act coverage isn't live.
This creates a five-month window where the reporting mechanism exists but the full legal shield doesn't. The Commission is, in effect, building the enforcement infrastructure ahead of the protection framework. The tool is the foundation; the legal protections are the walls coming in August.
Why This Changes the Governance Calculus
For enterprise AI leaders, the whistleblower tool fundamentally alters the governance equation. Before November 2025, the enforcement model for the AI Act was top-down: regulators would examine, audit, and penalise. That model is slow, resource-constrained, and limited by what external auditors can observe.
The whistleblower tool adds a bottom-up enforcement layer. The people closest to your AI deployments—the engineers building the models, the analysts reviewing outputs, the contractors processing data—now have a direct, protected channel to report what they see. Your regulatory exposure is no longer limited to what an external audit catches. It extends to everything your own people know.
Consider what this means in the context of this week's data. Cyberhaven reports that 54% of enterprises have discovered shadow AI in their environments. One-third of employees access AI through personal accounts. 82% of the most-used GenAI tools are classified medium-to-critical risk. 39.7% of interactions involve sensitive data.
Now imagine any of those facts reaching the AI Office's encrypted inbox.
The Internal Channel Imperative
The strategic response isn't to fear the whistleblower tool. It's to make it unnecessary.
Organisations that build effective internal reporting channels for AI concerns—and actually respond to what's reported—create an environment where employees resolve issues internally rather than escalating to Brussels. The whistleblower tool becomes a backstop, not a first resort.
This is the same pattern that played out with GDPR. The organisations that built robust data protection practices and responsive internal complaint mechanisms avoided the worst enforcement outcomes. The ones that treated compliance as a paper exercise discovered that their own employees, frustrated by being ignored internally, became the source of regulatory complaints.
The AI Act whistleblower dynamic will follow the same trajectory. The question is which side of it your organisation lands on.
The Broader Pattern
Zoom out and this week's stories form a coherent picture:
Microsoft ships a dashboard that surfaces the shadow AI most enterprises can't see—building governance infrastructure whether enterprises activate it or not. Cyberhaven's data reveals that one-third of employees are already operating outside corporate AI controls—the governance gap is a lived reality, not a theoretical risk. Anthropic launches plugin marketplaces that will accelerate AI agent proliferation across enterprises—making the governance challenge more complex before most have solved the current one. And the EU's whistleblower tool gives the people inside those organisations a direct line to the regulator.
The governance infrastructure is being built—by vendors, by regulators, by standards bodies, by the employees inside your organisation who know what's actually happening with AI. The only question is whether your enterprise is building its own governance infrastructure at the same pace, or whether you'll discover the gap when someone else reports it for you.
What This Means for Your Next Board Conversation
Three questions to bring to the table:
First: do we have an internal reporting channel for AI-related concerns? Not a generic compliance hotline—a specific mechanism for employees to flag AI risks, bias incidents, data handling concerns, and potential regulatory violations. If you don't have one, your employees now have Brussels as a default option.
Second: do we know what's running? Microsoft's dashboard, Cyberhaven's data, and last week's CIO survey all point to the same problem. If you can't inventory your AI agents, assess their risk levels, and trace their data access, you can't govern them. And if you can't govern them, you can't comply.
Third: are we building governance as infrastructure, or treating it as a compliance project? The organisations that will navigate August 2026 successfully are the ones building governance into their AI stack—continuous monitoring, automated compliance checks, real-time visibility. Not the ones who plan to paper over it with documentation in July.
The whistleblower tool is the clearest signal yet: the EU isn't waiting for enterprises to be ready. The enforcement infrastructure is live. The protection framework arrives in August. The only variable is whether your governance infrastructure arrives first.
That's it for this week. The EU's whistleblower tool is the quiet story that will outlast every headline above. It shifts AI Act enforcement from a top-down regulatory exercise to a distributed accountability mechanism—one where the people closest to your AI deployments have a direct line to Brussels.
Microsoft is building governance dashboards. Cyberhaven is mapping the shadow AI your security team can't see. Anthropic is shipping plugin marketplaces that will scatter AI agents across every department. And the Commission has handed anyone with a professional connection to your AI systems an encrypted channel to report what they find.
The organisations that will thrive aren't the ones with the most advanced models. They're the ones where governance works well enough that the whistleblower tool gathers dust.
Build your internal channels. Inventory your AI estate. Make August 2026 a milestone, not a surprise.
Until next Thursday, João
OnAbout.AI delivers strategic AI analysis to enterprise technology leaders. European governance lens. Vendor-agnostic. Actionable.


