The Pentagon is close to designating Anthropic a "supply chain risk"—a label normally reserved for foreign adversaries—because the company won't let Claude be used for mass surveillance or autonomous weapons. OpenAI, Google, and xAI already removed their safeguards. The same week, 74% of CIOs admitted they regret at least one major AI vendor decision from the past 18 months, 82% said AI agents are being built faster than IT can govern them, and the SEC quietly dropped crypto from its examination priorities for the first time since 2018, elevating AI governance in its place.

Three different institutions—military, enterprise, regulatory—all arriving at the same conclusion: governance isn't a compliance checkbox. It's the thing that determines whether AI deployments survive contact with reality.

TL;DR
  • Pentagon threatens to blacklist Anthropic over AI safeguards: Hegseth close to designating Anthropic a "supply chain risk"—a penalty normally reserved for foreign adversaries—because Anthropic won't remove guardrails on mass surveillance and autonomous weapons. $200M contract at stake; Claude is the only AI model available in the military's classified systems

  • GenAI.mil crosses 1.1M military users; ChatGPT added: Five of six U.S. military branches now use the Pentagon's AI platform as their primary AI tool. OpenAI joins Google and xAI on the platform—Anthropic's Claude remains absent from unclassified deployment

  • 74% of CIOs regret AI vendor decisions: Dataiku/Harris Poll survey of 600 CIOs reveals 71% face a mid-2026 deadline to prove AI value or lose budgets. 82% say AI agents outpace governance. Only 25% have real-time visibility into production AI agents

  • "RAMmageddon" — AI memory chip shortage hits enterprise costs: DRAM prices up 75–90% in a single quarter. AI datacenters consuming 70% of global high-end memory. PC prices up 15–20%. Micron calls it "unprecedented"

  • SEC drops crypto, elevates AI governance as top 2026 exam priority: First time since 2018 crypto is absent from SEC priorities. AI governance, "AI washing" scrutiny, and cybersecurity now dominate examination focus

  • International AI Safety Report 2026 published: Led by Yoshua Bengio, authored by 100+ experts backed by 30+ countries—the largest global AI safety collaboration to date. Key finding: most risk management remains voluntary

Different by design.

There’s a moment when you open the news and it already feels like work. That’s not how staying informed should feel.

Morning Brew keeps millions of readers hooked by turning the most important business, tech, and finance stories into smart, quick reads that actually hold your attention. No endless walls of text. No jargon. Just snappy, informative writing that leaves you wanting more.

Each edition is designed to fit into your mornings without slowing you down. That’s why people don’t just open it — they finish it. And finally enjoy reading the news.

The Brief

1. Pentagon Threatens to Blacklist Anthropic Over AI Safeguards

Defense Secretary Pete Hegseth is "close" to designating Anthropic a "supply chain risk"—a penalty normally reserved for foreign adversaries like Chinese tech firms. The consequence: every company doing business with the Pentagon would have to certify they don't use Claude in their own workflows. Given that eight of the ten biggest U.S. companies use Claude, the collateral damage would be enormous.

The core dispute: Anthropic is willing to loosen its terms for military use, but draws two lines—no mass surveillance of Americans and no autonomous weapons without human involvement. The Pentagon insists on unrestricted use for "all lawful purposes." OpenAI, Google, and xAI have already agreed to remove safeguards for unclassified military systems. Anthropic is the only holdout.

The complication: Claude is the only AI model currently available in the military's classified systems. A senior administration official acknowledged that competing models "are just behind" for specialised government applications. The $200M contract at stake is part of an $800M pool split four ways among OpenAI, Anthropic, Google, and xAI.

The flashpoint was the Maduro raid in January. An Anthropic executive contacted Palantir to ask whether Claude had been used in the operation—a query Pentagon officials interpreted as the company trying to police military operations.

Constellation Research analyst Holger Mueller captured it: this standoff "will decide if AI companies have the power to limit how their tools and products are used, or if buyers can force the issue."

Do now: This isn't just a U.S. defence story. If Anthropic is designated a supply chain risk, European enterprises using Claude in any Pentagon-adjacent supply chain face immediate compliance questions. Audit your defence sector exposure. More broadly, watch how this resolves—it sets precedent for whether AI vendors retain governance authority over their own products, or whether buyers dictate terms regardless of ethical constraints.

2. GenAI.mil Crosses 1.1 Million Military Users — ChatGPT Joins the Platform

The Pentagon's enterprise AI platform, GenAI.mil, now claims 1.1 million unique users across the U.S. military—roughly half of all active-duty personnel. Five of six military branches (Army, Air Force, Space Force, Marine Corps, Navy) have designated it their primary AI tool. The Coast Guard remains the holdout, likely due to its Department of Homeland Security reporting structure.

On February 9, OpenAI's ChatGPT was added as the third model on the platform, joining Google's Gemini and xAI's Grok. Notably absent: Anthropic's Claude, the model at the centre of the supply chain risk standoff. The platform launched just two months ago and has scaled faster than any enterprise AI deployment in history—while Fortune 500 companies struggle to reach 10% employee adoption, the Pentagon onboarded 1.1 million users in that window.

The scale is impressive but the use cases remain basic: memos, PowerPoints, award packages. Some service members question whether the platform can move beyond administrative automation to become a genuine military capability. The real test comes when governance questions—exactly the ones Anthropic is raising—move from policy debates to operational consequences.

All models on GenAI.mil are modified versions approved for sensitive but unclassified data. Data processed on the platform remains isolated from commercial training pipelines.

Do now: The Pentagon's 1.1M-user deployment is the largest enterprise AI rollout on record. Study the adoption mechanics, not just the controversy. Their approach—platform-level deployment with multi-model optionality—is the architecture pattern your own enterprise AI strategy should be evaluating. The governance gap (scaling deployment while governance lags) is a preview of what every large organisation will face.
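
For readers mapping this onto their own stack, here is a minimal Python sketch of the multi-model pattern: one gateway that registers several providers and routes each request to a model approved for that data tier. Every name in it (MultiModelGateway, ModelProvider, the sensitivity tiers) is an illustrative assumption, not GenAI.mil's actual design.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Dict, Optional, Set


class Sensitivity(Enum):
    PUBLIC = "public"
    SENSITIVE_UNCLASSIFIED = "sensitive_unclassified"
    CLASSIFIED = "classified"


@dataclass
class ModelProvider:
    name: str
    approved_for: Set[Sensitivity]    # data tiers this model is cleared to handle
    generate: Callable[[str], str]    # stand-in for the vendor SDK call


class MultiModelGateway:
    """One entry point; requests route to whichever registered model is approved for the data tier."""

    def __init__(self) -> None:
        self._providers: Dict[str, ModelProvider] = {}

    def register(self, provider: ModelProvider) -> None:
        self._providers[provider.name] = provider

    def complete(self, prompt: str, tier: Sensitivity, preferred: Optional[str] = None) -> str:
        # Honour the caller's preference only if that model is cleared for the tier.
        if preferred in self._providers and tier in self._providers[preferred].approved_for:
            return self._providers[preferred].generate(prompt)
        # Otherwise fall back to any provider cleared for the tier.
        for provider in self._providers.values():
            if tier in provider.approved_for:
                return provider.generate(prompt)
        raise PermissionError(f"no registered model is approved for tier '{tier.value}'")


if __name__ == "__main__":
    gateway = MultiModelGateway()
    gateway.register(ModelProvider("model-a", {Sensitivity.PUBLIC, Sensitivity.SENSITIVE_UNCLASSIFIED},
                                   lambda p: f"[model-a] {p}"))
    gateway.register(ModelProvider("model-b", {Sensitivity.PUBLIC},
                                   lambda p: f"[model-b] {p}"))
    print(gateway.complete("Draft the weekly readiness memo.", Sensitivity.SENSITIVE_UNCLASSIFIED))
```

The point of the pattern is optionality: adding or dropping a vendor becomes a registration change rather than a re-architecture, which is exactly the leverage the Pentagon is exercising in the standoff above.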

3. The CIO ROI Reckoning: 74% Regret, 82% Can't Govern

Dataiku and Harris Poll surveyed 600 CIOs from companies with $500M+ revenue across the U.S., UK, France, Germany, UAE, Japan, South Korea, and Singapore. The findings are a governance indictment.

The headline: 71% believe their AI budget will face cuts or freezes if targets aren't met by mid-2026. Only 40% can directly link half or more of their AI initiatives to measurable cost savings or revenue. The clock is ticking and the measurement infrastructure doesn't exist.

The vendor problem: 74% regret at least one major AI vendor or platform decision from the past 18 months. 62% say their CEO has directly questioned those decisions. This isn't buyer's remorse—it's strategic misalignment between what vendors sold and what organisations needed.

The governance gap: 82% say AI agents are being built faster than IT can govern them. Only 25% have real-time visibility into all AI agents running in production. 54% have discovered shadow AI in their environments. 85% say traceability or explainability gaps have already delayed or stopped AI projects from reaching production.

The existential layer: nearly 75% said their company would suffer major disruption if the "AI bubble" burst. 60% say their own job is on the line. 57% think an AI collapse could end their organisations entirely.

Do now: If you're a CIO reading this, the mid-2026 deadline is real. Three actions: (1) establish AI agent registries—you can't govern what you can't see, and 75% of your peers can't; (2) build traceability infrastructure before it blocks your next production deployment; (3) renegotiate vendor contracts with governance clauses—74% vendor regret means the leverage is yours. Vendors need renewals more than you need their specific model.
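
A registry does not have to be elaborate to be useful. The sketch below is a hypothetical minimal inventory in Python (field names and the review threshold are assumptions, not anything from the survey) showing the kind of record that makes shadow AI and overdue reviews visible at all.

```python
from dataclasses import dataclass
from datetime import date
from typing import Dict, List, Optional


@dataclass
class AgentRecord:
    agent_id: str
    owner: str                    # accountable team or person
    model: str                    # underlying model / vendor
    data_domains: List[str]       # data the agent is allowed to touch
    in_production: bool = False
    last_reviewed: Optional[date] = None


class AgentRegistry:
    """Minimal inventory: you can only govern the agents you can enumerate."""

    def __init__(self) -> None:
        self._agents: Dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.agent_id] = record

    def production_agents(self) -> List[AgentRecord]:
        return [a for a in self._agents.values() if a.in_production]

    def shadow_agents(self, observed_ids: List[str]) -> List[str]:
        # Anything that shows up in telemetry but was never registered is shadow AI.
        return [agent_id for agent_id in observed_ids if agent_id not in self._agents]

    def overdue_reviews(self, today: date, max_age_days: int = 90) -> List[AgentRecord]:
        return [a for a in self._agents.values()
                if a.last_reviewed is None or (today - a.last_reviewed).days > max_age_days]


if __name__ == "__main__":
    registry = AgentRegistry()
    registry.register(AgentRecord("invoice-triage", "finance-ops", "vendor-llm-1",
                                  ["invoices"], in_production=True, last_reviewed=date(2025, 11, 1)))
    print(registry.shadow_agents(["invoice-triage", "unregistered-helpdesk-bot"]))
```

Even this much lets you answer the two questions most of the surveyed CIOs cannot: what is actually running in production, and what has nobody reviewed lately.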

4. "RAMmageddon": AI's Memory Chip Shortage Hits Enterprise Budgets

Bloomberg's latest reporting (Feb 15) confirms what enterprise procurement teams are already feeling: DRAM prices surged 75–90% in a single quarter, and the shortage is structural, not cyclical. The cause is straightforward—Samsung, SK Hynix, and Micron have diverted manufacturing capacity toward HBM (High Bandwidth Memory) chips used in AI accelerators, leaving conventional DRAM supply constrained.

The numbers: AI data centres will consume 70% of global high-end DRAM production in 2026. Micron's CEO explained the trade-off: producing one bit of HBM requires forgoing three bits of conventional memory. Meta, Microsoft, Amazon, and Alphabet are spending an estimated $650 billion on data centres in 2026—up from $360 billion last year and $217 billion in 2024.

The enterprise impact is direct. PC prices are rising 15–20%. Server costs are being passed through to cloud customers. Micron has exited consumer branding entirely to focus on AI memory. Tesla is considering building its own memory fabrication plant.

The timeline for relief: Micron's new fabs in Idaho start producing in 2027–2028, with a New York facility expected by 2030. TrendForce estimates no meaningful supply improvement until 2028.

Do now: Factor memory cost inflation into your 2026–2027 infrastructure budgets. If you're planning hardware refreshes or on-premise AI deployments, accelerate procurement—prices are rising, not stabilising. For cloud-dependent strategies, expect sovereign and private cloud pricing to increase as providers pass through memory costs. This is a multi-year constraint, not a quarterly blip.
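
To put rough numbers on that planning exercise, the back-of-the-envelope sketch below applies the reported 75–90% DRAM inflation band to the memory-dependent share of a hardware budget. The 15% memory share and the 1M refresh budget are placeholder assumptions to swap for your own bill of materials.

```python
def inflated_hardware_budget(base_budget: float,
                             memory_share: float = 0.15,
                             inflation_band: tuple = (0.75, 0.90)) -> tuple:
    """Return a (low, high) range for a refresh budget when only the memory
    line item inflates. The 15% memory share and the 75-90% band are
    placeholder assumptions, not figures from this article."""
    low = base_budget * (1 + memory_share * inflation_band[0])
    high = base_budget * (1 + memory_share * inflation_band[1])
    return low, high


if __name__ == "__main__":
    low, high = inflated_hardware_budget(1_000_000)  # hypothetical EUR 1M refresh budget
    print(f"Plan for roughly {low:,.0f} to {high:,.0f}")
```

With those placeholder inputs the refresh lands roughly 11–14% higher, which is the order of magnitude worth defending in a 2026–2027 budget round.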

5. SEC Drops Crypto, Elevates AI Governance as Top 2026 Exam Priority

For the first time since 2018, cryptocurrency is absent from the SEC's annual examination priorities. What replaced it: AI governance, "AI washing" scrutiny, and cybersecurity as operational risk.

The shift is significant. Two years ago, the SEC treated AI as an emerging fintech curiosity. In 2026, it's classified as operational risk—linked to cybersecurity, disclosures, and the use of automated systems for critical business functions. Examiners will scrutinise whether firms' AI-related disclosures, supervisory frameworks, and controls align with actual practices.

The practical implications: firms using AI for investment recommendations, risk assessments, or client-facing services face examination on whether their representations are accurate and whether AI-driven recommendations meet regulatory expectations. "AI washing"—overstating AI capabilities in marketing or filings—is now an explicit examination target.

For European enterprises, this matters because SEC enforcement sets de facto global standards for financial services. If the SEC is examining AI governance practices, European financial regulators will follow. The EU AI Act's high-risk classification for AI in financial services aligns directly with the SEC's new examination focus.

Do now: If you operate in financial services—or supply AI tools to financial services firms—the SEC's shift from crypto to AI governance signals where enforcement resources are moving. Audit your AI-related disclosures for accuracy. Ensure your AI governance framework can withstand regulatory examination, not just internal review. The gap between marketing claims and operational reality is now an explicit regulatory target.

6. International AI Safety Report 2026: 100+ Experts, 30+ Countries, One Warning

The second International AI Safety Report, published this month, represents the largest global collaboration on AI safety to date. Led by Turing Award winner Yoshua Bengio and authored by over 100 AI experts, the report is backed by more than 30 countries and international organisations.

The central finding: most AI risk management initiatives remain voluntary, but a growing number of jurisdictions are beginning to formalise practices into legal requirements. The gap between voluntary frameworks and mandatory enforcement is where enterprise risk concentrates.

The report arrives at a moment when voluntary governance is being stress-tested from every direction—the Pentagon pressuring Anthropic to drop safeguards, the EU enforcing platform-level AI access rights, and three in four CIOs discovering they lack real-time visibility into the AI agents running in their own environments.

For European enterprises, the report reinforces the trajectory: governance frameworks that are optional today become mandatory tomorrow. The organisations building governance infrastructure now aren't over-investing—they're front-running regulation that the report makes clear is coming across jurisdictions.

Do now: Read the report's executive summary. Use it as a benchmark for your own AI governance maturity. The report's framework for risk categorisation aligns with the EU AI Act's high-risk classification—if you're building compliance for August 2026 enforcement, the International AI Safety Report provides the global context that auditors and regulators will reference.

Builder Spotlight

Modulos AG
Governance as a Platform

Profiling teams building for the European AI reality.

When 82% of CIOs say AI agents are being built faster than IT can govern them, and only 25% have real-time visibility into what's running in production, the question isn't whether governance tooling is needed. It's whether anyone has built it properly.

Zurich-based Modulos AG has been working on this problem since 2018—years before the EU AI Act existed, and long before "AI governance" became a boardroom talking point. Founded by ETH Zurich researchers, the company built an AI governance platform that connects regulatory frameworks, compliance requirements, and operational controls into a unified system with human-in-the-loop AI agents that monitor, adapt, and flag misalignment in real time.

In July 2025, Modulos became the first AI governance platform to achieve product conformity with ISO/IEC 42001—the first auditable international standard for AI management systems—in an assessment by Swiss auditor CertX. The certification matters because ISO 42001 is becoming the baseline that enterprise buyers and regulators reference when evaluating AI governance maturity. Their research shows that up to 50% of ISO 42001 controls can be reused when transitioning to EU AI Act compliance, cutting the implementation path roughly in half for organisations that start with the standard.

The company has positioned itself inside the regulatory apparatus, not adjacent to it. Modulos actively participates in the EU AI Office, the CEN/CENELEC European Standards Body shaping AI Act technical standards, and NIST's AI Safety Institute. Most recently, they joined the Spanish government's high-risk AI sandbox programme—one of the first practical testing environments for AI Act compliance.

In July 2025, Modulos closed a CHF 8.7M pre-Series A round to scale the platform ahead of the EU AI Act's August 2026 enforcement deadline. The timing is deliberate: with high-risk system enforcement months away and the Commission's own implementation guidance delayed, enterprises need governance infrastructure that works against evolving specifications—not static compliance checklists.

The broader signal: governance is moving from a consulting engagement to a platform category. Just as security became a product (firewalls, SIEM, zero trust), AI governance is becoming infrastructure. When 85% of CIOs say traceability gaps have already blocked production deployments, the market is calling for tooling, not more frameworks.

Modulos isn't the only player—Holistic AI (London), Credo AI (US), and others are building in the same space. But the European origin, standards-body engagement, and ISO 42001 first-mover position make them a useful signal of where enterprise AI governance is heading: from PowerPoint to production.

Deep Dive

When Governance Becomes the Product

This week, three governance collisions happened simultaneously. They look like separate stories. They're the same story.

The Pentagon Collision

The U.S. Department of Defense is threatening to blacklist the only AI company that maintains ethical guardrails. Anthropic's position—no mass surveillance, no autonomous weapons without human control—is being treated as a defect rather than a feature. The penalty being considered—"supply chain risk" designation—would force every Pentagon contractor to certify they don't use Claude, despite eight of the ten largest U.S. companies relying on it.

The strategic question isn't whether Anthropic caves. It's what the standoff reveals about the AI vendor landscape. OpenAI, Google, and xAI removed their safeguards. Anthropic held the line. The Pentagon's response: threaten to destroy the holdout's government business.

For enterprise buyers, this is a governance signal at maximum clarity. When your AI vendor faces pressure to remove safeguards, will they protect your deployment's integrity or capitulate? The Pentagon is stress-testing this question on your behalf.

The Platform Collision

Last week, the European Commission charged Meta with abuse of dominant position for blocking third-party AI assistants from WhatsApp. This wasn't a regulatory formality—it was the second time since 2003 that the Commission has moved to impose interim measures in an antitrust case.

Meta's calculus was simple: lock 3 billion WhatsApp users into Meta AI by banning ChatGPT, Perplexity, and every other AI assistant from the platform. The Commission's response: that's not your decision to make. Platform dominance doesn't grant the right to control AI distribution.

The governance parallel to the Pentagon story is direct. In both cases, a powerful institution (the Pentagon, Meta) attempted to dictate how AI is deployed—and ran into governance constraints. The Pentagon hit Anthropic's ethical guardrails. Meta hit European competition law. Different arenas, same pattern: governance determines who controls AI deployment, and the controllers don't always win.

The Enterprise Collision

The Dataiku/Harris Poll data completes the picture from inside the enterprise. 82% of CIOs say AI agents are being built faster than IT can govern them. 74% regret vendor decisions. 85% say traceability gaps have blocked production deployments.

Connect the dots: the Pentagon can't dictate how its own AI vendor's model is used in operations. Meta can't govern competitor access to its platform without regulatory consequences. And enterprises can't govern the AI agents proliferating across their own environments.

Governance isn't failing at one level. It's failing at every level simultaneously.

The Convergence

Here's the thesis: governance is becoming the primary differentiator between AI vendors, between platforms, and between enterprises that scale and those that stall.

Consider the vendor layer. The Pentagon standoff will force every enterprise to evaluate their AI providers on governance posture, not just capability benchmarks. Anthropic's willingness to lose a $200M contract rather than enable mass surveillance is a governance position that some enterprise buyers will view as a feature—especially in regulated European industries where the EU AI Act demands exactly this kind of ethical boundary.

Consider the platform layer. The Commission's enforcement against Meta establishes that platform governance—who gets access, under what terms—is subject to regulatory oversight. Every enterprise deploying AI through platform-dependent channels (Microsoft 365, Google Workspace, Salesforce) should now expect similar scrutiny of how those platforms govern AI access.

Consider the enterprise layer. The CIO data reveals the internal governance deficit. When only 25% of enterprises have visibility into their production AI agents, and 54% have discovered shadow AI they didn't authorise, the governance problem isn't regulatory—it's operational. You can't comply with the EU AI Act if you don't know what's running in your own environment.

The European Position

The EU AI Act's high-risk enforcement hits August 2026. But the Digital Omnibus proposal may push some deadlines to late 2027 if standards aren't ready. The Commission missed its own February 2 deadline for Article 6 guidelines. So you have enforcement uncertainty at the exact moment governance has proven essential.

The supporting data underscores the urgency:

54% of IT leaders now rank AI governance as a core concern—nearly double the 29% recorded in 2024. Three in five organisations have suffered AI-related losses exceeding $1 million. The SEC's 2026 exam priorities place AI governance above crypto for the first time. And Yoshua Bengio's International AI Safety Report—backed by 30+ countries—confirms that voluntary governance is transitioning to mandatory requirements across jurisdictions.

European enterprises face a paradox: the regulatory framework demands governance maturity that most organisations haven't built, but the penalty for building it late is worse than building it early. The 85% of CIOs who say traceability gaps have already blocked production deployments aren't facing a future problem. They're living the consequences now.

What This Means for Your Next Board Conversation

The question has shifted. It's no longer "do we need AI governance?" It's "is our governance infrastructure a competitive asset or a bottleneck?"

The Pentagon story shows what happens when governance is absent at the deployment level. The Meta story shows what happens when platform governance meets regulatory enforcement. The CIO data shows that 74% vendor regret is the enterprise version of the same failure.

The organisations that will scale AI in 2026 and beyond are the ones treating governance not as compliance overhead, but as product infrastructure—as fundamental to their AI stack as compute, models, and data pipelines.

The 25% of CIOs with real-time visibility into their production AI agents? They're not over-investing in governance. They're the only ones who can actually prove compliance when August arrives.

Everyone else is building on a foundation they can't see, can't audit, and can't defend.

That's it for this week. Governance went from compliance exercise to competitive differentiator in the time it took the Pentagon to threaten its most capable AI vendor. The Commission is enforcing platform governance through antitrust law. The SEC has elevated AI governance above crypto. And 82% of CIOs admit they can't govern what they've already deployed.

The organisations that treat governance as product infrastructure—built into their AI stack, not bolted on after deployment—will be the ones that scale, survive audits, and retain the trust of customers and regulators. Everyone else is one incident away from discovering what ungoverned AI actually costs.

The Pentagon standoff will resolve. The governance question won't. It's the permanent condition of deploying AI at scale, and the sooner your organisation builds for that reality, the further ahead you'll be when everyone else starts.

Until next Thursday, João

OnAbout.AI delivers strategic AI analysis to enterprise technology leaders. European governance lens. Vendor-agnostic. Actionable.
