Every headline sells an opinion. Except ours.
Remember when the news was about what happened, not how to feel about it? 1440's Daily Digest is bringing that back. Every morning, they sift through 100+ sources to deliver a concise, unbiased briefing — no pundits, no paywalls, no politics. Just the facts, all in five minutes. For free.
Last weekend, the US military used an AI model in a live strike against Iran. The company that built it had explicitly refused to allow that kind of deployment. It got banned by the White House for drawing that line — and then posted its largest sign-up day in history.
What followed was less a policy debate than a live case study.
If you're a European enterprise leader with a hyperscaler AI contract, this week forced a question you can no longer defer: what are your AI vendor's red lines, and who enforces them when a government doesn't like the answer?
The answer, it turns out, is nobody — unless the vendor itself decides to.
TL;DR
The Anthropic-Pentagon rupture is the first live test of AI governance under wartime pressure. The White House ordered a phase-down of Anthropic's federal contracts after the company refused unrestricted military access. US Central Command had used Claude for intelligence assessments, target identification, and battle scenario simulations during Operation Epic Fury. The market rewarded the stance: Claude hit #1 in the US App Store. European enterprises now have a concrete benchmark for evaluating vendor governance commitments.
AI infrastructure in the Gulf is now a kinetic war risk. Israeli and US strikes on Iran killed Supreme Leader Khamenei and senior IRGC commanders. Iran retaliated with 247 ballistic missiles and 230 drones targeting Dubai, Bahrain, and Saudi Arabia. Billions in regional AI investment — including Saudi Arabia's $100B Project Transcendence — carry a new, non-theoretical risk class.
Governance readiness is declining, not improving. Deloitte's State of AI 2026 finds only 30% of enterprises rate governance as highly prepared, talent readiness has dropped to 20% — below last year — and shadow AI is now the default condition in most organisations, with breaches costing $670,000 more than standard incidents due to delayed detection.
Europe is building its own answer — on multiple fronts. The €75M EURO-3C sovereign cloud and AI infrastructure project launched at MWC with 70+ organisations including Telefónica. NATO certified iPhone and iPad for classified use after Germany's BSI tested them. And Apple is paying Google $1B/year for Gemini inside Siri — creating a governance dependency chain from NATO-classified devices to Pentagon-negotiating AI vendors.
The AI arms race just got a price tag. Global AI capex is projected at $527 billion for 2026. OpenAI raised $110 billion at a $730B valuation. The infrastructure layer is consolidating fast — and the vendors building it are the same ones negotiating military contracts.
The Brief
1. The Anthropic-Pentagon Rupture: AI Governance Under Wartime Pressure
Two sources confirmed that US Central Command used Anthropic's Claude model during strikes on Iran over the weekend of February 28 — Operation Epic Fury — employing it for intelligence assessments, target identification, and simulating battle scenarios. This use occurred despite Anthropic explicitly pushing for guardrails preventing military applications involving mass surveillance and fully autonomous weapons systems. CEO Dario Amodei stated publicly that the company "cannot in good conscience" accept Department of Defense demands for unrestricted access.
The Pentagon wanted "all lawful purposes" access — a phrase it described as "a simple, common-sense request." Anthropic refused. The White House ordered a six-month phase-down of all federal Anthropic contracts. Defense Secretary Pete Hegseth designated Anthropic a "Supply-Chain Risk to National Security." Multiple agencies — Treasury, State Department, HHS — confirmed they would stop using Anthropic products.
The market's response: Claude hit #1 in the US App Store. Anthropic posted its largest sign-up day in history. Every subsequent day that week set a new all-time record.
Do now: This is your governance benchmark. Pull your AI vendor agreements and check whether acceptable use policies are aspirational ("our model should not be used for harm") or contractual ("we will terminate access if deployed for autonomous weapons systems"). The distinction just proved material. See Deep Dive for the full contract audit framework.
2. Middle East: AI Infrastructure Is Now a Kinetic War Risk
The US and Israeli strikes on Iran beginning February 28 killed Supreme Leader Khamenei along with IRGC commander Mohammad Pakpour, Defense Minister Amir Nasirzadeh, and other senior officials. Strikes targeted 24 provinces.
Iranian retaliation was massive and geographically dispersed: at least 247 ballistic missiles and 230 drones struck across the Gulf. Dubai International Airport sustained damage with four staff injured. Missiles targeted the US Fifth Fleet headquarters in Manama, Bahrain, triggering evacuations in the Juffair district. Saudi military infrastructure in Riyadh and the Eastern Province intercepted additional Iranian missile and drone attacks.
Billions in AI infrastructure investment — including Saudi Arabia's $100B Project Transcendence — now carry kinetic war as a formal risk factor.
Do now: If you're evaluating data centre locations in the Gulf, your risk model just changed categories. Geographic diversification of AI infrastructure is no longer about latency or regulatory arbitrage — it's about whether your training clusters survive a regional escalation. Add kinetic conflict scenarios to any Gulf-based AI infrastructure procurement assessments immediately.
Sources: Al Jazeera, USNI News, Washington Post
3. The Largest Cyberattack in History — Running in Parallel
Israel simultaneously launched what analysts describe as the largest cyberattack in history against Iran, contributing to a near-total internet blackout and disruption of government services. AI-enhanced phishing, malware, and ransomware spiked, with at least 128 confirmed cyber incidents in early 2026 concentrated on government, banking, and financial services.
Do now: The convergence of kinetic and cyber warfare is now operational reality. AI-powered offensive cyber capabilities are being deployed at nation-state scale. If your AI serving infrastructure shares geographic or network proximity with military targets, update your threat model. Your inference endpoints are on the same internet as nation-state offensive tools.
Sources: SecurityWeek, CloudSEK, The Register
4. Deloitte: Governance Readiness Is Getting Worse
Deloitte's State of AI 2026 report — surveying 3,235 senior leaders between August and September 2025 — delivers an uncomfortable finding: governance preparedness has declined year-over-year. Only 30% of enterprises rate governance as highly prepared. Technical infrastructure readiness sits at 43%. Talent readiness has fallen to just 20% — lower than last year. The gap between AI deployment velocity and governance maturity is widening, not closing.
Do now: If your board thinks AI governance is a problem you're solving, the data says otherwise. Most organisations are falling further behind, not catching up. The August 2026 EU AI Act enforcement date doesn't care about your readiness timeline. Bring the 30% / 20% numbers to your next board conversation — they reframe governance from "we're working on it" to "the industry is losing ground."
5. Shadow AI Is Now the Default Condition
JetStream Security, built by CrowdStrike and SentinelOne veterans, raised $34M this week (led by Redpoint Ventures) specifically targeting shadow AI governance. Separate research puts it in sharper terms: the average enterprise now has 1,200 unofficial AI applications, with 86% reporting no visibility into AI data flows. Shadow AI breaches cost $670,000 more than standard incidents due to delayed detection.
Shadow AI is no longer an edge case. It's the baseline.
Do now: Three numbers for your CISO conversation: 1,200 (average unofficial AI apps per enterprise), 86% (no visibility into AI data flows), $670K (breach cost premium from shadow AI). If your organisation can't inventory its AI estate, it can't govern it. And if it can't govern it, it can't comply with August 2026 obligations.
Sources: Help Net Security, Fortune, TechStartups
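Those three numbers can be turned into a rough board-ready exposure figure. The sketch below uses the survey averages cited above; the per-app annual breach rate is an illustrative assumption, not a benchmark — substitute your own estate inventory and incident history.

```python
# Illustrative shadow-AI exposure estimate. Inputs are the survey
# averages from the brief; the breach rate is a placeholder assumption.

def shadow_ai_exposure(unofficial_apps: int,
                       visibility_gap: float,
                       breach_premium: float,
                       annual_breach_rate: float) -> dict:
    """Rough expected annual cost premium from ungoverned AI apps."""
    ungoverned = unofficial_apps * visibility_gap        # apps with no data-flow visibility
    expected_breaches = ungoverned * annual_breach_rate  # expected breaches per year
    premium = expected_breaches * breach_premium         # extra cost vs standard incidents
    return {
        "ungoverned_apps": round(ungoverned),
        "expected_breaches": round(expected_breaches, 1),
        "annual_cost_premium_usd": round(premium),
    }

estimate = shadow_ai_exposure(
    unofficial_apps=1200,      # average unofficial AI apps per enterprise
    visibility_gap=0.86,       # share with no visibility into AI data flows
    breach_premium=670_000,    # cost premium per shadow-AI breach
    annual_breach_rate=0.005,  # assumed 0.5% per-app annual breach rate
)
print(estimate)
```

The point is not the precision — it's that even a conservative breach-rate assumption puts the annual premium in seven figures, which is the framing a board understands.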
6. EU AI Act: High-Risk Enforcement Delayed, But August 2026 Stands
The EU's Digital Omnibus is proposing to push high-risk AI enforcement deadlines: Annex III systems to December 2, 2027, and Annex I systems to August 2, 2028 — citing insufficient standards, guidance, and tools for realistic compliance. However, the main August 2, 2026 full applicability date remains firm. Article 50 transparency obligations — requiring AI interaction disclosure, synthetic content labelling, and deepfake identification — become enforceable this summer. Governance rules and GPAI model obligations have been in effect since August 2025.
The Commission also opened the €75M EURO-3C project for federated sovereign infrastructure this week at MWC, with 70+ European entities including Telefónica building edge-cloud-AI infrastructure designed to reduce hyperscaler dependence.
Do now: Track two timelines simultaneously. The first: August 2, 2026 — full applicability including Article 50 transparency requirements. The second: the Digital Omnibus legislative process, which may provide extensions for specific high-risk sectors. Don't assume the Omnibus will cover you. Plan for August compliance; treat any extension as a bonus, not a baseline.
Sources: EU Commission EURO-3C, IAPP, Morrison Foerster
7. NATO Certifies iPhone and iPad for Classified Use — A European Security Win
In a historic first, NATO has approved iPhone and iPad to handle classified information at the "restricted" level — making them the first consumer devices ever to achieve this certification across the alliance. Germany's Federal Office for Information Security (BSI) conducted the rigorous testing. iOS 26 and iPadOS 26 are now in the NATO Information Assurance Product Catalogue.
The approval comes without requiring third-party add-on solutions — a first for any consumer mobile platform at NATO classification level. Key security features cited include Apple's encryption architecture, biometric authentication, Memory Integrity Enforcement, and Private Cloud Compute infrastructure.
Do now: European security testing (BSI) just validated a consumer platform at military classification grade. Two implications: (1) your mobile device management strategy now has NATO-grade validation to reference for board-level conversations, and (2) European institutional testing is setting the global security bar. But read the next story before celebrating — because the devices NATO just certified are about to run an AI model whose parent company is negotiating military contracts with the Pentagon.
Sources: Apple Newsroom, 9to5Mac, SecurityWeek
8. Apple Pays Google $1B/Year to Put Gemini Inside Siri — The Dependency Chain
Apple is paying Google approximately $1 billion annually — potentially $5 billion over the contract's life — to power a rebuilt Siri with Google's Gemini 2.5 Pro (1.2 trillion parameters). The deal, finalised in January 2026, represents an 8x increase in model complexity over Apple's existing 150-billion-parameter Apple Intelligence models. Gemini will handle Siri's summariser and planner functions, running on Apple's Private Cloud Compute servers — meaning no user data flows to Google.
Connect the dots from Brief #7. NATO-certified iPhones run Siri. Siri will be powered by Gemini. Google is negotiating military AI access with the Pentagon. The governance chain from classified NATO communication to consumer AI to military AI vendor is now a single supply chain.
Do now: The model layer is consolidating. Apple — a company built on vertical integration — is paying a competitor $1B/year for AI capability it cannot build fast enough. If Apple can't keep up with the model race, your enterprise almost certainly can't either. Map your AI vendor dependency chains — including the models behind the models. The vendor you evaluate isn't always the vendor doing the inference.
Sources: TechCrunch, CNBC, 9to5Mac
9. DeepSeek V4 Imminent — Timed as Geopolitical Signal
DeepSeek is set to release V4, a multimodal model handling image, video, and text with a trillion-parameter sparse mixture-of-experts architecture, optimised for Huawei and Cambricon chips and planned as open-source. The timing is deliberate: it coincides with China's "Two Sessions" parliamentary meetings starting March 4. China is treating model releases as geopolitical statements, not product launches.
Do now: The AI model landscape is bifurcating along geopolitical lines. DeepSeek V4 running on Chinese silicon represents a complete alternative stack — models, chips, and infrastructure — that operates outside US export controls. If your enterprise strategy assumes a single global model marketplace, you're planning for a world that no longer exists. Assess whether any of your AI vendor dependencies have exposure to Chinese model or chip supply chains — and whether that exposure is a risk or a diversification opportunity.
Sources: South China Morning Post, TechNode
10. OpenAI's $110B Round and the Infrastructure Arms Race
OpenAI raised $110 billion — $50B from Amazon, $30B each from NVIDIA and SoftBank — achieving a $730B pre-money valuation. Global hyperscaler AI capex is projected at $527 billion for 2026, up from $465 billion at Q3 2025. EU AI server spending sits at $47 billion — roughly 9% of the global total.
Do now: The infrastructure concentration risk is escalating. Three companies (Amazon, Microsoft, Google) account for the majority of global AI compute. European AI server spending at $47B is real but dwarfed by US hyperscaler investment. The EURO-3C project is necessary but modest against these numbers. The question for European enterprises isn't whether to use hyperscaler infrastructure — it's how to maintain governance leverage when your vendor is spending more on AI infrastructure than your country's GDP growth. Factor this into any sovereign AI business case you're building.
Sources: TechCrunch, Bloomberg, Goldman Sachs
11. AI Inference Is the Real Attack Surface
A recent enterprise security panel found nearly half of participants lack confidence their AI systems meet 2026 standards. "Harvest now, decrypt later" threats have overtaken model drift as the top digital trust risk among infrastructure leaders. Post-quantum AI security is moving from planning to procurement. OWASP's 2025 LLM Top 10 ranks prompt injection as the #1 threat to production AI systems.
Do now: Your inference pipeline — not your training data — is becoming the primary target. If you're not already evaluating post-quantum encryption for model serving infrastructure, you're behind the curve. The convergence of prompt injection attacks and AI-powered offensive cyber (see Brief #3) means your model serving endpoints face threats from both sides. Add inference security to your next CISO review cycle.
Builder Spotlight
Aikido Security — Shipping Autonomous Security While Everyone Debates Frameworks
Profiling teams building for the European AI reality.
The company: Aikido Security, Ghent, Belgium
What they do: Autonomous security for AI-generated software
Why now: While everyone debates AI governance frameworks, Aikido is shipping autonomous security agents that make software self-securing.
Belgium's Aikido Security raised $60M in its Series B led by Tom Stafford at DST Global, achieving a $1B valuation in January 2026 — making it the fastest European cybersecurity company to reach unicorn status. Founded in 2022 by CEO Willem Delbare (previously co-founded Teamleader CRM and Officient), its security software helps developers detect and address risk automatically, with customers including Revolut, SoundCloud, and Niantic.
The company achieved 5x revenue growth last year — against a plan of 3x — and employs 164 people. Last week, Aikido launched its continuous AI penetration testing solution, which autonomously validates and remediates vulnerabilities on every code release, 24/7. Their survey of 500 security leaders found that 76% deploy production changes weekly or faster, yet only 21% validate security on every release. That gap is exactly where shadow AI risk lives.
What makes Aikido distinctive isn't just the technology — it's the strategic positioning. Delbare has explicitly rejected the "Check Point Mafia" playbook of building one security feature, raising cash, and getting acquired by Palo Alto or Cisco. This is a European-built, developer-first approach to a problem that American incumbents have largely ignored: what happens when AI generates the code and nobody checks whether it's secure?
For enterprise teams dealing with the shadow AI problem — the 1,200 unofficial apps, the $670K breach premium, the 86% with no visibility into AI data flows — Aikido represents the other half of the equation: not just detecting unauthorised AI tools, but securing the code they produce.
Learn more: aikido.dev
Deep Dive
The Governance Clause as Geopolitical Firewall
The Paradox
Here is a sentence that would have sounded absurd eighteen months ago: the AI company that drew a red line in an actual war just had its best day ever.
That's the paradox at the centre of the Anthropic-Pentagon rupture, and it's the most important case study in AI governance your enterprise has seen since the EU AI Act passed. Not because of what it says about Anthropic, but because of what it reveals about the governance clauses buried in every AI vendor contract your organisation has signed.
What Actually Happened
US Central Command used Anthropic's Claude model during strikes on Iran over the weekend of February 28, employing it for intelligence assessments, target identification, and simulating battle scenarios. This use occurred despite the fact that Anthropic had explicitly pushed for guardrails preventing military applications involving mass surveillance and fully autonomous weapons systems. CEO Dario Amodei stated publicly that the company "cannot in good conscience" accept Department of Defense demands for unrestricted access.
The Pentagon's position was that it needed "all lawful purposes" access — a phrase it described as "a simple, common-sense request." Anthropic refused.
The White House response was swift and punitive. President Trump ordered a six-month phase-down of all federal Anthropic contracts. Defense Secretary Pete Hegseth designated Anthropic a "Supply-Chain Risk to National Security." Multiple agencies — Treasury, State Department, HHS — confirmed they would stop using Anthropic products. The message was clear: if you sell to the US government, you don't get to set conditions on how your technology is used.
Anthropic's response was quieter but equally clear: it held the line. And on Monday, the company announced its largest single day for new sign-ups in its history. Claude hit #1 in the US App Store. Every single day of the following week set a new all-time record.
Why This Matters for European Enterprises
For most European enterprise leaders, the instinct might be to file this under "US politics" and move on. That would be a mistake. Here's why.
Every enterprise AI deployment rests on a stack of contractual assumptions. You assume your vendor will honour data processing terms. You assume model behaviour will stay within the boundaries defined in your agreement. You assume your vendor's acceptable use policies are more than marketing copy.
What the Anthropic-Pentagon rupture proves is that those assumptions are now being tested under the most extreme conditions imaginable — wartime use by the world's most powerful military. And the outcome reveals a fundamental asymmetry: your vendor's governance commitments are only as strong as their willingness to lose their biggest customer to defend them.
Anthropic, in this case, demonstrated that willingness. But there is no structural mechanism that required them to. No regulator forced that decision. No contract clause compelled it. It was an organisational choice, made under extraordinary pressure, with billions of dollars at stake.
That should concern you. Not because Anthropic made the wrong choice — they arguably made exactly the right one — but because the entire system depends on individual corporate decisions rather than enforceable institutional safeguards.
The Google Test
The next domino is already in motion. Alphabet is in discussions with the Pentagon about deploying Gemini in classified environments. A public open letter — signed by 236 Google employees and 65 from OpenAI — urged leadership to adopt the same red lines Anthropic drew. Jeff Dean, DeepMind's chief scientist, publicly supported the concern, writing on X that "mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression."
This is the real test. Google has a history with military AI contracts — Project Maven in 2018 ended with Google pulling out after employee protests. But the company has since rebuilt its defence and intelligence business through Google Public Sector, and the financial incentives are vastly larger now. Consider the irony: Google is simultaneously negotiating military AI access with the Pentagon while selling Gemini to Apple for $1B/year to power Siri — making it the AI engine behind both consumer convenience and potential military applications.
If Google accepts "all lawful purposes" terms that Anthropic refused, European enterprises will face a concrete vendor selection question: do you choose the AI provider that held its governance line under wartime pressure, or the one that didn't?
That's not a hypothetical. That's a procurement decision your team may need to make this quarter.
The Apple-NATO Paradox
There's a parallel story unfolding that deserves attention. The same week that AI governance commitments were being tested by war, NATO certified iPhone and iPad for classified use at the "restricted" level — validated by Germany's BSI. And Apple finalised a $1B/year deal to embed Google's Gemini inside Siri.
Connect the dots. NATO's classified communications could run on iPhones. Those iPhones run Siri. Siri is powered by Gemini. Gemini's parent company is negotiating military AI contracts with the Pentagon.
The governance chain from classified NATO communication to consumer AI to military AI vendor is now a single supply chain. European enterprise leaders need to think about these dependencies as a stack, not as isolated vendor relationships.
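One way to make "a stack, not isolated vendor relationships" operational is to model each component's direct dependencies as a directed graph and compute what is transitively reachable. A minimal sketch, using this week's chain as illustrative data (the graph entries are simplifications, not a complete supply-chain map):

```python
# Minimal vendor-dependency mapper: each entry lists a component's
# direct dependencies; transitive_deps walks the full chain.
# The graph below is illustrative, drawn from this week's stories.

DEPENDS_ON = {
    "NATO classified comms": ["iPhone (iOS 26)"],
    "iPhone (iOS 26)": ["Siri"],
    "Siri": ["Gemini 2.5 Pro"],
    "Gemini 2.5 Pro": ["Google"],
    "Google": [],
}

def transitive_deps(component: str, graph: dict) -> set:
    """Every vendor/component reachable from `component`."""
    seen, stack = set(), list(graph.get(component, []))
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

# The full chain sitting behind a NATO-certified device:
print(sorted(transitive_deps("NATO classified comms", DEPENDS_ON)))
```

The value of even a toy model like this is that it forces the question the Brief raises: the vendor you evaluate at the top of the graph is rarely the only vendor you depend on.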
What Your Vendor Contract Actually Says
Pull your AI vendor agreements off the shelf. Look for three things.
First, look for explicit use-case exclusions. Most enterprise AI agreements include some version of acceptable use policies. But there's a meaningful difference between "our model should not be used for harm" — which is aspirational — and "we will terminate access if our model is deployed for autonomous weapons systems" — which is contractual. The Anthropic case shows this distinction matters.
Second, look for escalation and enforcement mechanisms. What happens when your vendor discovers their model is being used in ways that violate stated policies? Anthropic discovered unauthorised military use and refused to expand access. What does your vendor's contract say they'll do? More importantly, what leverage do they have once the model is deployed inside a customer's infrastructure?
Third, look for geopolitical contingency clauses. If your AI vendor is banned by a government — as Anthropic now is by the US federal government — what happens to your enterprise deployment? Does your contract address scenarios where your vendor loses access to key markets, talent pools, or compute resources as a result of a political decision? With OpenAI's $110B round giving it a $730B valuation and making it heavily dependent on Amazon, NVIDIA, and SoftBank, the concentration risk extends beyond individual vendors to their investors and infrastructure providers.
Most enterprise AI contracts don't address any of these scenarios adequately. The Anthropic-Pentagon rupture just made that gap visible.
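The three checks above can be captured as a simple structured checklist, so the audit produces a gap list rather than a pile of notes. A sketch — the check names mirror this section, but the findings shown are hypothetical, not real contract language:

```python
# Sketch of the contract audit as a structured checklist. The findings
# in `example` are hypothetical, for illustration only.

CHECKS = [
    "explicit use-case exclusions",            # contractual, not aspirational
    "escalation and enforcement mechanisms",   # what the vendor does on violation
    "geopolitical contingency clauses",        # what happens if the vendor is banned
]

def audit_vendor(findings: dict) -> list:
    """Return the checks a vendor agreement fails to address."""
    return [check for check in CHECKS if not findings.get(check)]

# Hypothetical findings for one vendor agreement:
example = {
    "explicit use-case exclusions": True,          # termination clause present
    "escalation and enforcement mechanisms": False,
    "geopolitical contingency clauses": False,
}
print(audit_vendor(example))
```

Run the same checklist across every AI vendor agreement and the output is exactly what the next board conversation needs: which contracts have a firewall, and which have marketing copy.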
The European Advantage — If You Build It
This is where the European competitive advantage framing stops being rhetorical and starts being operational.
The EU AI Act — for all the complaints about its complexity — establishes exactly the kind of enforceable institutional framework that the Anthropic case proves is missing in the US. When a European enterprise deploys an AI system, it does so within a regulatory environment that defines red lines, assigns accountability, and creates consequences for violations. Article 50 transparency obligations becoming enforceable this August aren't bureaucratic overhead. They're the governance infrastructure that makes vendor commitments enforceable rather than aspirational.
The €75M EURO-3C project, announced this week at MWC, takes this further. More than 70 organisations, including Telefónica, are building federated, sovereign cloud and AI infrastructure specifically designed to reduce European dependence on US and Chinese hyperscalers. The explicit emphasis on agentic AI within sovereign infrastructure is not a coincidence — it's a direct response to scenarios exactly like the Anthropic-Pentagon rupture. At $47B in European AI server spending, the scale is real — but it's 9% of the $527B global total. Sovereignty requires not just regulation but compute.
And the academic evidence is catching up. A paper published just last week demonstrates empirically that sovereign AI-based public services are technically feasible and economically sustainable with modest computational resources. Another reframes sovereignty as a layered engineering property — not a checkbox — arguing that real sovereignty emerges from the ability to observe, control, and optimise physical infrastructure under real operating conditions.
The argument for European AI sovereignty used to be defensive: we need our own infrastructure because we can't trust others. The Anthropic-Pentagon rupture transforms that argument into something stronger: we need our own infrastructure because governance commitments without institutional enforcement are just marketing.
What To Do This Week
For CTOs and SVPs of Technology: Audit your AI vendor contracts for explicit use-case exclusions, enforcement mechanisms, and geopolitical contingency provisions. Map the full dependency chain — including your vendor's investors, infrastructure providers, and government relationships. If those sections are vague or missing, schedule a conversation with your vendor's governance team — and document what they tell you.
For CISOs: Assess your organisation's exposure to scenarios where a major AI vendor loses government market access. Map your critical AI dependencies and identify which ones carry single-vendor geopolitical risk. Quantify your shadow AI footprint — the 1,200-app average and $670K breach premium should frame the conversation with your board.
For Heads of Procurement: Begin requiring explicit governance red-line documentation as part of AI vendor evaluation criteria. The Anthropic-Pentagon case gives you a concrete benchmark to reference. Add supply-chain governance mapping to your vendor evaluation — if your AI vendor's AI is powered by another vendor (like Apple using Gemini), you need to understand the full chain.
The governance clause in your AI vendor contract is no longer a compliance formality. It's your geopolitical firewall. Make sure it's built to hold.
That’s it for this week.
The Anthropic-Pentagon rupture did something no white paper or regulatory framework has managed: it made AI governance tangible. Not as a compliance exercise. Not as a risk register line item. As a real-time decision with billions of dollars, geopolitical pressure, and actual military operations attached.
The market's response — rewarding the company that held its line — tells you something important about where enterprise buyers are heading. Governance isn't a cost centre. It's a competitive signal.
Meanwhile, the dependency chains multiply. NATO certifies iPhones for classified use. Those iPhones will run Gemini-powered Siri. Gemini's parent company is negotiating military AI contracts with the Pentagon. Apple is paying $1B/year for model access it can't build fast enough. OpenAI raises $110B from three investors. DeepSeek builds a complete alternative stack on Chinese silicon. And $527B in global AI capex flows overwhelmingly to three US hyperscalers while Europe invests $47B.
The organisations that will navigate this aren't the ones with the most advanced models. They're the ones whose governance infrastructure is built to hold when tested — by regulators, by wars, by supply-chain dependencies they didn't see coming.
Build yours before someone else tests it for you.
Until next Thursday, João
OnAbout.AI delivers strategic AI analysis to enterprise technology leaders. European governance lens. Vendor-agnostic. Actionable.