Ten days after the first physical attack on hyperscaler infrastructure in history, the cloud industry is still processing what it means. AWS ME-CENTRAL-1 lost two of three availability zones simultaneously. 38 services went down in the UAE, 46 in Bahrain, and cascading failures reached US-EAST-1. Meanwhile, Microsoft has $15.2 billion committed to UAE data centers and Google runs a region in Saudi Arabia. They weren't hit — but they're standing in the same blast radius.

This edition breaks down what the March 1 incident reveals about how AWS, Azure, and Google Cloud actually compare on resilience — not in marketing decks, but under fire.

TL;DR
  • AWS was the only hyperscaler directly hit in the March 1 drone strikes, but Azure and Google Cloud have comparable physical exposure across the Gulf

  • The "multi-AZ" resilience model all three providers sell was designed for power failures and network partitions — not coordinated kinetic attacks on multiple facilities within the same metro area

  • Microsoft's own legal director admitted no sovereign cloud architecture can shield EU data from US government access under the CLOUD Act

  • European enterprises running critical workloads need to reassess not just where their data sits, but who owns the infrastructure and what geopolitical risks surround it

  • The organizations that act now — mapping dependencies, stress-testing failover, and diversifying providers — will be the ones still operating when the next incident hits

The Brief

1. Only AWS Burned — But the Others Were Standing in the Same Room

Ten days after Iranian drones struck three AWS data centers across the UAE and Bahrain, there is a tempting narrative forming: AWS failed, Azure and Google did not. The reality is less comforting. Microsoft operates data center regions in Qatar, the UAE, and Israel, with $15.2 billion committed to UAE infrastructure through 2029. Google Cloud runs a region in Dammam, Saudi Arabia. Oracle has facilities in the UAE. None of these were targeted on March 1 — but none of them are meaningfully harder to hit.

The distinction between "was attacked" and "is resilient" matters enormously. Azure and Google Cloud status pages showed no outages related to the conflict, which tells us they were not struck. It tells us nothing about what would have happened if they were. Every hyperscaler in the Gulf shares the same fundamental vulnerability: concentrated physical infrastructure within range of state-level adversaries who have now demonstrated both the capability and the willingness to strike civilian technology assets.

2. The Multi-AZ Promise Has a Ceiling

AWS's resilience model is built on availability zones — physically separate data centers within a region, designed so that losing one zone does not take down your application. The March 1 attack knocked out two of ME-CENTRAL-1's three availability zones simultaneously. The model held for single-zone failures. It was never designed for coordinated strikes across a metropolitan area.

Azure and Google Cloud use architecturally similar approaches. Azure's availability zones are physically separate data centers within regions. Google Cloud's zones follow the same pattern. All three providers engineer for independent failure domains — but "independent" assumes failures are uncorrelated. A drone swarm does not respect failure domain boundaries. This is not an AWS-specific weakness. It is a shared assumption baked into how all three hyperscalers architect regional resilience. The question European enterprises should be asking is not "which provider is safest?" but rather "what class of failures are we actually protected against, and what falls outside that envelope?"

Do now: Review your disaster recovery runbooks. Identify any workload where failover depends on multiple AZs within the same region and no cross-region backup exists.
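One way to start that audit, sketched below for RDS with boto3. This is a minimal sketch assuming default AWS credentials; the region name is illustrative, and the same logic extends to Aurora, ElastiCache, and self-managed data stores:

```python
# Minimal sketch: flag RDS instances that are Multi-AZ within one region
# but have no replica outside it. Assumes default AWS credentials and
# boto3 installed; the region under review is illustrative.
import boto3

REGION = "me-central-1"  # example: the region you are auditing

rds = boto3.client("rds", region_name=REGION)

for db in rds.describe_db_instances()["DBInstances"]:
    replicas = db.get("ReadReplicaDBInstanceIdentifiers", [])
    # Cross-region read replicas show up as full ARNs in this list;
    # same-region replicas appear as bare instance identifiers.
    cross_region = [r for r in replicas
                    if r.startswith("arn:") and f":{REGION}:" not in r]
    if db.get("MultiAZ") and not cross_region:
        print(f"{db['DBInstanceIdentifier']}: Multi-AZ only, "
              "no cross-region replica, single-region blast radius")
```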

Sources: Cybersecurity News — AWS Middle East region hit by drone strikes, 109 services disrupted · Data Center Knowledge — AWS Middle East outage after data centers hit by drone strikes · Tom's Hardware — Iranian drone strikes hit three AWS data centers

3. The Submarine Cable Chokepoint No One Mapped

The drone strikes are the headline, but the deeper risk is beneath the surface. Seventeen submarine cables run through the Red Sea, carrying the majority of data traffic between Europe, Asia, and Africa. Both the Red Sea and the Strait of Hormuz are now effectively closed to commercial traffic simultaneously — a situation that has never occurred before. Cable repair ships can't safely reach either passage while hostilities continue, and repairs on cables severed near Jeddah last September have been halted entirely, worsening existing disruptions. If your Europe-to-Asia data path runs through these waters — and most do — you have a latency and availability problem that no cloud provider can fix.

Do now: Ask your network team to trace your Europe-Asia data paths. If they transit the Red Sea or Hormuz, activate contingency routing through the Cape of Good Hope and accept the latency cost now rather than discovering it during an outage.
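If a full path trace takes time to arrange, a rough first signal is connect time to endpoints on the far side of the chokepoints. A minimal sketch, using public AWS regional endpoints as illustrative probes; substitute the hosts your traffic actually reaches:

```python
# Minimal sketch: TCP connect time (roughly one round trip) from your
# vantage point to endpoints east of the chokepoints. Hosts are public
# AWS regional endpoints used as illustrative probes.
import socket
import time

ENDPOINTS = {
    "eu-west-1 (baseline)": ("ec2.eu-west-1.amazonaws.com", 443),
    "ap-south-1 (Mumbai)": ("ec2.ap-south-1.amazonaws.com", 443),
    "ap-southeast-1 (Singapore)": ("ec2.ap-southeast-1.amazonaws.com", 443),
}

for label, (host, port) in ENDPOINTS.items():
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=5):
            rtt_ms = (time.monotonic() - start) * 1000
        print(f"{label:28s} {rtt_ms:7.1f} ms")
    except OSError as exc:
        print(f"{label:28s} unreachable: {exc}")
```

As a rough yardstick, Europe-to-Mumbai round trips via the Red Sea path typically sit near 100-130 ms; Cape routing roughly doubles that.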

Sources: Rest of World — U.S.-Iran war threatens Gulf AI infrastructure · Capacity — Iran-US war puts subsea cable network on a knife-edge

4. Gulf States Are Building Oil Pipeline Logic for Data

Six competing projects backed by Saudi Arabia, Qatar, and the UAE are racing to build overland fiber-optic corridors to Europe through Syria, Iraq, and East Africa — essentially replicating the bypass strategy they built decades ago for oil exports. The scramble accelerated after the AWS strikes. The problem: geopolitical rivalry between the Gulf states themselves may fragment the effort, and routing fiber through Syria and Iraq introduces its own stability questions.

Do now: Track this development. If your organization has significant Middle East or Asia-Pacific data flows, overland corridors may change your routing calculus within 12-18 months.

Sources: Rest of World — Gulf states race to build overland data cables to Europe

5. Your Cloud Provider Is Now a Military Target

Iran's IRGC explicitly stated it targeted the Bahrain AWS facility because it hosts U.S. military workloads. Reports indicate the U.S. military used Anthropic's Claude — which runs on AWS — for intelligence assessments and target identification during the Iran strikes. The boundary between commercial and military cloud infrastructure has effectively vanished. As one defense analyst put it, civilian and commercial infrastructure are now primary targets in modern warfare precisely because they sit at the intersection of political and economic pressure.

Do now: Add "provider military contract exposure" to your cloud risk assessment framework. If your hyperscaler hosts military workloads in the same region as your production environment, that is a risk factor your board should know about.

Sources: Fortune — Iranian drone attacks signal a new kind of war · DefenseScoop — Commercial data centers emerge as targets · CBS News — Anthropic's Claude AI being used in Iran war

6. Insurance Premiums Just Got a New Line Item

Insurers are recalculating premiums to incorporate kinetic risk models for data center coverage. Uptime Institute analysts expect hardened roofs, blast walls, and spectrum jammers to become baseline facility specifications. New "security premium" requirements for physical hardening are projected to add 15-20% to the capital expenditure of new data center builds in the region. If you're negotiating cloud contracts with workloads in geopolitically exposed regions, expect pricing to reflect this reality within the next renewal cycle.

Do now: Flag this for your procurement team. Cloud pricing in Middle East regions will adjust. If you're mid-contract, review your SLA terms for force majeure and war exclusion clauses.

Sources: Insurance Journal — Drone strikes damage Amazon data centers · Enki AI — Data Center Risk 2026: War reshapes ME investment

7. DORA and NIS2 Just Got a Live Case Study

European regulators wrote DORA and NIS2 to force operational resilience — concentration risk assessments, third-party provider oversight, cross-region failover testing. Transition periods expired in 2026, and companies that qualify as essential or important entities now face full implementation requirements. The AWS March 1 incident is the exact scenario these regulations were designed to address. If your organization hasn't completed its DORA Article 28 third-party risk assessment of your cloud providers, the regulator now has a concrete precedent to point at.

Do now: If you're in financial services, pull your DORA ICT third-party risk register. Verify it includes geopolitical/kinetic risk as a scenario. If you're in any NIS2-covered sector, confirm your cloud provider resilience testing covers coordinated multi-facility failures.
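For teams formalizing that register, here is a sketch of what a machine-checkable scenario entry might look like. The field names are illustrative, not a regulatory schema:

```python
# Minimal sketch: a machine-checkable shape for ICT third-party risk
# scenarios. Field names are illustrative, not a DORA-mandated schema.
from dataclasses import dataclass

@dataclass
class ThirdPartyScenario:
    provider: str
    region: str
    scenario: str                  # e.g. "kinetic", "jurisdictional", "subsea-cable"
    workloads_affected: list[str]
    tested_failover: bool          # rehearsed, not merely documented
    last_drill: str | None = None  # ISO date of the last live drill

register = [
    ThirdPartyScenario(
        provider="AWS",
        region="me-central-1",
        scenario="coordinated multi-AZ kinetic attack",
        workloads_affected=["payments-gateway", "risk-engine"],
        tested_failover=False,
    ),
]

untested = [s for s in register if not s.tested_failover]
print(f"{len(untested)} scenario(s) documented but never rehearsed")
```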

Sources: Atos — The future of EU organizations with sovereign cloud · heyData — NIS2 vs. DORA: Differences, obligations and deadlines 2026

Here's how I use Attio to run my day.

Attio's AI handles my morning prep — surfacing insights from calls, updating records without manual entry, and answering pipeline questions in seconds. No searching, no switching tabs, no manual updates.

Builder Spotlight

STACKIT — The Sovereign Cloud Built by a Grocer

Profiling teams building for the European AI reality.

The company: STACKIT by Schwarz Digits, Germany
What they do: European-owned sovereign cloud infrastructure with all data centers in Germany and Austria
Why now: While hyperscalers race into geopolitically volatile regions, STACKIT is building the European alternative that enterprises actually need post-March 1.

Schwarz Digits is the tech arm of Schwarz Group — the parent company of Lidl and Kaufland. That a European retailer is building sovereign cloud infrastructure tells you everything about where the market is heading. STACKIT runs all data centers in Germany and Austria, fully GDPR-compliant, with no US jurisdictional exposure.

On March 5, STACKIT announced a strategic partnership with CrowdStrike to bring the Falcon cybersecurity platform to its sovereign cloud — AI-native threat detection with all telemetry and processing staying within European data centers. The partnership supports compliance with GDPR, the EU Cyber Resilience Act, and national regulatory standards. For enterprises evaluating sovereign alternatives after the AWS Middle East incident, this is exactly the kind of infrastructure-plus-security stack that was missing from the European market.

What makes STACKIT distinctive is the backing. Schwarz Group generates over €150 billion in annual revenue. This is not a startup burning venture capital — it is a European industrial conglomerate investing in digital sovereignty because its own retail operations demanded it. The cloud they built for themselves is now available to everyone.

For enterprise teams running the dependency audit we recommended in The Brief — STACKIT is one of the providers worth putting on the evaluation list, especially for regulated workloads where CLOUD Act exposure is a non-starter.

Learn more: stackit.com

Deep Dive

The Resilience Scorecard No One Publishes

The March 1 incident exposed something the cloud industry has always known but rarely discussed publicly: resilience architectures are optimized for the failures providers have already experienced, not the ones geopolitics is now making plausible.

Region Footprint: Bigger Is Not Necessarily Safer

AWS, Azure, and Google Cloud each operate between 36 and 60+ regions globally, with broadly comparable geographic distribution. On paper, the differences look marginal. In practice, what matters is not how many regions exist but where your data actually runs and how quickly you can move it.

AWS was the first hyperscaler to build Middle East regions and attracted the largest share of Gulf-based enterprise workloads. That first-mover advantage became a concentration risk. Microsoft's $15.2 billion UAE commitment and Google Cloud's Saudi presence suggest they are building toward similar concentration. The pattern is clear: every hyperscaler is racing into geopolitically volatile regions because that is where the capital is flowing. Resilience planning has not kept pace with expansion ambitions.

So what? Do not assume that a provider with more regions is inherently more resilient for your workloads. Map your actual deployment footprint against geopolitical risk, not marketing materials.
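A starting point for that mapping, sketched with boto3 for EC2. This assumes default AWS credentials; other services, and other providers, need their own scans:

```python
# Minimal sketch: enumerate where EC2 capacity actually runs, as a
# first pass at mapping deployment footprint against regional risk.
# Assumes default AWS credentials; covers EC2 instances only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

for region in regions:
    client = boto3.client("ec2", region_name=region)
    count = sum(
        len(reservation["Instances"])
        for page in client.get_paginator("describe_instances").paginate()
        for reservation in page["Reservations"]
    )
    if count:
        print(f"{region}: {count} instance(s)")
```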

Cross-Region Failover: The Gap Between Theory and Thursday at 3 AM

All three providers offer cross-region replication and failover capabilities. AWS has Route 53 health checks and Application Recovery Controller. Azure provides Traffic Manager and Site Recovery. Google Cloud offers Cloud DNS routing and cross-region load balancing. The tools exist. The question is whether your organization has actually configured, tested, and rehearsed them.

In the aftermath of March 1, organizations with active-active deployments across ME-CENTRAL-1 and a secondary region recovered within hours. Those relying on cold standby or manual failover procedures faced days of degradation. This pattern will repeat regardless of which provider is hit next. The differentiator is not the provider — it is whether you have invested in making cross-region failover a tested, automated reality rather than a slide in an architecture review deck.

So what? The best resilience architecture is the one you have actually tested under pressure. Schedule a failover drill this quarter. If your DR plan requires someone to wake up and run a manual procedure, it is not a plan — it is a hope.
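One concrete pre-drill check, sketched for AWS DNS failover with boto3: a PRIMARY failover record with no health check attached can never fail over on its own. The same question has equivalents in Traffic Manager and Cloud DNS:

```python
# Minimal sketch: find Route 53 failover records that will never fire.
# A PRIMARY failover record without an attached health check cannot
# fail over automatically. Assumes default AWS credentials.
import boto3

r53 = boto3.client("route53")
paginator = r53.get_paginator("list_resource_record_sets")

for zone in r53.list_hosted_zones()["HostedZones"]:
    for page in paginator.paginate(HostedZoneId=zone["Id"]):
        for rec in page["ResourceRecordSets"]:
            if rec.get("Failover") == "PRIMARY" and "HealthCheckId" not in rec:
                print(f"{zone['Name']} {rec['Name']} ({rec['Type']}): "
                      "PRIMARY failover record with no health check")
```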

The European Angle: Sovereignty Meets Geopolitics

For European enterprises, the March 1 incident adds a new dimension to the sovereignty conversation. AWS launched its European Sovereign Cloud in Brandenburg, Germany in January 2026 — a €7.8 billion investment in physically and logically separated infrastructure. Azure completed its EU Data Boundary rollout in early 2025. Both pitch these as answers to data residency concerns.

But sovereignty is not just about where data sits. Microsoft's own legal director told French lawmakers that Microsoft "cannot guarantee" EU data would be protected from US government access. No technical measure or contractual clause changes the CLOUD Act's reach. Now layer kinetic risk on top of jurisdictional risk: your data may be in Frankfurt, but your provider's global control plane, support operations, and engineering capacity just absorbed a major blow from a conflict six thousand kilometers away. The cascading failures from March 1 reached US-EAST-1 — the backbone region for much of AWS's global infrastructure.

Sixty-one percent of European CIOs say they want to increase use of local cloud providers. European alternatives like Hetzner, Scaleway, and STACKIT by Schwarz Digits offer genuine jurisdictional separation. They cannot match hyperscaler scale or service breadth, but for regulated workloads where sovereignty is non-negotiable, the trade-off is increasingly rational.

So what? Treat provider diversification as a risk control, not a cost optimization exercise. For workloads subject to DORA, NIS2, or the AI Act, evaluate whether a European-owned provider should handle at least your most sensitive data layers.

This Week in Tech

Anthropic Sues the Trump Administration

Anthropic filed two federal lawsuits against the Trump administration after the Pentagon designated the company a "supply chain risk" — effectively blacklisting Claude from all defense contractor work. The trigger: CEO Dario Amodei's refusal to allow Claude to be used for autonomous weapons or domestic surveillance. The lawsuits allege First Amendment violations and overreach of supply chain risk law. Hundreds of millions in revenue are at stake. Meanwhile, the #QuitGPT movement saw 2.5 million users abandon ChatGPT after OpenAI signed its own Pentagon deployment contract, pushing Claude to #1 on the U.S. App Store.

Why it matters: The AI industry is splitting along an ethical fault line. For European enterprises evaluating AI providers, the question of who controls model deployment policies — and under what political pressure — just became a procurement consideration, not a philosophical one.

Oracle Plans 20,000–30,000 Layoffs to Fund AI Infrastructure

Oracle is reportedly cutting up to 30,000 jobs to free $8–10 billion for AI data center expansion, driven by commitments including a $156 billion OpenAI deal requiring 3 million GPUs over five years. US banks have pulled back from financing, doubling Oracle's borrowing costs and stalling projects. The company is considering selling Cerner and plans $45–50 billion in debt and equity raises this year.

Why it matters: The AI infrastructure race is consuming companies from the inside. If your organization runs Oracle workloads, watch for service disruption signals as restructuring hits support and engineering teams.

Both Maritime Data Chokepoints Are Now Closed Simultaneously

For the first time in history, both the Strait of Hormuz and the Red Sea are effectively closed to commercial traffic at the same time. Iran declared Hormuz shut on March 3, while renewed Houthi attacks have made the Red Sea impassable. Seventeen submarine cables transit the Red Sea carrying the majority of Europe-Asia data traffic, and additional cable systems pass through Hormuz serving Gulf states. Repair ships that were already working on cables damaged last September can no longer safely operate in either passage. Gulf states have responded by financing six competing overland fiber routes to Europe through Syria, Iraq, and East Africa — replicating the bypass strategy they built for oil decades ago.

Why it matters: This is not a cloud provider issue — it is an internet topology issue. European enterprises with data flows to Asia, Africa, or the Middle East should expect increased latency and reduced redundancy until at least one chokepoint reopens. Network teams should be mapping alternative routes now, not after the next cable cut.

Next Steps

That’s it for this week.

The cloud industry just got its first live-fire resilience test. The organisations that treat March 1 as a wake-up call — mapping dependencies, stress-testing failover, and diversifying providers — will be the ones still operating smoothly when the next incident hits. The ones that file it under "unlikely to recur" will learn the same lesson twice.

If this landed in your inbox from a forward — subscribe here to get the full picture every week.

I'm running a "Cloud Resilience in a Contested World" workshop for enterprise architecture teams. Half-day, hands-on, built around exactly the scenarios we're living through. If your organisation is rethinking its DR posture after March 1, reply to this email — I'll share the details.

Until next Thursday, João

OnAbout.AI delivers strategic AI analysis to enterprise technology leaders. European governance lens. Vendor-agnostic. Actionable.
