On Wednesday evening, Microsoft, Alphabet, Amazon, and Meta reported earnings within minutes of each other. The numbers told the same story: AI demand is still outrunning capacity, and the infrastructure race is not slowing down. Azure and other cloud services grew 40%. Google Cloud grew 63%. AWS grew 28% to $37.6 billion. Meta raised its 2026 capex guidance to $125–145 billion. Across Big Tech, AI infrastructure spending is now running at more than $600 billion a year.

Earlier that day, Europe woke up to news that the AI Omnibus trilogue had failed after 12 hours of talks on Tuesday. Parliament and Council could not agree on whether AI embedded in medical devices, toys, connected cars, and industrial machinery should follow AI Act high-risk obligations or only existing sectoral product-safety law. Talks resume in May. The August 2, 2026 deadline is now uncomfortably close.

Two signals, same day. The capital is accelerating. The governance is stalling. And buried in an academic paper from UPenn and Boston University, there is a formal model explaining why neither side can stop — and what that means for every enterprise leader watching from Europe.

TL;DR
  • Hyperscaler Q1 2026 earnings landed Wednesday evening. Azure +40%, Google Cloud +63% ($20B, backlog over $460B), AWS +28% to $37.6B (fastest growth in 3+ years, AI run rate above $15B), Meta revenue +33% to $56.3B. AI infrastructure spending across Big Tech is now running at more than $600B a year. Microsoft flagged capacity constraints through at least June 2026.

  • The AI Omnibus trilogue collapsed on Tuesday, April 28 after 12 hours, with the failure reported on April 29. The sticking point: whether product-embedded high-risk AI (medical devices, toys, connected cars) follows AI Act rules or just sectoral law. Parliament's negotiator warned the carve-out would be "deregulatory rather than simplifying." Next round in May. If no deal by ~June, the original August 2, 2026 high-risk deadline holds.

  • A new paper from UPenn and Boston University formalises the AI displacement trap as a Prisoner's Dilemma. Each firm benefits from automating, but displaced workers are everyone's customers. Six policy interventions — including UBI and upskilling — fail to close the gap. Only a Pigouvian automation tax works. The EU AI Act has nothing to say about this. That is the governance blind spot this edition's Deep Dive unpacks.

  • Microsoft released its Agent Governance Toolkit earlier this month — seven open-source, MIT-licensed packages spanning Python, TypeScript, Rust, Go, and .NET for governing autonomous AI agents. Covers all 10 OWASP agentic AI risks. On April 28, Cequence shipped Agent Personas — infrastructure-level agent governance at the API gateway layer.

The Brief

More Than $600 Billion and Counting: What the Earnings Actually Say

The numbers, stripped of the investor-relations polish:

Microsoft (fiscal Q3 FY26): Revenue $82.9B (+18% YoY). Azure and other cloud services grew 40%. Intelligent Cloud revenue $34.7B (+30%). Microsoft Cloud revenue reached $54.5B (+29%). Commercial remaining performance obligation increased 99% to $627B. Microsoft said its AI business surpassed a $37B annual revenue run rate, up 123% year-over-year. Management guided Azure growth at ~37% for next quarter and flagged that cloud supply constraints will persist through at least June 2026.

Alphabet (Q1 2026): Revenue $109.9B (+22% YoY) — the company's highest quarterly growth rate since 2022. Google Cloud revenue hit $20.02B, up 63% year-over-year, beating estimates of $18.05B. Cloud backlog reached $460B. CEO Sundar Pichai reported Gemini Enterprise paid monthly active users grew 40% quarter-over-quarter.

Amazon (Q1 2026): Revenue $181.5B (beat estimate of $177.3B). AWS revenue $37.6B, up 28% — fastest growth in more than three years. AWS AI revenue run rate crossed $15B, the first time Amazon disclosed a hard number for AI-specific contribution. EPS $2.78 vs. expected $1.64. 2026 capex projected at $200B. Q2 guidance of $194–199B revenue, well above Street estimate of $188.9B.

Meta (Q1 2026): Revenue $56.3B (+33% YoY). EPS $10.44. Full-year capex guidance raised to $125–145B, up from $115–135B. Ad impressions grew 19%, average price per ad +12%. Stock fell in extended trading on the capex raise.

Do now: If your organisation runs AI workloads on any of these platforms, the capacity constraint story is the one that matters for planning. Azure supply is tight through June. Google Cloud's 63% growth means pricing pressure will follow. AWS just told you AI is a $15B run-rate business — that is the baseline for how aggressively they will compete for your workload. Refresh your cloud cost model this quarter, not next.

The Omnibus Collapses: August 2 Is Three Months Away

The AI Omnibus trilogue ran for 12 hours on Tuesday, April 28, and ended without agreement. The core dispute: the European Parliament and industry groups pushed for AI systems embedded in regulated consumer products — medical devices, toys, connected cars, machinery — to follow only existing sectoral safety regulations, effectively exempting them from AI Act high-risk obligations. The Council opposed this broad carve-out.

Parliament's lead negotiator, Michael McNamara, warned that shifting AI governance to sectoral laws could be "deregulatory rather than simplifying." He is right. The product safety directives these sectors already follow were written before AI was a component. Delegating AI governance to frameworks that have no concept of training data provenance, algorithmic bias, or automated decision-making is not simplification — it is abdication.

One area of agreement: both sides aligned on banning AI systems generating non-consensual intimate images, including CSAM. That consensus was not enough to unblock the exemption question.

The next round of talks is scheduled for May. The arithmetic is unforgiving: the Omnibus must be formally adopted and enter into force before August 2, 2026 — the date when original high-risk obligations activate. If the Omnibus fails to pass in time, the original 2024 timeline holds without amendment. Every organisation that planned around "the delay" is now planning around a maybe.

Do now: If your compliance programme assumed the December 2027 long-stop date from the Omnibus, build your Scenario A contingency this week: what happens if the original August 2, 2026 deadline holds? The two-scenario planning framework from our April 16 edition still applies — but the probability weighting just shifted.

Agent Governance Gets Its Own Stack

Microsoft released its Agent Governance Toolkit earlier this month: an open-source, MIT-licensed toolkit for runtime security governance of autonomous AI agents. Help Net Security describes it as a seven-package system spanning Python, TypeScript, Rust, Go, and .NET, covering all 10 OWASP agentic AI risks. The toolkit handles agent identity, permission scoping, audit trails, and tool-call-level access control.

Then on April 28, Cequence Security shipped Agent Personas in its AI Gateway — granular, infrastructure-level control over what AI agents can do, down to individual tool calls. Two vendors shipping agent governance tooling in the same month is a signal. The agentic AI wave that arrived in 2025 is now generating its own governance stack.

Why it matters: Article 14 of the AI Act requires human oversight of high-risk AI systems. When your "AI system" is an autonomous agent making tool calls, executing code, and interacting with external services, the human oversight model from 2024 does not hold. Microsoft's toolkit is the first vendor-neutral, open-source starting point for an agent-specific governance layer. If you are deploying agents in production, evaluate it before you build your own.
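The pattern both vendors are shipping can be sketched in a few lines. The following is a hypothetical illustration of tool-call-level access control with an audit trail, not the actual API of the Microsoft toolkit or Cequence's gateway; the class and field names are invented for the example.

```python
# Hypothetical sketch of tool-call-level agent governance: identity,
# permission scoping, and an audit trail for every decision. Illustrative
# only; names and structure are assumptions, not any vendor's API.
import time


class AgentPolicy:
    def __init__(self, agent_id: str, allowed_tools: set[str]):
        self.agent_id = agent_id
        self.allowed_tools = allowed_tools
        self.audit_log: list[dict] = []

    def authorize(self, tool: str, args: dict) -> bool:
        allowed = tool in self.allowed_tools
        # Every decision, allow or deny, is written to the audit trail --
        # this is what Article 14-style oversight reviews later.
        self.audit_log.append({
            "ts": time.time(), "agent": self.agent_id,
            "tool": tool, "args": args, "allowed": allowed,
        })
        return allowed


policy = AgentPolicy("invoice-agent", {"read_invoice", "draft_email"})
assert policy.authorize("read_invoice", {"id": 42}) is True
assert policy.authorize("delete_records", {"table": "invoices"}) is False
assert len(policy.audit_log) == 2
```

The point of the pattern is that the deny decisions are logged too: an oversight review needs to see what the agent attempted, not only what it did.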

Deep Dive

The AI Layoff Trap — When Success Becomes the Systemic Risk

The hyperscaler earnings above tell one story: AI investment is accelerating. This Deep Dive tells the other story: what happens when it works.

There is a new paper from UPenn and Boston University that puts formal economics behind something most enterprise leaders sense but cannot articulate: AI layoffs are a collective action trap.

The paper — "The AI Layoff Trap" by Brett Hemenway Falk and Gerry Tsoukalas — builds a competitive task-based model and reaches a conclusion that should be uncomfortable reading in every boardroom reviewing this week's Q1 earnings.

The Logic

A firm automates. It cuts costs. It wins margin. The displaced workers were also customers — theirs and everyone else's. Each layoff shaves aggregate demand across the entire market. But competitive pricing means each firm only absorbs a fraction of the demand damage it causes. So every firm keeps cutting.

This is a Prisoner's Dilemma running at industrial speed. Every CEO sees the cliff. None can brake first without losing to the one who doesn't.
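The incentive structure can be shown with a toy two-firm payoff game. The numbers below are illustrative, not the paper's calibration: automating yields a private cost saving, but each automating firm shaves demand for both firms.

```python
# Toy two-firm automation game (illustrative numbers, not the paper's model).
# Each firm chooses to automate or hold. Automating cuts costs privately,
# but every automating firm shrinks aggregate demand, hurting both firms.

COST_SAVING = 5   # private margin gain from automating
DEMAND_HIT = 4    # demand lost per automating firm, felt by the whole market


def payoff(me_automates: bool, rival_automates: bool) -> int:
    base = 10
    demand_loss = DEMAND_HIT * (int(me_automates) + int(rival_automates))
    return base + (COST_SAVING if me_automates else 0) - demand_loss


# Automating is the dominant strategy for each firm...
assert payoff(True, False) > payoff(False, False)   # 11 > 10
assert payoff(True, True) > payoff(False, True)     # 7 > 6
# ...yet mutual automation leaves both worse off than mutual restraint.
assert payoff(True, True) < payoff(False, False)    # 7 < 10
```

Whatever the rival does, automating pays more; once both automate, both earn less than if neither had. That is the trap in four lines of arithmetic.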

The paper is not speculation. It is a formal model with equilibrium analysis. And the finding that matters most is what does not work.

Six Interventions That Fail

Falk and Tsoukalas test six policy interventions that are routinely proposed as solutions to AI displacement:

  1. Wage adjustments — the market does not self-correct because the externality is cross-firm

  2. Free entry — more firms entering the market accelerates the displacement, it does not absorb it

  3. Capital income taxes — redistributes revenue but does not change the automation incentive

  4. Worker equity participation — gives displaced workers a share of returns but does not restore their role as consumers

  5. Universal basic income — sustains demand but does not change the cost calculus driving automation decisions

  6. Upskilling programmes — helps individuals but does not address the aggregate demand destruction at market level

None of them close the gap. Each addresses a symptom — income, skills, redistribution — without pricing the externality at its source.

The One That Works

Only a Pigouvian automation tax — a tax that prices the negative externality of displacement where it originates — aligns individual firm incentives with collective welfare. It does not prohibit automation. It makes each firm internalise the demand destruction its automation causes, so the cost calculus includes the systemic cost, not just the private benefit.

This is the same mechanism that carbon taxes use for emissions. The firm still decides whether to automate. But the decision now includes the price of the damage.
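A minimal numerical sketch (toy numbers, not the paper's calibration) shows how pricing the externality flips the firm's calculus: a tax equal to the demand damage a firm's automation imposes on the rest of the market makes holding, not automating, the dominant strategy.

```python
# Toy two-firm game with a Pigouvian automation tax. Illustrative numbers
# only; the tax is set equal to the demand damage one firm's automation
# imposes on the other firm.

COST_SAVING = 5
DEMAND_HIT = 4
TAX = DEMAND_HIT  # price the externality at its source


def payoff(me_automates: bool, rival_automates: bool, tax: int = 0) -> int:
    base = 10
    demand_loss = DEMAND_HIT * (int(me_automates) + int(rival_automates))
    private = COST_SAVING if me_automates else 0
    levy = tax if me_automates else 0
    return base + private - demand_loss - levy


# Untaxed, automating dominates; taxed, the private calculus now includes
# the systemic cost, and holding dominates.
assert payoff(True, False) > payoff(False, False)            # 11 > 10: trap
assert payoff(True, False, TAX) < payoff(False, False, TAX)  # 7 < 10
assert payoff(True, True, TAX) < payoff(False, True, TAX)    # 3 < 6
```

The tax does not ban automation; it only moves the threshold, so automation still happens wherever the private gain exceeds the systemic damage.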

The EU Blind Spot

Here is where this paper intersects with every edition of this newsletter.

The EU AI Act governs how AI behaves. It regulates risk classification, transparency, human oversight, data governance. It has nothing to say about the economic feedback loops AI creates when it behaves perfectly. A fully compliant, properly documented, transparently governed AI system that displaces 10,000 workers creates zero regulatory events under the AI Act. The displacement is not a malfunction. It is the product working as designed.

The AI Act was built for a world where the risk of AI is that it fails — biased outputs, opaque decisions, unsafe systems. The risk this paper describes is the opposite: AI that succeeds at scale, across every firm, simultaneously.

This is not a hypothetical timeline. It is happening now. The more than $600 billion in capex reported this week is building the infrastructure for exactly this outcome. The firms spending it are individually rational. The aggregate effect is the trap the paper describes.

So What?

Three implications for the enterprise leaders reading this newsletter:

First, your board's AI strategy discussion needs a demand-side chapter. Most enterprise AI business cases model cost savings (fewer workers, faster processes) without modelling what happens to the market when every competitor makes the same move. The paper gives you the framework to ask the question. If your CFO cannot answer "what happens to our customer base if our entire sector automates at the same rate we are planning," that is a strategic gap, not a philosophical one.

Second, the EU's regulatory blind spot is worth watching — and worth lobbying on, if you have the access. The AI Act review clause (Article 112) mandates a Commission evaluation by August 2029. The economic feedback loop described in this paper is exactly the kind of systemic risk that evaluation should cover. If your industry association has a position on the AI Act review, this paper belongs in the submission.

Third, the Pigouvian tax finding has direct implications for fiscal policy across Europe. France and Germany are already debating AI-specific taxation. Belgium's coalition agreement includes language on digital taxation. The question is whether any of them will price the automation externality specifically, or default to broad-brush revenue measures that raise money without changing incentives.

If your governance framework stops at compliance, you are governing half the problem.

The One Call to Make

Before next Thursday, forward the link to this paper — arxiv.org/abs/2603.20617 — to your CFO with one question: "Our AI business case models the cost savings. Does it model what happens to demand if our competitors are making the same cuts?"

Why this: The Q1 earnings prove every hyperscaler is building for an automation wave. Your AI strategy probably models the supply side (costs, efficiency, headcount) without modelling the demand side (what happens to revenue when the displaced workers were also your market). The paper gives the CFO the framework to quantify the gap. That conversation needs to happen before the next investment committee, not after.

If the CFO says "that's a macro problem, not ours": That is the Prisoner's Dilemma in one sentence. Log it. When the demand-side effects show up in your sector — and they will — you will want the timestamp showing you raised it.

If you skip it: Eighteen months from now, every analyst covering your sector will be asking about demand destruction from AI displacement. The companies that modelled it early will have answers. The ones that didn't will have excuses.

Builder Spotlight

Cequence Security — Governing the Agent Layer Before It Governs You

Profiling teams building for the European AI reality.

The company: Cequence Security, Sunnyvale, CA (with a European enterprise customer base)
What they do: AI-native API and bot security platform, now extended to agent governance with Agent Personas — infrastructure-level control over what AI agents can do, down to individual tool calls.
Why now: The agentic AI wave needs a governance layer that operates at the infrastructure level, not the application level. Cequence is building it.

Cequence was founded in 2014 by Larry Link and Ameya Talwalkar, with deep roots in API security and bot management for financial services and e-commerce. The company has raised over $100M in total funding and counts multiple Fortune 500 companies as customers. Their Unified API Protection platform already governs billions of API transactions daily.

The Agent Personas release, announced April 28, extends that infrastructure into the agentic AI layer. Instead of governing AI agents at the application level — where each team builds its own guardrails — Cequence operates at the network and API gateway layer, enforcing identity, permissions, and audit trails for every tool call an agent makes. This is the architectural pattern that scales: agent governance as infrastructure, not as application code.

For European enterprises deploying autonomous agents under the AI Act, the Article 14 human oversight requirement demands exactly this kind of infrastructure-level auditability. You cannot satisfy "meaningful human oversight" with application-level logging when your agents are making cross-service tool calls that span multiple systems. The governance needs to sit where the calls happen — at the gateway.
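The architectural difference can be sketched as a single chokepoint that every tool call passes through. This is a hypothetical illustration of the gateway pattern, not Cequence's product; the agent IDs, tool names, and permission table are invented for the example.

```python
# Hypothetical gateway-layer interceptor: every agent tool call crosses one
# chokepoint, so enforcement and audit live in infrastructure rather than in
# each application's code. All names here are illustrative assumptions.
from typing import Callable

PERMISSIONS = {"support-agent": {"crm.lookup", "mail.send"}}
AUDIT: list[tuple[str, str, bool]] = []


def gateway(agent_id: str, tool: str, call: Callable[[], str]) -> str:
    allowed = tool in PERMISSIONS.get(agent_id, set())
    AUDIT.append((agent_id, tool, allowed))  # audit every decision, not just allows
    if not allowed:
        raise PermissionError(f"{agent_id} may not call {tool}")
    return call()


assert gateway("support-agent", "crm.lookup", lambda: "record-7") == "record-7"
try:
    gateway("support-agent", "db.drop", lambda: "boom")
except PermissionError:
    pass
assert [allowed for (_, _, allowed) in AUDIT] == [True, False]
```

Because the check sits in front of the call rather than inside the application, every team's agents inherit the same policy and the same audit trail without writing guardrail code themselves.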

Learn more: Cequence Security

This Week in Tech

OpenAI Launches Workspace Agents — The Custom GPT Successor

OpenAI shipped Workspace Agents in ChatGPT, explicitly positioning them as the successor to custom GPTs for enterprise organisations. Powered by Codex, they plug into Slack, Google Drive, Microsoft 365, Salesforce, Notion, and Atlassian. This is OpenAI's play for the agent-in-the-enterprise market that Microsoft (Copilot Studio), Google (Agentspace), and Anthropic (Claude for Work) are all chasing.

Why it matters: The enterprise agent market just got its clearest product-tier comparison. If you are evaluating agent platforms, you now have four vendors with production-grade offerings. The governance question from our Brief — how do you govern agent tool calls? — applies to all four. Microsoft's open-source toolkit is currently the only vendor-neutral governance layer available.

Enterprise AI Governance Gap Widens as Spending Surges

The ExcelMindCyber Institute published a report on April 28 highlighting that global enterprise AI spending is projected to reach $665B in 2026 — while structured AI governance programmes lag far behind. The report frames this as a "governance gap" that widens with every spending increase, because governance infrastructure scales linearly while AI deployment scales exponentially.

Why it matters: This is the data-point version of what the Deep Dive argues from theory. Enterprise spending is outpacing the governance infrastructure meant to contain it. If your governance budget is flat while your AI budget doubled, that ratio is your risk surface.

Avoca Hits $1B Valuation — AI Voice Agents for Field Services

Avoca, which builds AI voice agents for trades businesses (plumbers, HVAC, electricians), announced it raised more than $125M across Seed, Series A, and Series B at a $1B valuation. Series B led by Meritech and General Catalyst.

Why it matters: AI agents are not just an enterprise-tech phenomenon. A company serving plumbers just hit unicorn status. The displacement dynamics described in the Deep Dive will hit the trades and services sector — where workers are also local consumers — faster and harder than most enterprise leaders expect. The demand-side modelling isn't just for your sector. It is for your suppliers' and customers' sectors too.

Next Steps

What to read now

  1. Falk & Tsoukalas — "The AI Layoff Trap" (2026) — UPenn / Boston University. The paper behind this edition's Deep Dive. Formal model of AI displacement as a Prisoner's Dilemma, with equilibrium analysis of six policy interventions. Read sections 3 (the competitive model) and 5 (the Pigouvian tax) specifically. Dense but essential for anyone writing AI strategy documents.

  2. TNW — EU AI Act Omnibus deal fails after 12 hours of talks — The Next Web. The most detailed account of the April 28 trilogue collapse, reported on April 29. Read for the product-embedded exemption dispute — that is the sticking point that determines whether August 2 holds or the Omnibus passes.

  3. Microsoft Agent Governance Toolkit — GitHub. Seven packages for governing autonomous AI agents. Evaluate against your current agent deployments. If you are deploying agents without infrastructure-level governance, this is the fastest path to a baseline.

  4. Amazon Q1 2026 Earnings — AWS hits 15-quarter growth high — Yahoo Finance. Read for the first-ever AWS AI revenue run rate disclosure ($15B). That number becomes the benchmark for every cloud-AI procurement negotiation this year.

  5. Alphabet Q1 2026 — Google Cloud revenue up 63% — Quartz. Google Cloud's growth rate is the headline, but the $460B backlog is the story. That is demand locked in for years — and it tells you where pricing is headed.

That’s it for this week.

Four hyperscalers spent Wednesday evening explaining how they will spend more than $600 billion building AI infrastructure. The day before, the EU spent twelve hours failing to agree on how to govern it. And a paper from two economists explained why neither the spending nor the governance gap will self-correct — because the incentive structure is a trap.

Until next Thursday, João

OnAbout.AI delivers strategic AI analysis to enterprise technology leaders. European governance lens. Vendor-agnostic. Actionable.

If this landed in your inbox from a forward — subscribe here to get the full picture every week.