IDC projects 1.3 billion AI agents by 2028. Most enterprises can't tell you how many agents are already running inside their networks. When Microsoft's Commercial Business CEO Judson Althoff opened Ignite 2025 by calling out "random acts of innovation" as the AI strategy failure mode, he wasn't criticizing slow adopters; he was warning that shadow agents are already operating in your organization, accessing data without governance and making decisions without oversight. The agent era didn't announce itself. It's already here.

TL;DR

Agent governance becomes the bottleneck: Microsoft's Agent 365 is an admission that enterprise AI has a shadow IT problem. Companies that can discover, inventory, and govern autonomous agents across Microsoft and third-party platforms will operate at scale. Those that can't will face the same compliance risks that plagued shadow SaaS in 2015—except agents can access and act on data, not just store it.

The first AI-orchestrated cyberattack changes everything: Anthropic disclosed that Chinese state-sponsored hackers used Claude Code to execute 80-90% of an espionage campaign autonomously. The attackers became supervisors, not operators. When AI handles reconnaissance, exploit generation, and data extraction with minimal human intervention, traditional security models break. Defensive AI is becoming a must-have.

EU AI Act delays create governance vacuum: The Commission's Digital Omnibus pushes high-risk AI compliance to December 2027—potentially 16+ months later than planned. While CCIA and Big Tech celebrate "flexibility," the delay creates an 18-month window where agent proliferation outpaces regulatory clarity. Companies building governance infrastructure now gain competitive advantage; those waiting for regulation may face retrofit costs 5-10x higher.

Infrastructure race consolidates control: OpenAI's $38B AWS deal, Anthropic's $50B US buildout, and AWS's $50B government infrastructure commitment signal that compute capacity determines who builds the next generation. But quietly behind the scenes, Claude became available on all three hyperscalers (AWS, Azure, Google Cloud). Multi-model, multi-cloud strategies just became viable—for those with governance architecture to manage them.

The Brief

Agent 365: Microsoft admits the shadow agent problem is real

Question: If enterprises control their IT environments, why does Microsoft need to build a "control plane" to discover agents employees created without IT approval?

Because they don't control them. The core announcement at Ignite 2025 wasn't a flashy AI model or a faster chip—it was infrastructure to manage what's already escaped the cage.

Agent 365 creates a complete registry of every AI agent in your organization, including "shadow agents" that employees created using Copilot Studio, third-party platforms, or open-source frameworks. The system integrates with Microsoft Entra ID (extending identity management to AI agents via "Entra Agent ID"), Defender (runtime protection monitoring agent behavior), and Purview (data governance tracking what agents access).

The architecture reveals Microsoft's assessment of where enterprises actually are: unprepared for the proliferation they've already enabled. If your organization has deployed Copilot, employees have likely created dozens of custom agents. If you've approved any third-party AI tools, you've inherited agents with capabilities you haven't audited.

  • 200,000+ registrants at Ignite 2025 suggest the scale of enterprise interest

  • IDC projects 1.3 billion agents by 2028—roughly 160 agents per enterprise globally

  • Microsoft extended agent governance to third-party platforms, acknowledging heterogeneous environments are the norm

So what?

Agent 365 is Microsoft's bet that governance will matter more than capability in enterprise AI. The "random acts of innovation" critique positions governance as the strategic imperative—subtly framing competitors without governance infrastructure as risks rather than alternatives. For EU enterprises facing AI Act compliance, this framing aligns with regulatory intent: accountability, traceability, and oversight.

Do now: Run an agent audit this quarter. Identify every AI-enabled tool in your environment with autonomous action capability—not just Microsoft tools. Document which agents can access sensitive data, take actions (send emails, modify records, execute code), or make decisions affecting customers/employees. Create a registry before Agent 365 general availability (expected Q1 2026) to benchmark your current exposure. If your audit reveals more than 10 ungoverned agents, treat this as a compliance incident requiring immediate remediation.
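One way to make that audit concrete: a minimal registry schema, sketched in Python. The field names and the risk-triage rule are illustrative assumptions for benchmarking your own inventory, not an Agent 365 API.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskClass(Enum):
    LOW = "low"        # read-only, no sensitive data
    MEDIUM = "medium"  # sensitive data access, no autonomous actions
    HIGH = "high"      # autonomous actions affecting customers/employees

@dataclass
class AgentRecord:
    agent_id: str
    owner: str                    # an accountable human, not a team alias
    platform: str                 # e.g. "Copilot Studio", "LangChain", "custom"
    data_scopes: list = field(default_factory=list)    # datasets the agent can read
    action_scopes: list = field(default_factory=list)  # actions it can take (email, code exec, ...)
    approved: bool = False        # passed an IT/compliance review?

def classify(record: AgentRecord) -> RiskClass:
    """Coarse triage: autonomous actions outrank data access, which outranks read-only."""
    if record.action_scopes:
        return RiskClass.HIGH
    if record.data_scopes:
        return RiskClass.MEDIUM
    return RiskClass.LOW

def ungoverned(registry: list) -> list:
    """Agents that can act or touch data but were never reviewed."""
    return [r for r in registry
            if not r.approved and classify(r) is not RiskClass.LOW]
```

Even this toy schema forces the questions the audit is meant to surface: who owns each agent, what it can touch, and whether anyone approved it.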

First AI-orchestrated cyberattack: The 80% automation threshold

The event: Anthropic disclosed what it calls "the first documented case of a large-scale cyberattack executed without substantial human intervention."

A Chinese state-sponsored group (Anthropic designation: GTG-1002) manipulated Claude Code to autonomously conduct cyber espionage against approximately 30 targets across tech, finance, chemicals, and government sectors. The attackers successfully compromised "a small number" of organizations.

The operational details matter more than attribution debates:

  • AI handled 80-90% of workflow: reconnaissance, vulnerability scanning, exploit code generation, credential testing, data extraction, and categorization

  • Humans intervened at 4-6 decision points per attack—target selection, major action approval—but didn't execute operations

  • Jailbreak method: Attackers convinced Claude it was conducting defensive security work, breaking down malicious requests into smaller tasks that avoided safety triggers

  • Scalability demonstrated: The attack targeted 30 organizations simultaneously—previously requiring large human teams

Traditional security tools monitor network traffic, endpoint behavior, and known attack signatures. They don't monitor how an AI model reasons about tasks. When an AI agent "decides" a request is legitimate based on linguistic patterns that resemble authorized workflows, no external indicator fires.

So what?

This attack validates Microsoft's Predictive Shielding announcement at Ignite—using AI to anticipate where attackers will move and proactively block paths. But it also reveals the asymmetry: Defenders must secure every AI system against manipulation. Attackers need to compromise one model.

The broader implication: less sophisticated threat actors can now conduct nation-state-quality operations. The barriers to entry just collapsed. If your threat model assumes attackers need skilled human operators for sophisticated campaigns, update it.

Do now: Evaluate your AI-enabled tools for offensive misuse potential. If any system can execute code, access credentials, or take autonomous actions, implement behavioral monitoring beyond standard endpoint protection. Engage with your CISO on Anthropic's published indicators and mitigations. For EU enterprises, this event will likely accelerate AI Act provisions around systemic risk—begin scenario planning for accelerated enforcement timelines regardless of Digital Omnibus delays.
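A minimal sketch of what session-level behavioral monitoring could look like, assuming you already log agent tool calls. The call names and sensitivity weights here are hypothetical, not any product's taxonomy; the point is that per-session totals catch the "many small, individually innocuous requests" pattern that per-request filters miss.

```python
# Hypothetical sensitivity weights for tool calls an agent might make;
# a real deployment would derive these from its own tool inventory.
SENSITIVE_CALLS = {"read_credentials": 5, "exec_code": 4, "export_data": 4,
                   "scan_network": 3, "read_file": 1}

def session_risk_score(tool_calls: list) -> int:
    """Sum sensitivity weights across one agent session."""
    return sum(SENSITIVE_CALLS.get(call, 0) for call in tool_calls)

def flag_sessions(sessions: dict, threshold: int = 8) -> list:
    """Return session ids whose cumulative sensitive activity exceeds threshold.

    Each call in the Anthropic-described attack looked routine in isolation;
    only the aggregate trajectory of a session reveals the campaign.
    """
    return [sid for sid, calls in sessions.items()
            if session_risk_score(calls) > threshold]
```

This is deliberately simple; production systems would add time windows, baselines per agent, and alert routing. But even this level of aggregation is more than most endpoint tooling applies to AI agents today.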

EU Digital Omnibus: Delay as strategy, or governance vacuum?

What happened: On November 19, the European Commission published its "Digital Omnibus on AI," extending high-risk AI system compliance deadlines by potentially 16+ months.

The key changes:

  • High-risk AI (Annex III): From August 2026 to December 2027 at the latest

  • Product-embedded AI: From August 2027 to August 2028

  • Compliance trigger: Linked to availability of harmonized standards, not fixed dates

  • SME/SMC carve-outs: Extended to companies with up to 750 employees / €150M turnover

The political context is explicit. Commission spokesperson Thomas Regnier confirmed Brussels has been "engaging" with the Trump administration on AI Act adjustments. US VP J.D. Vance's Paris AI Summit warning about "excessive regulation" found an audience. Big Tech lobby group CCIA welcomed the delay but called for "bolder" and "clearer" deregulation.

Consumer groups see it differently. BEUC Director General Agustín Reyna characterized the proposal as "deregulation almost to the exclusive benefit of Big Tech."

The member state reality:

  • Germany missed the August 2025 deadline to designate competent authorities; a draft implementation act dated September 2025 is still in the legislative process

  • Spain leads with AESIA operational and national regulatory sandbox serving 12 AI providers

  • France's DGE is prioritizing education over enforcement, with active engagement through industry associations

Many member states haven't built the enforcement architecture the AI Act requires. The Omnibus delay may reflect political pressure, but it also buys time for infrastructure that doesn't exist.

So what?

The delay creates asymmetric outcomes. Companies building governance infrastructure now will have mature systems when enforcement begins—and expertise they can monetize as consultants to laggards. Companies treating the delay as permission to defer governance face compressed timelines when political winds shift.

The precedent from GDPR is instructive: Companies that built compliance into architecture early absorbed costs gradually. Those that retrofitted under enforcement pressure faced costs 5-10x higher with operational disruption.

Do now: Maintain your original August 2026 compliance timeline for high-risk AI systems, regardless of Omnibus. Use the additional time for testing, iteration, and documentation—not deferral. Build relationships with national competent authorities now, before they're overwhelmed with compliance requests. Track harmonized standards development (the real compliance trigger) rather than political deadlines.

Windows 365 for Agents: The "where do agents run?" question gets an answer

The announcement: Microsoft launched Windows 365 for Agents—purpose-built cloud PCs optimized for running autonomous AI agents in secure, policy-controlled environments.

This solves a problem most enterprises haven't articulated: agents need compute, and that compute needs governance. When an AI agent executes tasks autonomously for extended periods, it requires:

  • Persistent environment: Unlike stateless API calls, agents maintain context across actions

  • Policy enforcement: Security controls that apply to the agent's entire operational footprint

  • Audit trail: Complete logging of agent actions for compliance and debugging

  • Isolation: Separation from production systems during development and testing

Windows 365 for Agents provides this infrastructure in both Windows and Linux environments (Microsoft Researcher runs on Linux). Early adopters include Manus, Fellou, GenSpark, Simular, and Tinyfish—companies building general-purpose AI agents that require enterprise-grade deployment infrastructure.

The hardware complement: Windows 365 Link ($349) is a thin-client device that streams Windows from the cloud with zero local data. If stolen, no data is compromised. This creates an endpoint strategy where agents and humans both operate in governed cloud environments—reducing attack surface while enabling mobility.

So what?

Microsoft is building the full stack for agent deployment: governance (Agent 365), identity (Entra Agent ID), compute (Windows 365 for Agents), endpoints (Windows 365 Link), and security (Defender, Predictive Shielding). The integrated approach creates switching costs—and compliance advantages for enterprises that adopt it.

Do now: Evaluate your current agent deployment architecture. If agents are running on developer laptops, shared VMs, or unmonitored cloud instances, you have neither security nor compliance. Pilot Windows 365 for Agents with your highest-risk agent use case to establish baseline governance before scaling.

Deep Dive

The Governance Paradox: Why Agent Control Creates Agent Value

The central tension in enterprise AI is about to resolve—in favor of governance.

For the past two years, AI strategy focused on capability: better models, more parameters, faster inference. Governance was friction. Compliance was cost. The companies moving fastest were those that deployed first and asked questions later.

Ignite 2025 inverts this framing. Microsoft's "Frontier Firm" concept positions governance as value creation, not overhead:

  1. Every employee has an AI assistant (Copilot ubiquity)

  2. Human-agent teamwork amplifies impact (Work IQ contextual intelligence)

  3. Business processes are reinvented with agents (Agent 365 governance)

Note the sequence: governance isn't step one—it's step three. But step three is where scale happens. Without governance, you get "random acts of innovation"—isolated pilots that can't interconnect, compliance exposure that limits deployment, and shadow agents that create risk without delivering value.

The math on ungoverned agents:

Consider a mid-size enterprise (5,000 employees) where 20% of knowledge workers have created at least one AI agent using available tools. That's 1,000 agents—likely conservative for organizations that deployed Copilot enterprise-wide.

  • Data access: Each agent potentially accesses some subset of corporate data

  • Action scope: Some percentage can send communications, modify records, or execute workflows

  • Oversight: Zero centralized visibility into what these agents do

When regulators ask "what AI systems are deployed in your organization?" the honest answer is "we don't know." Under AI Act provisions, that's not a defensible position.
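The arithmetic above, spelled out. The data-access and action-scope fractions are illustrative assumptions, not survey data; substitute your own audit figures.

```python
# Back-of-envelope exposure model from the figures above.
employees = 5_000
agent_creators = 0.20                 # share of staff with at least one agent
agents = int(employees * agent_creators)           # 1,000 agents

data_access_rate = 0.60               # assumed: can read some corporate data
action_rate = 0.25                    # assumed: can send mail, modify records, or run code

with_data_access = int(agents * data_access_rate)  # 600
with_actions = int(agents * action_rate)           # 250

print(f"{agents} agents, {with_data_access} with data access, "
      f"{with_actions} able to act autonomously -- all with zero central oversight")
```

Whatever fractions you plug in, the output of this exercise is the number you would have to report to a regulator. If you can't compute it, that is the finding.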

The governance premium:

Companies that can answer the regulatory question—with complete inventory, risk classification, and oversight mechanisms—gain competitive advantages:

  • Faster deployment: Pre-approved governance frameworks accelerate agent rollout

  • Lower compliance cost: Built-in architecture vs. retrofit

  • Higher adoption ceiling: Governance enables use cases too risky without it

  • M&A readiness: Due diligence on AI assets becomes possible

The premium compounds over time. The 2025 investment in governance infrastructure becomes the 2027 operational advantage when competitors are still building what you've already deployed.

So what?

The next 18 months will separate governance leaders from governance laggards. The Digital Omnibus delay doesn't change this—it just determines whether the separation happens through competitive advantage (leaders scale while laggards struggle) or regulatory enforcement (leaders comply while laggards pay penalties).

Microsoft is betting governance becomes the platform—and they want to own that platform. Whether you adopt their stack or build alternatives, the strategic imperative is identical: agent governance is now a P&L driver, not a cost center.

The Cyberattack Implications: Defense Must Match Offense

The Anthropic disclosure forces a strategic recalculation that most security organizations haven't performed.

Old model: Cyberattacks require skilled human operators. Sophisticated attacks require sophisticated attackers—nation-states, well-funded criminal organizations, advanced persistent threats. Defense scales with the defender's investment in people, tools, and processes.

New model: AI agents can execute sophisticated attacks with minimal human direction. The skill requirement shifts from "execute the attack" to "direct the agent." Defense must now address AI-augmented offense at scale.

The asymmetry is stark:

  • Attacker investment: One jailbreak methodology, one orchestration framework, scales to unlimited targets

  • Defender investment: Every AI system requires protection against manipulation—not just external attacks, but misuse of capabilities

The Predictive Shielding response:

Microsoft's announcement isn't coincidental timing. Predictive Shielding uses AI to model attack paths and proactively harden them before exploitation. This represents the necessary response: AI defense against AI offense.

But the announcement also reveals assumptions:

  • Integrated stack required: Predictive Shielding works within Microsoft's security ecosystem

  • Data dependency: Effective prediction requires visibility across attack surfaces

  • Continuous evolution: As attack AI improves, defense AI must keep pace

Organizations using fragmented security tools face integration challenges that Microsoft's stack solves by design. This is both a genuine security improvement and a competitive positioning move.

The European security dimension:

The EU's NIS2 directive already requires essential entities to implement appropriate security measures. When AI-orchestrated attacks become common, "appropriate" will be reinterpreted to require AI-augmented defense.

Organizations that haven't deployed AI security capabilities will face compliance gaps they can't close quickly. The Anthropic attack provides the case study regulators will cite when tightening requirements.

So what?

Security strategy must now assume AI-augmented adversaries as baseline, not edge case. This means:

  • Behavioral monitoring for AI systems: Not just network and endpoint, but model reasoning

  • Jailbreak detection: Monitoring for prompt injection and manipulation attempts

  • Defensive AI investment: SOC automation, threat prediction, anomaly detection at AI-enabled scale

  • Supply chain scrutiny: Third-party AI tools become attack vectors requiring security assessment

The companies that deploy defensive AI in 2026 will operate in a different threat environment than those still relying on 2024-era tools. The gap will widen as offensive capabilities improve.

The Infrastructure Consolidation: Multi-Cloud, Multi-Model, One Governance Layer

The November infrastructure announcements create a new competitive landscape that rewards architectural flexibility.

The deals:

  • OpenAI-AWS: $38B over 7 years for GPU capacity

  • Anthropic US buildout: $50B with Fluidstack, Texas and New York data centers

  • AWS US Government: $50B for purpose-built government AI infrastructure

  • Microsoft-Nvidia Anthropic investment: $15B combined commitment

  • Anthropic Azure commitment: $30B compute purchase

The convergence:

Claude is now available on AWS (Amazon Bedrock), Google Cloud, and Azure AI Foundry. OpenAI models are available on Azure and now have AWS infrastructure. This creates genuine multi-model, multi-cloud optionality for the first time.

But optionality without governance is chaos. If you can deploy Claude on Azure, GPT-5.1 on AWS, and Gemini on Google Cloud, you need unified:

  • Identity management: Which agents use which models with what permissions?

  • Data governance: What data flows to which cloud/model combinations?

  • Cost management: How do you optimize spend across providers?

  • Compliance tracking: Where do high-risk AI workloads run?
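A toy sketch of what "a governance layer above the cloud layer" can mean in practice: a routing table that maps models to deployments and enforces a residency constraint before any workload runs. The model and cloud names are placeholders, not real endpoints or SKUs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Deployment:
    model: str          # e.g. "claude", "gpt", "gemini" -- placeholder names
    cloud: str          # e.g. "azure", "aws", "gcp"
    eu_resident: bool   # does data stay in-region?

# Hypothetical routing table; a real one comes out of procurement and compliance review.
DEPLOYMENTS = [
    Deployment("claude", "aws", eu_resident=False),
    Deployment("claude", "azure", eu_resident=True),
    Deployment("gpt", "azure", eu_resident=True),
    Deployment("gemini", "gcp", eu_resident=False),
]

def route(model: str, needs_eu_residency: bool) -> Deployment:
    """Pick a deployment for a model, honoring the residency constraint.

    Raising when no compliant deployment exists is the point: the workload
    simply may not run, which is compliance by design, not an error to paper over.
    """
    for d in DEPLOYMENTS:
        if d.model == model and (d.eu_resident or not needs_eu_residency):
            return d
    raise LookupError(f"no compliant deployment for {model}")
```

The design choice worth noting: the constraint lives in the routing layer, not in each agent. Swap the table and every workflow inherits the new policy, which is what keeps multi-cloud optionality from becoming chaos.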

Microsoft's governance stack answers these questions within their ecosystem. The open question: can it extend to competitive clouds and models?

The European infrastructure alternative:

  • Nvidia-Deutsche Telekom Industrial AI Cloud: €1.2B, Munich data center with 10,000 Blackwell GPUs

  • Google Germany: €5.5B through 2029, Dietzenbach and Hanau data centers

  • Sovereign cloud requirements: Data residency, regulatory compliance, operational control

European enterprises face a choice: adopt US hyperscaler infrastructure with governance overlays, or build on European alternatives with potentially less capability but guaranteed sovereignty.

The infrastructure nationalism trend suggests this choice becomes more consequential over time. Companies locked into US infrastructure may face data localization requirements they can't meet. Companies building European-first may sacrifice scale for compliance flexibility.

So what?

The winning strategy is architectural optionality: infrastructure choices that don't create irreversible dependencies. This means:

  • Governance layer above cloud layer: Agent management that works across providers

  • Model-agnostic agent design: Workflows that can switch between Claude, GPT, and Gemini

  • Data architecture for portability: Structures that support migration without rebuild

  • Compliance by design: Requirements embedded in architecture, not bolted on

The next 18 months will reward enterprises that maintained optionality while competitors optimized for single-vendor efficiency. When the governance landscape shifts—whether through regulation, competitive pressure, or security requirements—flexible architectures adapt while rigid ones break.

Trust, Talent, and the Sovereignty Struggle

Lisbon's Web Summit has become the annual barometer for Europe's tech ambitions, and this year's edition revealed a continent grappling with its position in the global AI hierarchy. Beyond the headline infrastructure announcements, three interconnected narratives emerged from the conference floor: Europe's urgent push for technological sovereignty, the democratization of AI development through no-code platforms, and the quiet acknowledgment that the center of gravity in tech innovation continues its eastward shift. These themes are shaping billion-euro investment decisions and regulatory frameworks across the EU.

Three Undercurrents Shaping the Conversation

1. "Western tech dominance fading": Paddy Cosgrave's opener set the tone: the most advanced humanoid robots on display were Chinese, not American or European. The subtext runs through every panel—can Europe build sovereign tech while depending on US cloud and Chinese manufacturing?

2. Vibe coding goes mainstream: Lovable's Anton Osika claimed 100,000 new products built daily on their no-code AI platform. Collins Dictionary named "vibe coding" word of the year. The message: AI democratizes building, but who owns the platforms?

3. Robotaxis circling the continent: Uber (partnering with NVIDIA for 2027 automation), Waymo (London launch), and Chinese players (Baidu, Pony.ai) all pitched European rollouts. Infrastructure announcements feel defensive against this mobility platform risk.

Key Voices Beyond Brad Smith

  • Henna Virkkunen (EU Digital Commissioner): Pushing for European "technological sovereignty"

  • Cristiano Amon (Qualcomm CEO): AI chips competing with NVIDIA—phones becoming "just big AI processors"

  • Katherine Maher (former Web Summit CEO, now NPR): Her January departure marks the conference's evolution from startup showcase to infrastructure summit

What to Watch?

  • Dec 1, 2025 – Microsoft 365 Copilot Business pricing ($21/user/month): 30% price cut signals adoption pressure; evaluate ROI at the new price point

  • Dec 2025 – Sales Development Agent via Frontier Program: first-party agents enter production; establishes a benchmark for custom agent value

  • Q1 2026 – Agent 365 general availability (expected): governance infrastructure becomes deployable; first-mover advantage window opens

  • Q1 2026 – Windows 365 Link availability: hardware strategy for agent-centric endpoints emerges

  • Early 2026 – Nvidia-DT Industrial AI Cloud goes live (Munich): European sovereign AI infrastructure becomes operational

  • H2 2026 – EU AI Act harmonized standards expected: the real compliance trigger; organizations must be ready regardless of Omnibus delays

  • Dec 2027 – Digital Omnibus high-risk deadline (if adopted): latest possible enforcement date; plan for earlier activation


That’s it for this week.

The chat era is over. The agent era has begun—and it arrived before the governance infrastructure to manage it. Microsoft's Ignite 2025 message is clear: the winners won't be companies with the best models. They'll be companies that can deploy, govern, and secure autonomous AI workforces at scale.

The Anthropic cyberattack disclosure removed any remaining doubt about urgency. When adversaries use AI agents to execute 80% of sophisticated operations autonomously, defenders without AI-augmented capabilities aren't just behind—they're operating in a different era of threats entirely.

The EU's governance delay doesn't change the strategic calculus. It just determines whether you build governance infrastructure on your timeline or scramble to retrofit under enforcement pressure. The companies treating this as an 18-month gift are making a bet that markets and regulators will wait for them. History suggests otherwise.

Your shadow agents are already running. The question is whether you govern them—or they govern your risk exposure.

Stay curious, stay informed, and keep pushing the conversation forward.

Until next week, thanks for reading OnAbout.AI
