Last week, Anthropic deliberately withheld its most powerful model because the security risk was too high — and assembled a consortium of its biggest competitors to address the vulnerabilities before anyone could weaponize the capability.

The model is Claude Mythos. It found thousands of zero-day vulnerabilities across every major operating system and browser — including one in OpenBSD that had been present for 27 years.

Anthropic's response was not to ship it. It was to call Apple, Microsoft, Google, AWS, CrowdStrike, JPMorgan, Nvidia, Palo Alto Networks, and the Linux Foundation. Put $100 million on the table. And launch Project Glasswing — a consortium to fix the vulnerabilities before someone weaponizes the capability.

This edition unpacks what Glasswing means for enterprise AI governance, why AI agents are now the most attacked surface in the enterprise, and — in the Deep Dive — why every knowledge worker just became a product owner of an AI team they never asked to manage.

TL;DR
  • Anthropic's Claude Mythos found thousands of zero-day vulnerabilities across every major OS and browser, including a 27-year-old OpenBSD flaw and a 16-year-old FFmpeg bug that automated testing had encountered 5 million times without detecting it. Anthropic withheld the model and launched Project Glasswing with 11 named partners, up to $100M in usage credits, and $4M in open-source security donations.

  • Cisco shipped Zero Trust for AI agents at RSA 2026. According to secondary reporting around the launch, the vast majority of large enterprises are experimenting with AI agents, but only a small fraction have moved them into production. The governance gap is the bottleneck.

  • NIST launched the AI Agent Standards Initiative with three strategic pillars: industry-led standards, community-led open-source protocols, and agent security research. The Initiative is also developing SP 800-53 control overlays relevant to agentic AI systems.

  • Microsoft's RSAC report confirms AI is now embedded across the full attack lifecycle. In AI-enabled phishing scenarios, click-through rates rose from roughly 12% to 54%. The Tycoon2FA platform has been linked to nearly 100,000 compromised organizations since 2023, and at its peak accounted for roughly 62% of phishing attempts Microsoft blocked monthly.

The Brief

Project Glasswing: When the Safety Lab Does What the EU AI Act Asks Everyone to Do

On April 7, Anthropic announced Claude Mythos Preview — a frontier model that scores 83.1% on CyberGym's vulnerability reproduction benchmark, significantly outperforming Claude Opus 4.6 at 66.6%. The model autonomously identified a 27-year-old remote code execution vulnerability in OpenBSD (CVE-2026-4747), a 16-year-old encoding bug in FFmpeg that automated testing encountered 5 million times without detection, and chained multiple Linux kernel vulnerabilities to escalate from user access to complete system control.

Anthropic's decision was not to release it publicly. Instead, they launched Project Glasswing — a consortium of 11 named partners (AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks) with Anthropic as the coordinating organization, plus roughly 40–45 additional organizations with access. Anthropic committed up to $100 million in usage credits, $2.5 million to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5 million to the Apache Software Foundation.

The governance structure matters. Anthropic committed to publishing findings within 90 days. Participating organizations must share their findings with the broader industry. The eventual goal is an independent third-party body coordinating public and private sector cybersecurity efforts.

Glasswing resembles the kind of pre-deployment risk assessment, proportionate mitigation, and transparency logic that the EU AI Act is pushing toward — particularly in Article 9 (risk management for high-risk systems) and Article 55 (cybersecurity obligations for GPAI providers with systemic risk). The fit is not one-to-one, but the principle is the same: assess the risk before you deploy, mitigate proportionately, and be transparent about what the system can do. This is the first time a major AI company has voluntarily applied that logic at frontier scale — not because regulators forced it, but because the risk assessment demanded it.

Do now: Use Glasswing as a benchmark in your next AI governance review. Ask your AI providers: if you discovered a capability this dangerous, what would your process be? If they cannot articulate one, that is your first compliance gap.

Cisco Ships Zero Trust for AI Agents — And Reveals the Governance Gap

At RSA 2026, Cisco unveiled a Zero Trust architecture built specifically for autonomous AI agents. The framework includes Agent Identity Management through Duo IAM — registering non-human identities, binding them to accountable human owners, and enforcing fine-grained, time-bound permissions. Alongside it: AI Defense Explorer Edition for self-serve agent red-teaming, and DefenseClaw, an open-source secure agent framework.
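
Cisco has not published its implementation at this level of detail, but the underlying pattern is straightforward to sketch. Here is a minimal, illustrative Python sketch (all names are hypothetical, not Duo IAM's actual API) of the three properties the framework enforces: a non-human identity bound to an accountable human owner, task-scoped permissions, and time-bound credentials, with an audit trail for every decision.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    """A non-human identity bound to an accountable human owner."""
    agent_id: str
    owner: str  # the human accountable for this agent's actions
    scopes: set[str] = field(default_factory=set)  # task-scoped permissions
    expires_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc) + timedelta(hours=1)
    )  # time-bound by default: re-issue rather than extend

def authorize(agent: AgentIdentity, action: str) -> bool:
    """Deny by default: allow only unexpired identities with an explicit scope."""
    allowed = (datetime.now(timezone.utc) < agent.expires_at
               and action in agent.scopes)  # least privilege, no wildcards
    # Auditability: every decision leaves a trace tied to a human owner
    print(f"audit: {agent.agent_id} -> {action}: "
          f"{'ALLOW' if allowed else 'DENY'} (owner={agent.owner})")
    return allowed

# A reporting agent that may read invoices but never pay them
agent = AgentIdentity("inv-reader-01", owner="j.doe@example.com",
                      scopes={"invoices:read"})
authorize(agent, "invoices:read")    # ALLOW
authorize(agent, "payments:create")  # DENY: outside task scope
```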

The most revealing signal came from Cisco's messaging around the launch: enterprise experimentation with AI agents is widespread, but production deployments remain rare. The gap is not capability. It is governance — identity, access control, and the risk that agents act beyond their intended scope.

This maps directly to NIST's AI Agent Standards Initiative, launched in February, which is organized around three strategic pillars: facilitating industry-led agent standards, fostering community-led open-source protocol development, and advancing research in agent security and identity. NIST collected extensive public input (via its RFI on AI Agent Security and a concept paper on AI Agent Identity and Authorization) and is developing SP 800-53 control overlays relevant to agentic AI systems. The recurring concerns across submissions: agent identity, least-privilege access, task-scoped permissions, and auditability.

Do now: If your organization is piloting AI agents, map your current identity and access controls against Cisco's framework and NIST's emerging themes (identity, least-privilege, task-scoped permissions, auditability). The low production adoption rate is not a technology problem. It is a governance readiness problem. Close the gap before your pilot becomes a liability.

Microsoft Confirms: AI Is Now the Cyberattack Surface

Microsoft's RSAC 2026 presentation confirmed what the supply chain attacks from two weeks ago signaled: threat actors have embedded AI across the entire attack lifecycle. Not as an experiment. As standard tradecraft.

The numbers are stark. In AI-enabled phishing scenarios, Microsoft reported click-through rates rising from roughly 12% to 54% — a function of dramatically improved lure quality. The Tycoon2FA platform — a subscription phishing-as-a-service operation — generated tens of millions of phishing emails per month, has been linked to nearly 100,000 compromised organizations since 2023, and at its peak accounted for roughly 62% of phishing attempts Microsoft was blocking monthly. The barrier to launching sophisticated attacks has collapsed. What once required nation-state resources is now accessible to motivated individuals with the right tooling.

Critically, Microsoft noted that a human remains in the loop for most AI-assisted attacks — these are not autonomous AI campaigns. But the tempo, iteration speed, and scale have increased dramatically. The agent ecosystem is becoming the most attacked surface in the enterprise.

Do now: Brief your security team on the Tycoon2FA numbers. If your anti-phishing training was calibrated for 2024-era lure quality, it needs updating. Specifically: review whether your email security stack is evaluating AI-generated content patterns, not just known signatures.

Chrome's Fourth Zero-Day of 2026 — and It's Only April

Google released an out-of-band Chrome 146 update to fix CVE-2026-5281, a use-after-free vulnerability in Dawn (WebGPU implementation) that was actively exploited in the wild. This is the fourth Chrome zero-day patched in 2026. CISA added it to the Known Exploited Vulnerabilities catalog on April 1, requiring federal agencies to patch by April 15.

The vulnerability allowed remote code execution via a crafted HTML page after compromising the renderer process. In the context of Project Glasswing — where Claude Mythos found vulnerabilities across "every major web browser" — this is a concrete reminder that browser attack surfaces are far from exhausted.

Do now: Verify that Chrome auto-update is enforced across your fleet and that endpoints are on version 146.0.7680.177/178 or later. If your organization uses WebGPU-enabled applications, prioritize testing.
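
For a quick spot check (your MDM or fleet-management reporting is the real source of truth), something like this sketch works on a Linux endpoint. The binary name varies by platform, and the minimum build is the one cited above.

```python
import subprocess

MIN_BUILD = (146, 0, 7680, 177)  # patched build cited in the advisory

def chrome_version(binary: str = "google-chrome") -> tuple[int, ...]:
    """Parse 'Google Chrome 146.0.7680.177' into a comparable tuple.
    The binary name differs on macOS and Windows; adjust for your fleet."""
    out = subprocess.run([binary, "--version"], capture_output=True,
                         text=True, check=True).stdout
    return tuple(int(part) for part in out.strip().split()[-1].split("."))

if chrome_version() < MIN_BUILD:
    print("VULNERABLE: update Chrome to 146.0.7680.177/178 or later")
else:
    print("OK: at or above the patched build")
```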

19 US AI Bills Signed Into Law Since Mid-March

The pace of AI legislation in the United States has accelerated sharply. Since mid-March, the number of new AI laws passed in 2026 has jumped from 6 to 25, with 27 additional bills having passed both chambers. Utah sent nine AI-related bills to the governor's desk, becoming one of the busiest state legislatures on AI this spring. California issued an executive order on March 30 mandating AI safeguards — including bias protections and watermarking of AI-generated media — for companies seeking state contracts.

For European enterprises, this fragmentation matters. If you operate in the US market, your AI governance framework now needs to account for a patchwork of state-level requirements alongside the EU AI Act. The compliance surface area is expanding on both sides of the Atlantic simultaneously.

Do now: If your organization deploys AI systems in the US market, task your legal team with mapping which new state-level AI requirements apply to your operations. Start with California and Utah — they are setting the pace.

OpenAI's Industrial Policy Paper: Progressive Vision or Regulatory Capture?

OpenAI published a 13-page paper titled "Industrial Policy for the Intelligence Age" proposing a government-backed framework for managing superintelligence. The proposals include public wealth funds tied to AI-driven growth, portable benefits, a "right to AI" for schools, 32-hour workweeks, and real-time monitoring of labor disruption with wage insurance and direct cash assistance.

The reception has been polarized. Policy experts acknowledge that governments are behind on AI industrial policy. Critics — including Fortune's assessment — call it "regulatory nihilism wrapped in progressive language," noting that OpenAI benefits from the framework it proposes: heavy government investment in AI infrastructure, light-touch regulation, and public funding for the adoption of tools OpenAI sells.

For European enterprise leaders, the paper matters less for its specific proposals and more for the signal: the largest AI companies are actively shaping the regulatory environment they will operate in. It is worth noting the contrast: the EU AI Act was developed through a primarily institutional legislative process, while the emerging US framework is being shaped with considerably more direct industry input. That asymmetry — in my analysis — will define competitive dynamics for the next decade.

Do now: Read the paper. Not because you will implement its proposals, but because your US-headquartered AI vendors will reference it to justify their compliance positions. Know what they are citing.

Deep Dive

Every Knowledge Worker Just Became a Product Owner of an AI Team They Never Asked to Manage

This week's Deep Dive is analysis and interpretation, not reporting. It connects management theory, science fiction, and enterprise AI adoption to argue that the shift to AI tooling is a role change, not a skills upgrade.

We are not becoming better workers. We are becoming product owners of AI teams we never asked to manage.

A friend told me the other day that since their company rolled out AI tooling across the org, their boss hasn't reduced expectations. The opposite happened. The assumption is now: you have AI, so you should deliver more. Faster. At higher quality. Across more domains.

They didn't get a raise. They got a team of probabilistic interns that hallucinate, need constant supervision, and have no institutional memory. And somehow, they're now responsible for the output.

The Zapier Thesis: Every Worker Becomes an Orchestrator

Wade Foster, CEO of Zapier, crystallized this in a recent blog post. His vision of the future org chart is not 100 people in 10 functional teams. It is 50 domain experts, each orchestrating their own team of AI agents. The ideal worker, in his framing, is someone who understands the problem deeply, knows the customer, and can assemble an agent team to execute. Not a coder. Not a designer. Not a marketer. A product owner.

If that doesn't make you pause, it should. Because we are describing a fundamental shift in what it means to work — and we have seen this movie before, across both management theory and science fiction.

Peter Drucker introduced the idea of "knowledge work" in 1959 and later developed the concept of the knowledge worker in The Effective Executive. His thesis was elegant: economic value was shifting from manual labor to intellectual labor. The 20th century's great management challenge was making manual workers productive. The 21st century's challenge would be making knowledge workers productive. He argued that knowledge workers needed to be led, not managed, and that their productivity depended on autonomy, purpose, and the ability to see the results of their work.

Now look at where we are. AI is not replacing knowledge workers. It is redefining what knowledge work means. We are moving from "applying knowledge to work" to "applying judgment to AI output." The knowledge worker of 2026 doesn't produce the analysis. They commission it, review it, correct it, and take accountability for it. Drucker predicted that the most valuable asset of a 21st-century institution would be its knowledge workers and their productivity. What he didn't predict is that those knowledge workers would spend half their day managing digital colleagues who are brilliant but unreliable.

So what? The shift from knowledge worker to AI team lead is not a skills upgrade. It is a role change. PwC's 2026 AI Predictions report argues that technology delivers only a fraction of an initiative's value — the larger share comes from redesigning work itself. If your organization is deploying AI without redesigning roles, compensation structures, and management expectations, you are setting up for a productivity illusion where everyone is busier but nothing is better.

Ender's Game Was a Product Management Manual

This is where science fiction got there first.

In Orson Scott Card's Ender's Game (1985), a child prodigy is trained to command humanity's military forces. The twist — the one that makes the book a leadership text referenced in military education — is that Ender doesn't fight. He commands. He builds toon leaders who think independently, trains them to make decisions under uncertainty, and creates systems where his units act without waiting for instruction. His genius is not tactical brilliance. It is the ability to decompose complex goals into autonomous sub-teams and orchestrate their execution.

Ender, in 1985, was a product owner managing an agent swarm.

The parallel is almost uncomfortable. In the novel, Ender succeeds because he doesn't micromanage. He defines intent, builds capable sub-units, reviews outcomes, and iterates. He learns to balance control with autonomy — too much oversight and the system becomes rigid, too little and it falls apart. This is exactly the tension Foster describes at Zapier: workflows are deterministic and reliable but brittle. Agents are flexible and creative but costly and unpredictable. The human's job is to know when to lock something into a workflow and when to let an agent figure it out.

But there is a darker thread in Ender's story. The cost of commanding is isolation. Ender becomes separated from his peers. The weight of responsibility — of being the one accountable for systems he does not fully control — takes a psychological toll. Card makes this point deliberately: the best leaders carry a burden that their teams never see. When we tell every knowledge worker that they are now a team lead of AI agents, we are distributing that burden across the entire workforce without the support structures that leaders traditionally receive.

So what? The pattern across the industry is clear: widespread experimentation with AI agents, minimal production deployment. The governance gap is not just technical. It is human. Organizations need to invest in the management skills that AI orchestration requires — systems thinking, quality assurance instincts, failure mode recognition — or they will burn out the very people they are trying to make more productive.

Asimov's Insight: Governing Probabilistic Systems Is a Different Skill

Isaac Asimov explored a different angle in the Foundation series. Hari Seldon's psychohistory is a system for managing probabilistic outcomes at scale. Seldon doesn't control events. He models them, sets up initial conditions, and trusts the system to self-correct within boundaries. When the system deviates, human intervention is required — not to do the work, but to recalibrate. This is precisely the manufacturing metaphor Foster uses: when a machine produces a defective widget, you don't fix the widget. You fix the machine. Humans add value not by producing output, but by designing and tuning the systems that produce it.

Asimov's insight, written decades before anyone imagined large language models, is that governing probabilistic systems requires a different kind of intelligence than executing deterministic ones. It requires pattern recognition across failures, comfort with uncertainty, and the discipline to intervene only when your intervention will actually improve the system. Most people are not trained for this. Most organizations do not reward it.

The accountant who now manages an AI that drafts reports is not doing less accounting. They are doing accounting plus quality control plus prompt engineering plus output validation. The output expectation went up. The title stayed the same. The compensation didn't move.

Drucker would have had something sharp to say about this. He believed organizations fail not because they adopt the wrong technologies, but because they pursue activity without purpose. His question for the AI age would not be "how do we deploy more agents?" It would be: "to what end? Who is responsible for what the algorithm does? And are we confusing busyness with productivity — again?"

So what? The organizations that will thrive are the ones that treat this transition honestly: redefine roles, restructure compensation, invest in the management skills that orchestration requires, and stop pretending that giving someone a Copilot license is the same as giving them a team. Ender won his war. But he didn't do it with superior firepower. He did it because someone invested years in teaching him how to think about systems, how to build teams that could act independently, and how to carry the weight of decisions made under uncertainty. We are asking every knowledge worker to become Ender. The least we can do is give them Battle School.

Next Steps

  1. This week: Use the Glasswing announcement to audit your own AI risk assessment process. If you discovered a model capability that posed systemic risk, do you have a documented escalation path? If not, draft one.

  2. This month: Map your AI agent pilots against Cisco's Zero Trust framework and NIST's emerging agent security themes (identity, least-privilege, task-scoped permissions, auditability). Identify which controls are missing before you move from pilot to production.

  3. This quarter: Conduct a role redesign assessment for teams using AI tooling. Identify where "knowledge worker plus AI assistant" has actually become "AI team lead" — and whether your job descriptions, training programs, and compensation structures reflect that reality.

Builder Spotlight

Trent AI — Securing AI Agents Across Their Entire Lifecycle

Profiling teams building for the European AI reality.

The company: Trent AI, London, UK
What they do: Layered security platform for autonomous AI agents — from development through deployment to runtime.
Why now: Enterprise experimentation with AI agents is widespread, but production adoption remains low. The security gap is the single biggest blocker.

Founded in 2025 by former AWS engineering team leaders, Trent AI emerged from stealth in April 2026 with $13 million in seed funding led by LocalGlobe and Cambridge Innovation Capital. The company builds what most enterprise security stacks are missing: purpose-built security controls for AI agents that act autonomously — executing API calls, modifying infrastructure, and making decisions without human approval.

The timing is deliberate. NIST's AI Agent Standards Initiative identified agent identity, least-privilege access, and auditability as recurring concerns. Cisco just shipped Zero Trust for AI agents. Anthropic just demonstrated that AI can find vulnerabilities faster than any human team. Trent AI is building the infrastructure that sits between these capabilities and your production environment — ensuring that agents operate within defined boundaries, that their actions are auditable, and that drift from intended behavior triggers alerts before it triggers incidents.

For enterprise teams moving AI agents from pilot to production, Trent AI is the kind of tooling that turns governance from a blocker into an enabler.

This Week in Tech

Google Patches Fourth Chrome Zero-Day of 2026

Google released an out-of-band update for Chrome 146 to fix CVE-2026-5281, a use-after-free bug in the WebGPU implementation (Dawn) that was actively exploited in the wild. CISA added it to the Known Exploited Vulnerabilities catalog, requiring federal agencies to patch by April 15. This is the fourth actively exploited Chrome zero-day this year — and we are only in April.

Why it matters: Four browser zero-days in four months sets a pace that should concern any enterprise security team. In the same week that Claude Mythos demonstrated it can find vulnerabilities across every major browser autonomously, a real-world exploit was burning in Chrome. The offensive capability gap is closing fast.

OpenAI Hits $25B Annualized Revenue, Eyes IPO

OpenAI has surpassed $25 billion in annualized revenue and is reportedly taking early steps toward a public listing, potentially as soon as Q4 2026. The company simultaneously published its "Industrial Policy for the Intelligence Age" paper — a 13-page blueprint for government AI investment that critics call self-serving.

Why it matters: An OpenAI IPO would be the defining moment for AI market valuations. For European enterprises locked into OpenAI APIs, it also raises questions about long-term pricing, data governance commitments, and whether a public company under shareholder pressure maintains the same safety posture as a capped-profit entity.

Docker Engine Auth Bypass Gets CVSS 8.8

A high-severity vulnerability (CVE-2026-34040) in Docker Engine allows attackers to bypass authorization plugins — a flaw stemming from an incomplete fix for CVE-2024-41110. The vulnerability affects any Docker deployment relying on auth plugins for access control.

Why it matters: Docker containers are the foundation of most enterprise AI deployment pipelines. An authorization bypass in the engine itself — not a misconfiguration, but a code-level flaw — means that even correctly configured environments may be exposed. If you run AI workloads in Docker, patch immediately.
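
A quick way to tell whether a host is in the affected class is to ask the daemon whether any authorization plugins are configured at all. A sketch using the standard docker CLI (the remediation itself is still the Engine patch):

```python
import json
import subprocess

# Ask the daemon which authorization plugins are configured (null if none)
out = subprocess.run(
    ["docker", "info", "--format", "{{json .Plugins.Authorization}}"],
    capture_output=True, text=True, check=True,
).stdout
plugins = json.loads(out)

if plugins:
    print(f"Authorization plugins in use ({plugins}): this host relies on "
          "the bypassed control; prioritize the Engine patch")
else:
    print("No authorization plugins configured on this daemon")
```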

EU Digital Omnibus Readiness Scorecard

15-question self-assessment covering the AI Act, GDPR, NIS2, and DORA changes from the EU Digital Omnibus simplification package. Score your organization's readiness in 20 minutes. Includes...

Free.

Build Lab — New Series

Your AI Has Amnesia. Here's How to Fix It.

Launching a new section exploring the tools, architectures, and workflows that make AI actually useful — for daily work and at enterprise scale.

Two ideas collided this weekend that shouldn't live apart.

Andrej Karpathy just shared his approach to LLM-powered knowledge bases: index raw sources — articles, papers, repos — into a directory, have the LLM compile a structured wiki from them, then query it for complex research questions. No fancy RAG pipelines. No vector databases. Just markdown, good indexing, and an LLM that maintains everything. He runs his at ~400K words across ~100 articles, and the LLM handles it without retrieval augmentation.

The architecture is a three-folder system: raw/ stores unstructured source material. wiki/ holds LLM-compiled summary articles, one per concept. index.md is a master map sized to fit the model's context window. At query time, the LLM reads the index first, identifies relevant articles, and loads only those. No embedding, no vector search. Compilation happens once; maintenance is incremental.
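
In code, the query-time flow is only a few lines. What follows is a reconstruction of the pattern, not Karpathy's actual script: the folder names follow his setup, while load_context and pick_articles are hypothetical names.

```python
from pathlib import Path

KB = Path("knowledge-base")  # contains raw/, wiki/, and index.md

def load_context(question: str, pick_articles) -> str:
    """Two-step retrieval with no embeddings: read the index first,
    then load only the wiki articles flagged as relevant."""
    index = (KB / "index.md").read_text()      # master map, sized to fit context
    relevant = pick_articles(index, question)  # an LLM call returning filenames
    articles = "\n\n".join(
        (KB / "wiki" / name).read_text() for name in relevant
    )
    return f"{index}\n\n{articles}\n\nQuestion: {question}"

# pick_articles is whatever LLM call you prefer; stubbed here for illustration
prompt = load_context("How does EU AI Act risk classification work?",
                      pick_articles=lambda index, q: ["eu-ai-act.md"])
```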

Separately, MemPalace launched as an open-source AI memory system. Built by Milla Jovovich and Ben Sigman, MIT-licensed, and rapidly gaining traction on GitHub. It stores your entire conversation history — every decision, every debugging session, every architectural choice — in a searchable spatial hierarchy. Wings (people and projects), halls (types of memory), rooms (specific ideas). The project claims 96.6% raw recall on the LongMemEval benchmark (though independent reviewers have debated the methodology, particularly around hybrid scoring). 170-token startup load. Fully local on ChromaDB and SQLite. Zero API costs.
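
MemPalace's actual interface isn't shown here, but the spatial hierarchy maps naturally onto metadata filtering in ChromaDB, which it is built on. A conceptual sketch follows: the tags, IDs, and document text are invented, and only the chromadb calls are real.

```python
import chromadb

client = chromadb.PersistentClient(path="./memory-demo")  # fully local
memories = client.get_or_create_collection("memories")

# Store a decision, tagged with its place in the spatial hierarchy
memories.add(
    ids=["2026-04-07-risk-approach"],
    documents=["Rejected approach A for the Q1 risk model: review latency "
               "exceeded our incident-reporting window."],
    metadatas=[{"wing": "project-atlas",  # person or project
                "hall": "decisions",      # type of memory
                "room": "risk-model"}],   # specific idea
)

# Recall: semantic search scoped to a single wing
hits = memories.query(query_texts=["why did we drop approach A?"],
                      n_results=3, where={"wing": "project-atlas"})
print(hits["documents"][0])
```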

These solve two halves of the same problem.

Knowledge base = what the domain knows. Industry regulations, technical documentation, research, competitive intelligence. It exists whether or not you have ever interacted with it.

Memory system = what you know. Your decisions, your context, your history with a problem. It only exists because you created it through work.

Most AI deployments today have neither. Your AI agent starts every conversation from zero. It doesn't know your company's risk appetite. It doesn't remember that you rejected approach A last Tuesday for reasons that still apply. It can't connect your Q1 compliance review to the new system you are classifying under the EU AI Act.

An AI agent with both layers can answer questions like: "Based on our earlier risk assessment and the current regulatory guidance, how should we classify this new system under Article 6?"

That requires domain knowledge AND organizational memory. Today, most teams have neither persisted properly.
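
Composing the two layers at query time could look like the sketch below. It reuses the hypothetical load_context helper and the memories collection from the sketches above; select_articles and llm stand in for your own model calls.

```python
def answer(question: str) -> str:
    """One grounded prompt from both layers (reuses the sketches above)."""
    domain = load_context(question, pick_articles=select_articles)  # domain layer
    recall = memories.query(query_texts=[question], n_results=5)    # memory layer
    history = "\n".join(recall["documents"][0])
    return llm(f"{domain}\n\nRelevant organizational memory:\n{history}\n\n"
               f"Answer, citing both sources: {question}")
```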


This series will explore how to build both — starting with the architecture, then going deep on practical implementation for daily work and enterprise scale. These tools should be empowering everyone, not just engineers with custom setups.

Next edition: deep dive into the Knowledge Base layer.

Next Steps

What to read now?

  1. "Project Glasswing: Securing Critical Software for the AI Era" — Anthropic The primary source on Glasswing. Read the governance structure, the 90-day disclosure commitment, and the specific vulnerability examples. This is the document you will reference when your board asks what responsible AI risk management looks like.

  2. "The Glasswing Paradox: The Thing That Can Break Everything Is Also The Thing That Fixes Everything" — Picus Security The best independent analysis of Glasswing's implications. Frames the core tension clearly: the same capability that secures your systems is the capability that threatens them.

  3. "LLM Knowledge Bases" — Andrej Karpathy The original Gist. 15-minute read that will change how you think about RAG, vector databases, and persistent AI knowledge. Pair it with MemPalace for the full picture.

  4. "Industrial Policy for the Intelligence Age" — OpenAI Agree or disagree, this paper will shape the US regulatory conversation. Know what your AI vendors are citing before they cite it at you.

That’s it for this week.

This week's edition had a model too dangerous to ship, a workforce being quietly promoted to AI team leads without the title or the pay, and a memory problem that most enterprise AI deployments haven't even named yet. The thread connecting all three is governance — of capabilities, of people, and of knowledge. The organizations that get this right will not be the ones with the most AI. They will be the ones that know what their AI knows, what it doesn't, and who is accountable when it gets it wrong.

Until next Thursday, João

OnAbout.AI delivers strategic AI analysis to enterprise technology leaders. European governance lens. Vendor-agnostic. Actionable.

If this landed in your inbox from a forward — subscribe here to get the full picture every week.
