
Back-to-School AI, DORA’s Wake-Up Call, and Vendor Risk in the Spotlight

Why seasonal surges, platform maturity, and third-party breaches matter more than model labels

September is proving once again that AI adoption doesn’t wait for perfect policies or polished platforms. OpenAI’s daily usage is spiking with schools back in session, Google’s new DORA report shows AI is amplifying team dynamics rather than fixing them, and a wave of vendor cyberattacks reminds us that third-party risk is now operational reality. The pattern is clear: AI isn’t a side project; it’s an amplifier of whatever systems and safeguards you already have in place. Are your foundations strong enough to handle the surge?

TL;DR
  • Back-to-school + lighter models = usage spikes that preview how everyday AI will pressure governance and enablement inside the enterprise.

  • AI amplifies your internal processes: the DORA lens shows platform engineering and feedback loops matter more than model shininess. If your internal kitchen is good, it will get better; if not, everyone will see how bad it is.

  • Third-party incidents turned into outages, reminding us that vendors’ AI practices are effectively your data posture.

  • Do now: contain shadow AI, fund golden paths, and add AI clauses to vendor due diligence, so amplification works in your favour.

The Brief

Back-to-School, Back-to-AI

What happened: September brings a visible usage surge as schools restart and cheaper, simpler models spread; we’re watching consumer behaviour shape work patterns in real time.
Why it matters: Seasonality is a capability test. If education drives token spikes, your year-end close or launch calendar will, too, so the real constraint isn’t model capacity but enablement and safety at peak.
Exec angle: treat consumer AI fluency as pre-training for the enterprise and convert it through sanctioned access, templates, and telemetry rather than pretending it isn’t happening.
Do now: publish an AI “green/yellow/red” data one-pager, enable a sanctioned chat tier with auto-redaction + usage logs, and seed team-specific prompt packs so the path of least resistance is also the safest.
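That “auto-redaction” layer doesn’t need to start as a product purchase. A minimal Python sketch, where the patterns and the `redact` helper are illustrative assumptions rather than a recommended ruleset:

```python
import re

# Illustrative patterns only: real deployments need locale-aware rules
# and a review process for false negatives.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive spans with tags; return cleaned text plus a hit log."""
    hits = []
    for label, pattern in PATTERNS.items():
        prompt, n = pattern.subn(f"[{label}]", prompt)
        if n:
            hits.append(f"{label}x{n}")
    return prompt, hits

clean, log = redact("Contact jane.doe@example.com, key sk-abcdef1234567890XYZ")
```

The hit log is the part that matters for governance: shipped into the same usage telemetry the sanctioned tier already emits, it tells you what people keep trying to paste, not just that redaction fired.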

DORA 2025: AI Is an Amplifier

What happened: near-universal AI adoption inside software teams, but outcome variance tracks the maturity of internal platforms, safety nets, and feedback loops, not the model label.
Why it matters: AI scales your strengths and broadcasts your weak spots; where platform rituals are shaky, AI just makes it obvious faster.
Exec angle: invest in golden paths (SDKs, templates, eval harness, approval flow) and observability (prompt/result logging, drift checks) so improvement compounds and incidents are explainable.
Do now: run a DORA-style mirror by scoring flow, feedback, and platform maturity; then fund two bottleneck fixes rather than ten scattered pilots.
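The “DORA-style mirror” can start as three numbers per team. A toy sketch, assuming you can export per-deployment lead times and failure flags from your CI system; the function name, record shape, and sample data are mine, not DORA’s:

```python
from datetime import timedelta

def dora_mirror(deploys: list[tuple[timedelta, bool]], days: int) -> dict:
    """Compute three DORA-style signals over an observation window.

    Each record is (lead_time, failed) for one deployment.
    """
    lead_times = sorted(d[0] for d in deploys)
    median_lead = lead_times[len(lead_times) // 2]
    failures = sum(1 for d in deploys if d[1])
    return {
        "deploy_per_week": round(len(deploys) / days * 7, 1),
        "median_lead_hours": median_lead.total_seconds() / 3600,
        "change_failure_rate": failures / len(deploys),
    }

# Toy data: three deployments over a four-week window.
scores = dora_mirror(
    [(timedelta(hours=6), False), (timedelta(hours=30), True),
     (timedelta(hours=12), False)],
    days=28,
)
```

Running it on two teams side by side is usually enough to surface the one bottleneck worth funding first.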

Vendor Risk Became the Outage (and the AI Link)

What happened: think of the number of flights canceled or delayed by a single cyberattack on a third party. This week’s incidents translated directly into front-line disruptions and measurable P&L hits: classic single-vendor, multi-tenant blast radius.
Why it matters: attackers scale with AI, but so do your vendors; ungoverned vendor AI workflows (prompt logs, fine-tuning, support tools) can quietly handle your data with their policies.
Exec angle: vendor management now includes AI posture—models, providers, data retention, logging, human-in-the-loop practices, and hosting regions.
Do now: add an AI clause to due diligence (ModelBOM + data flows), contract for per-tenant telemetry, and tabletop a 72-hour vendor-down scenario with a pre-approved degraded-mode play.
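“ModelBOM” here is shorthand, not a formal standard. As a thought experiment, the due-diligence record might look like the following, where every field name and threshold is an assumption about what your questionnaire could capture:

```python
from dataclasses import dataclass

@dataclass
class ModelBOM:
    """One plausible shape for a vendor AI-posture record; not a standard schema."""
    vendor: str
    models: list[str]
    hosting_regions: list[str]
    retention_days: int
    trains_on_customer_data: bool
    human_in_the_loop: bool
    per_tenant_telemetry: bool

    def red_flags(self) -> list[str]:
        # Policy thresholds below (e.g. 30-day retention) are illustrative.
        flags = []
        if self.trains_on_customer_data:
            flags.append("customer data used for training")
        if self.retention_days > 30:
            flags.append(f"retention {self.retention_days}d exceeds 30d policy")
        if not self.per_tenant_telemetry:
            flags.append("no per-tenant telemetry hook")
        return flags

bom = ModelBOM("Acme SupportAI", ["gpt-4o"], ["eu-west-1"], 90, False, True, False)
```

The point of the structure is that `red_flags()` turns a vendor questionnaire into something a procurement pipeline can gate on, rather than a PDF nobody re-reads.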

These data points are early signals of a deeper shift in how organisations operate under AI pressure. Seasonal spikes show us that usage patterns now bend to life rhythms, not enterprise rollouts. DORA reveals that impact flows through operating systems, not model names. And third-party outages remind us that governance is only as strong as your weakest vendor.

Together, they raise the next question in this series: what happens when AI becomes less about discrete tools and more about the invisible infrastructure behind how we learn, work, and decide?

Deep Dive

Shadow AI, Platform Reality

From AI Hype to Real Productivity: Diagnosing and Upleveling Your Team Archetype

The Google DORA 2025 report offers a new way to look inside your organisation. AI by itself won’t save your velocity, make your workflows hum, or magically solve every issue you encounter. It will put a spotlight on what’s already working (or not) inside your org. Reading DORA 2025, I pictured an open-kitchen service. When the brigade moves in rhythm, plates fly and guests relax. When tickets pile up, tools clash, and stations bleed into each other, the dining room senses it. AI is that window into the kitchen. It accelerates what’s disciplined and broadcasts what isn’t: lead time, deploy cadence, change failure rate, time to restore, reliability. Get your kitchen under control, then add AI.

With nearly every tech team using AI in some way, the divide is growing not between adopters and laggards but between teams that have the right foundations and those that don’t. So, how do you figure out where you stand, and what it takes to move up the curve?

  1. Find Your Baseline: The Team Archetype Lens

    DORA lays out seven distinct archetypes, from survival-mode “Foundational Challenges” to “Harmonious High Achievers” who consistently deliver value with stability and resilience. To pinpoint your team’s place on this spectrum, go beyond gut feel:

    1. Self-assessment: Run a short, honest team survey on job satisfaction, burnout, amount of meaningful work, clarity of process, and confidence in your current tools. Use clear, anonymised scoring and layer a discussion on the real blockers or enablers your team sees.

    2. Delivery Metrics: Pull your GitHub, Azure DevOps, or internal dashboards for some cold, hard numbers: deployment frequency, lead time, failure/rework rates, incident recovery, cycle time.

      Trends matter more than perfection; look for patterns of friction or flow. When you put these together, a picture will emerge. For example, frequent deployments but lots of failed fixes may signal you’re moving fast without enough guardrails: classic “High Impact, Low Cadence.” Long delays and high burnout? That’s likely “Constrained by Process” or “Legacy Bottleneck.”

  2. Read the Impact: Why the Archetype Matters for AI

    DORA evidence shows that the big productivity jumps from AI are being realised by teams who already have solid platforms, feedback loops, and a modicum of trust and psychological safety. Teams struggling with fundamentals? AI just adds noise, risk, and sometimes amplifies tech debt. So, before chasing the latest LLM or starting another pilot, ask: Are you likely to accelerate what’s working or what’s broken?

  3. How to Move Up: Building a Roadmap

    1. Get honest about now: Use your survey and metrics to nail your current archetype.

    2. Pick a realistic target: Don’t try to leap from “Foundational Challenges” to “Harmonious High Achiever” in six months. For many, “Stable and Methodical” or “Pragmatic Performer” is a solid, measurable next step.

    3. Design interventions: Depending on your archetype, this might mean doubling down on platform and tooling, creating space for retros and blameless postmortems, or refactoring away legacy pain points.

    4. Ship your ‘AI Golden Path’: Wrap sanctioned templates, workflows, and analytics around how AI gets used. Don’t let shadow tools or ad-hoc hacks set your pace; make the default options the safe, productive choice.

    5. Track, review, adjust: Set quarterly check-ins using the same surveys and metrics. Watch for bottlenecks, celebrate incremental progress, and course-correct as new realities (and new tools) set in.
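The roadmap starts with an honest baseline, and the survey-plus-metrics triage can be mechanised crudely. A sketch whose thresholds and mapping are illustrative assumptions mirroring the examples earlier in this piece, covering only a few of DORA’s seven archetypes:

```python
def classify_archetype(deploys_per_week: float, failure_rate: float,
                       lead_time_days: float, burnout_score: float) -> str:
    """Rough triage from delivery metrics plus a 0-1 survey burnout score.

    Thresholds are illustrative, not DORA's; treat the output as a
    conversation starter for the team retro, never a verdict.
    """
    if lead_time_days > 14 and burnout_score > 0.6:
        return "Constrained by Process / Legacy Bottleneck"
    if deploys_per_week >= 3 and failure_rate > 0.25:
        return "High Impact, Low Cadence"
    if deploys_per_week >= 3 and failure_rate <= 0.1:
        return "Harmonious High Achiever (candidate)"
    return "Stable and Methodical (default bucket)"

# Fast-moving team with a high change-failure rate.
label = classify_archetype(deploys_per_week=5, failure_rate=0.3,
                           lead_time_days=2, burnout_score=0.3)
```

Re-running the same function at each quarterly check-in gives you the trend line the roadmap’s step 5 asks for.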

The Bottom Line

AI is a force multiplier for whatever foundations you have, good or bad. Diagnosing your archetype, aligning metrics to reality, and building an uplift roadmap are what actually move the needle on productivity, trust, and value. Don’t let AI dazzle you out of the basics; build the scaffolding, then let the tools scale what works.

Next Steps

What to do now?

  • Contain shadow AI: sanctioned chat with auto-redaction, domain allow-lists, and a usage leaderboard to reward good patterns.

  • Golden-path v1: template repo, prompt pack, eval rubric; publish in the developer portal with two “hello, finance/ops” examples.

  • Vendor AI due diligence: require ModelBOM + data retention + region disclosures; add real-time incident hooks and outbound egress controls.

  • DORA mirror: measure flow/feedback/platform across two teams; fund the most constraining fix immediately.

  • Run the outage drill: 72-hour vendor-down tabletop with pre-approved degraded-mode operations and comms.

That’s it for this week.

As usage spikes, reports roll in, and outages remind us of weak links, one thing is constant: AI doesn’t arrive in isolation; it lands in the systems, teams, and vendors you already rely on. The choice is whether it amplifies clarity or chaos.

Stay curious, stay informed, and keep pushing the conversation forward.

Until next week, thanks for reading, and let’s navigate this evolving AI landscape together.