Teens, Tools, and the Next Decade: What OpenAI & Anthropic’s new data mean for work, schools, and our kids
AI adoption is changing how kids learn, work, and make decisions—with safety and judgment the new frontiers.
Your teenager is more likely to ask ChatGPT for advice than to use it for homework. This week's data from OpenAI shows that 70% of ChatGPT conversations happen outside work: people are using AI to plan dinners, resolve arguments, and make life decisions. Meanwhile, Anthropic reports that the share of conversations in which users delegate entire tasks to AI has risen to 39%, up from 27% just months ago. We're not just adopting AI tools; we're rewiring how humans make choices.
OpenAI published the largest consumer usage study of ChatGPT to date, showing broadening demographics and a 70/30 non-work to work split. Anthropic released a fresh Economic Index: coding still leads, but education and science usage are gaining share. Then OpenAI took a hard public stance: for teens, safety trumps privacy and freedom, with age prediction and parental controls incoming. Layer on Yampolskiy’s now-viral warnings of steep job displacement, and a blunt question emerges for boards, ministers, and parents: What will we teach the kids who’ll compete with their own copilots?
TL;DR
ChatGPT usage is growing rapidly and becoming a broadly accessible global tool. OpenAI's study indicates adoption is surging beyond early niches: 70% of consumer usage is non-work, and growth is roughly 4x faster in the lowest-income countries than in the highest-income ones. Gender gaps have narrowed, with usage now closer to the adult population distribution.
Anthropic sees the usage mix shifting: coding still leads at ~36%, though less of it is code correction. Education and science usage is rising, and “directive” (delegation) conversations are up sharply, with more tasks handed off end-to-end, especially via the API.
OpenAI will, by default, offer a restricted experience to users under 18. It will predict age, default to the teen mode under uncertainty, block certain content categories, and stand up parental controls, explicitly putting safety ahead of privacy and freedom for minors.
Strategic takeaway: If kids learn with AI by default, education systems must teach AI judgment (not just prompting), source attribution, and safety norms now, well before labor markets compress around automation-heavy roles. UNESCO and the OECD already have workable guidance.
The Brief
OpenAI: How people actually use ChatGPT (largest study so far)
What happened: A privacy-preserving analysis of 1.5M conversations reports ~70% non-work and ~30% work usage; “Asking” (advice and decision support) accounts for ~49% of messages, highlighting the value people place on AI for guidance and better judgment; and adoption in lower-income countries is growing roughly 4 times faster than in higher-income ones. The gender usage gap has closed rapidly: while about 80% of early users had masculine names, feminine-associated names accounted for just over half (52%) of users by mid-2025, a shift away from historically male-dominated tech adoption and a signal of greater inclusion. (OpenAI)
Why it matters: Consumer surplus and skills discovery are increasingly realised outside payroll and traditional workplace channels. Procurement and value metrics will undercount the true impact, especially as a more diverse user base, including women and residents of lower-income countries, adopts AI in everyday life, multiplying productivity and creative potential in previously underrepresented groups.
Exec angle: Treat widespread consumer use as pre-training for workforce enablement. With a shrinking gender gap and rapid uptake beyond high-income markets, design interventions that harness “home learnings” for professional and organisational growth, embedding patterns that are secure, governed, and inclusive of all demographic groups.
Do now: Ship approved playbooks (comms, product, finance, R&D) and decision-support guardrails (retrieve, cite, compare) inside your AI portal.
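To make that concrete, here is a minimal sketch (in Python) of what a retrieve-cite-compare guardrail could look like inside an internal AI portal. The `search_sources` and `llm_answer` functions are hypothetical stand-ins for your own retrieval index and model gateway, not any specific product's API.

```python
# Sketch of a retrieve-cite-compare guardrail for an internal AI portal.
# `search_sources` and `llm_answer` are hypothetical stand-ins for your own
# retrieval index and model gateway; swap in whatever your stack provides.

from typing import Callable

def answer_with_citations(
    question: str,
    search_sources: Callable[[str], list[dict]],   # returns [{"id": ..., "text": ...}, ...]
    llm_answer: Callable[[str], str],               # returns model text for a prompt
    min_citations: int = 2,
) -> dict:
    # 1. Retrieve before reasoning: pull candidate sources first.
    sources = search_sources(question)
    if not sources:
        return {"status": "refused", "reason": "no sources found"}

    # 2. Cite as you go: ask the model to reference sources by id.
    context = "\n".join(f"[{s['id']}] {s['text']}" for s in sources)
    prompt = (
        "Answer the question using ONLY the sources below. "
        "Cite source ids in square brackets after each claim.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    draft = llm_answer(prompt)

    # 3. Compare: check that enough distinct sources are actually cited.
    cited = {s["id"] for s in sources if f"[{s['id']}]" in draft}
    if len(cited) < min_citations:
        return {"status": "needs_review", "answer": draft, "cited": sorted(cited)}
    return {"status": "ok", "answer": draft, "cited": sorted(cited)}
```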
Anthropic Economic Index: usage mix is tilting toward learning & science
What happened: The Anthropic Economic Index, based on large-scale usage data, shows shifts in how people interact with AI: educational use rose from ~9.3% to 12.4%, science from 6.3% to 7.2%, and coding remains the largest category at ~36%. “Directive” (delegate/full automation) conversations jumped from 27% to 39%, reflecting rising trust in AI to own task outcomes end to end. The geographic distribution of adoption remains highly uneven and closely tied to income. Regions with higher per-capita GDP see markedly higher and more diverse usage. Lower-income countries often concentrate on core technical tasks like coding.
Why it matters: The increase in delegation signals that AI is moving beyond simple assistance: task displacement is now appearing in program creation, not just debugging, with workflows shifting from supervising AI to letting it “own the outcome”. This accelerates changes in work itself and raises the risk of deepening economic and skills inequalities if advanced capabilities and productivity gains remain concentrated in high-income regions and industries, echoing past technological revolutions.
Exec angle: To realise and measure AI’s true ROI, align returns to tangible units of work (issues resolved, drafts approved) rather than time saved. Because adoption rates and usage mix vary greatly by geography and function, focusing on task-level output helps identify where delegation truly adds enterprise value and where human oversight remains essential.
Do now: Instrument “delegate-worthy” workflows and track their effectiveness using pass rates (first-try acceptance, no required human edits) by function. Since regions and roles differ in readiness for full automation, ensure measurement and governance reflect this divergence, helping pinpoint both where AI can deliver autonomy and where equity gaps or risks of marginalisation may be emerging.
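One way to make pass rates concrete: assuming each delegated task is logged as a small record with a function label and review outcome, a sketch like the one below computes first-try acceptance per function. The field names are illustrative, not a standard schema.

```python
# Sketch: computing first-try acceptance ("pass rate") per business function
# from a log of delegated AI tasks. The record fields are illustrative.

from collections import defaultdict

delegation_log = [
    {"function": "finance", "accepted_first_try": True,  "human_edits": 0},
    {"function": "finance", "accepted_first_try": False, "human_edits": 3},
    {"function": "comms",   "accepted_first_try": True,  "human_edits": 0},
    {"function": "comms",   "accepted_first_try": True,  "human_edits": 1},
]

def pass_rates(log: list[dict]) -> dict[str, float]:
    totals, passes = defaultdict(int), defaultdict(int)
    for record in log:
        totals[record["function"]] += 1
        # "Pass" = accepted on the first try with no human edits required.
        if record["accepted_first_try"] and record["human_edits"] == 0:
            passes[record["function"]] += 1
    return {fn: passes[fn] / totals[fn] for fn in totals}

print(pass_rates(delegation_log))  # e.g. {'finance': 0.5, 'comms': 0.5}
```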
OpenAI on teen safety: an explicit re-weighting of principles
What happened: OpenAI is explicitly reprioritising its AI principles to place safety above privacy and freedom for minors. The company is rolling out an age-prediction system that defaults uncertain cases to an “under-18” experience with stricter protections. New parental controls will allow parents to link accounts, manage which features are available, restrict access times, and receive alerts if teens show signs of acute distress or self-harm risk.
Why it matters: This formalises that AI conversations are personally sensitive data deserving privileged protection, akin to medical or legal confidentiality. It necessitates protective tradeoffs, even at the expense of some privacy and freedom. These choices set important precedents for responsible AI governance and society’s evolving trust framework around emerging technology, especially given recent adverse events linked to AI use by youth.
Exec angle: Organisations serving youth or the education sector must urgently tailor their AI models, logging, and escalation workflows with age-aware variants that respect these new safety priorities. This includes differentiated policies, detection, and human-in-the-loop escalation designed specifically for minors, work that is critical to regulatory compliance and reputational stewardship.
Do now: Map all user touch points to age-tiered safety and privacy policies, segregate telemetry data from teens for targeted compliance and monitoring, and practice fast, reliable human-in-the-loop escalation drills for distress or misuse events. This proactive posture is key to protecting minors and ensuring alignment with emerging regulatory and ethical demands
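As a rough illustration of age-tiered policies in configuration form, the sketch below defaults uncertain cases to the stricter teen experience, mirroring OpenAI's stated approach; the tier names, blocked categories, and thresholds are hypothetical, not OpenAI's actual settings.

```python
# Sketch: routing a session to an age-tiered policy. The tier names, blocked
# categories, and telemetry flags are illustrative, not a product spec.

from typing import Optional

POLICIES = {
    "under_18": {
        "blocked_categories": ["graphic_self_harm", "sexual_content", "flirtatious_roleplay"],
        "segregate_telemetry": True,      # keep teen data in its own store
        "escalation": "human_on_call",    # route distress signals to a person
    },
    "adult": {
        "blocked_categories": ["illegal_content"],
        "segregate_telemetry": False,
        "escalation": "standard_queue",
    },
}

def select_policy(predicted_age: Optional[int], confidence: float) -> dict:
    # Default to the stricter teen experience when age is unknown or uncertain.
    if predicted_age is None or confidence < 0.9 or predicted_age < 18:
        return POLICIES["under_18"]
    return POLICIES["adult"]
```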
Education guidance exists—use it
What happened: UNESCO’s Generative AI (GenAI) guidance and the OECD’s AI literacy frameworks have articulated clear principles emphasising human-centred design, privacy, equity, and AI literacy as critical foundations. These frameworks provide actionable, internationally informed policy recommendations and curriculum models for integrating AI technologies responsibly into education.
Why it matters: Organisations and policymakers no longer need to invent AI educational policy from scratch; instead, the urgent task is to operationalise existing standards to ensure equity, privacy, and human agency in AI use. This foundational work addresses risks such as misinformation, bias, and uneven access, setting a durable base for AI literacy that prepares learners for ethical and effective AI engagement.
Exec angle: Shift strategic focus from binary “ban versus allow” debates to comprehensive curriculum redesign emphasising judgment, verification, and collaborative use of AI. Embedding AI literacy across subjects equips students with durable skills to critically evaluate AI outputs and collaborate productively with AI systems in diverse contexts.
Do now: Launch a focused 3-month AI-literacy sprint including professional development for teachers, updated student learning modules, and redesigned assessment practices, benchmarking outcomes against UNESCO/OECD frameworks. Rapidly equipping educators and learners with these competencies is essential for shaping responsible AI adoption at scale and fostering lifelong learning skills in a digitally transformed world.
This week’s infra backdrop: “Stargate UK” and a GPU flood
What happened: OpenAI announced Stargate UK, a partnership with NVIDIA and Nscale to establish sovereign AI infrastructure across British data centres. Initial offtake is planned at 8,000 GPUs in early 2026, with potential scaling to 31,000 GPUs. This rollout is part of a broader UK-US tech collaboration deploying around 120,000 GPUs across multiple UK sites, including Cobalt Park in the newly designated AI Growth Zone in Northeast England.
Why it matters: Sovereign compute infrastructure empowers regulated sectors such as finance, healthcare, education, and government to operate AI workloads within jurisdictional boundaries that comply with data residency, privacy, and security requirements. This jurisdictional advantage is critical for compliance-sensitive workloads and strengthens national competitiveness by enabling local innovation and secure AI adoption without reliance on foreign cloud providers.
Exec angle: For organisations in finance, health, and education sectors in the UK and EU, strategic planning must include in-region model hosting and data-residency-aware AI copilots. Embedding AI capabilities locally ensures compliance with regulatory frameworks and mitigates risks related to data sovereignty while maintaining operational efficiency and AI performance.
Do now: Incorporate sovereign compute requirements into 2026 RFPs and pilot local large language model (LLM) inference for sensitive or regulated workloads. Early investments and pilots will position organisations for smoother compliance and competitive advantage as data governance rules tighten and local infrastructure scales.
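For teams scoping such a pilot, the sketch below assumes a locally hosted model behind an OpenAI-compatible endpoint (the style exposed by servers such as vLLM or Ollama); the URL and model name are placeholders for whatever your in-region deployment provides.

```python
# Sketch: calling a locally hosted model over an OpenAI-compatible HTTP API,
# so sensitive prompts never leave your own infrastructure. The endpoint URL
# and model name are placeholders, not a specific product's defaults.

import requests

LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed local server

def local_completion(prompt: str, model: str = "local-llm") -> str:
    response = requests.post(
        LOCAL_ENDPOINT,
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(local_completion("Summarise this policy document in three bullet points."))
```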
These data points are early signals of a fundamental shift in how humans learn, work, and make decisions. This pattern raises the question that's been building throughout this series: what happens when an entire generation grows up thinking alongside AI before they enter a job market potentially transformed by it?
Deep Dive
Kids, Copilots, and the Roman Question
Roman Yampolskiy argues that by 2027–2030, automation could wipe out most current jobs, leaving a narrow band of human-preferred roles. The World Economic Forum’s Future of Jobs Report, by contrast, projects 92 million jobs displaced by 2030, with 170 million new ones emerging. Whether we take that as a central case or a stress test, this week’s usage data fuels the debate: people already trust AI enough to delegate whole tasks, and students are already learning with it. The risk isn’t just that “automation kills jobs.” It’s that education systems fail to teach judgment to the generation who will compete with their own tools.
Three inconvenient truths emerge from the data:
Delegation is normalising. Anthropic’s increase in directive use signals a shift: people are increasingly asking AI not just for advice but to complete tasks outright. As AI executes work with less oversight, mistakes can go unnoticed, raising the stakes for unchecked errors.
Non-work use compounds skills quietly. Most AI value today comes from everyday personal use (planning, writing, decision support) where no manager tracks skill growth. Future workers will arrive with varying levels of “AI fluency” shaped by their home and school.
Teens require special attention. Because AI chats can contain deeply personal information, they require protections similar to medical privacy. Age-sensitive user experiences, strict data handling for minors, and responsive human intervention are no longer optional; they’re essential.
What should schools teach?
Not just how to use prompts but a deeper, critical AI literacy: teach epistemic hygiene (how to evaluate claims through evidence and counterfactual thinking); embed source-aware workflows that require retrieving information before reasoning and citing as you go; teach human override skills, knowing when to stop, question, or intervene; and nurture an understanding of economic complementarity, i.e., how to effectively design tasks for AI agents and audit their outputs. UNESCO and OECD frameworks provide a robust, internationally informed blueprint ready to implement immediately.
For executives:
The best defense against AI risks highlighted by thinkers like Roman Yampolskiy is not slogans about "responsible AI" but governed delegation. This means building instrumented pipelines where every AI delegation is tracked—who delegated, to which model, under what context, and with what success criteria. This governance must extend beyond the organisation into partner schools and apprenticeships to ensure accountability, transparency, and teachability of AI’s role in work and learning.
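As a hedged sketch of what such instrumentation could record, the example below captures who delegated, to which model, under what context, and against which success criteria; the field names are illustrative rather than any standard.

```python
# Sketch: an audit record written for every AI delegation, capturing who
# delegated, to which model, under what context, and against which success
# criteria. Field names are illustrative.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DelegationRecord:
    delegator: str                  # person or team handing off the task
    model: str                      # model or agent that received it
    task: str                       # short description of the delegated work
    context: str                    # data sources and constraints supplied
    success_criteria: list[str]     # what "done" means, checked on review
    outcome: str = "pending"        # pending | accepted | rejected
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DelegationRecord(
    delegator="comms-team",
    model="assistant-v1",
    task="Draft the quarterly update for partner schools",
    context="Approved figures only; no forward-looking claims",
    success_criteria=["all numbers match the source sheet", "reviewed by a human editor"],
)
print(json.dumps(asdict(record), indent=2))
```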
Next Steps
What to do now?
Schools & Ministries: Adopt UNESCO/OECD AI literacy frameworks; implement teen-safe policies; redesign curricula to teach critical AI judgment and safety.
Enterprises: Establish clear acceptance criteria for AI-delegated tasks; build portals for AI pattern reuse; partner with schools for AI skill pathways.
Parents: Set clear household AI use rules; apply parental controls; discuss escalation protocols.
Executives: Track task delegation telemetry; define judgment-heavy roles; plan sovereign compute for compliance.
That’s it for this week.
As AI continues to reshape every corner of work and education, the questions we ask, and how we answer them, will define the next decade. Whether it’s preparing our kids to thrive alongside their AI copilots or equipping organisations with responsible, governed AI practices, it’s time for clarity and action.
Stay curious, stay informed, and keep pushing the conversation forward.
Until next week, thanks for reading, and let’s navigate this evolving AI landscape together.