Provenance Is the Product Now: Google’s ‘Nano-Banana’, OpenAI’s Crisis Plan, and GPAI Day One in the EU
Ship provenance (C2PA), system cards, and safety UX before you scale AI in Europe.
TL;DR
Google’s Gemini 2.5 Flash Image (aka “nano-banana”) makes multi-image blends and consistent likeness edits routine—great for brand avatars and catalogs, but it pushes provenance from “nice-to-have” to core UX.
The EU’s GPAI Code of Practice landed (Jul 10) and GPAI obligations apply from Aug 2, 2025; enforcement escalates through Aug 2, 2026. Treat system cards and documentation as product components, not paperwork.
OpenAI is adding parental controls and crisis-response features and acknowledged safety can degrade in long chats—design escalation paths into enterprise chat.
Compute is booming but changing: NVIDIA's Q2 FY26 came in at $46.7B revenue ($41.1B of it data center); inference OPEX and energy should dominate your 2026 TCO calculus.
Field Notes
(EU lens, what changed)
1) Google’s image editor crosses a line—toward controllable identity.
Gemini 2.5 Flash Image (the “nano-banana” everyone’s riffing on) now does targeted edits, multi-image fusion, and—critically—character/style consistency. That consistency is the unlock for brand avatars, product try-ons, and templated campaigns. It also amplifies misattribution risk, which makes provenance labeling a first-class feature.
2) GPAI obligations are live.
The Commission’s GPAI Code of Practice is out; rules for GPAI apply from Aug 2, 2025, with the AI Office gaining full enforcement powers from Aug 2, 2026. Use the Code’s model-documentation form to standardize your system cards and cut audit friction.
3) Safety UX is table-stakes.
OpenAI’s update commits to parental controls, emergency contacts, and better crisis handling—plus a candid note that safety can drift in long sessions. If your app runs prolonged chats (agents, support), build time-based guardrails and escalation into the product.
4) Compute supercycle continues—watch the OPEX.
NVIDIA posted $46.7B in revenue, $41.1B of it from the data-center segment. Great for capacity, but your real constraint is inference OPEX and energy/cooling over the life of the contract. Lock in TCO before making multi-year commitments.
Deep Dive
If edits look real, provenance becomes the UX
The “nano-banana” moment isn’t about prettier diffusion; it’s about controllability. Gemini’s likeness consistency means a face, logo, or style survives across many edits and composites. For EU enterprises, that’s rocket fuel for brand avatars, product imagery, and synthetic data—but it also narrows the visual gap between captured and composed. When end-users can’t tell how something was made, the UI must. That’s provenance.
At the same time, the regulatory floor just rose. The GPAI Code of Practice and AI Act push providers and deployers toward documentation, transparency, and safety. You don’t need to become a policy shop; you need to reframe compliance as shippable UX + engineering work. Build three things: system cards, content credentials, and safety flows.
1) System cards as product
The Code ships a template-like model documentation form; adapt it to your internal and vendor models. Treat it like API docs for risk: purpose, training/eval notes, limits, data handling, and incident contacts. Publishing or keeping these on file accelerates procurement and makes audits predictable. Map each model to a business capability and a “definition of safe.”
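Treating a system card as a typed record rather than a document makes the "definition of safe" checkable. A minimal sketch, assuming illustrative field names (these are not the official GPAI model-documentation form fields, and the model/contact values are placeholders):

```python
from dataclasses import dataclass, field

@dataclass
class SystemCard:
    """Illustrative system-card record; field names are assumptions,
    not the official GPAI model-documentation form."""
    model_name: str
    business_capability: str
    intended_purpose: str
    known_limits: list = field(default_factory=list)
    data_handling: str = ""
    eval_notes: str = ""
    incident_contact: str = ""

    def definition_of_safe(self) -> bool:
        # A card is audit-ready only when the basics are filled in.
        required = [self.intended_purpose, self.data_handling, self.incident_contact]
        return all(required) and bool(self.known_limits)

card = SystemCard(
    model_name="gemini-2.5-flash-image",
    business_capability="brand avatar generation",
    intended_purpose="marketing imagery with consistent likeness",
    known_limits=["likeness may drift on fine typography"],
    data_handling="no customer PII in prompts",
    eval_notes="internal red-team pass, Aug 2025",
    incident_contact="ai-incidents@example.com",
)
print(card.definition_of_safe())  # True once required fields are present
```

Mapping each card to one business capability, as above, keeps the audit conversation concrete: a missing field blocks a specific product, not "compliance" in the abstract.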
2) Content credentials (C2PA) as trust rails
Adopt C2PA/Content Credentials so assets carry verifiable provenance across tools and platforms. This is the only broadly supported, open standard with serious momentum (Adobe/Google/Microsoft/Meta/OpenAI on the steering committee; integration across editors, some cameras, and growing parts of the gen-AI stack). It’s not a silver bullet—labels can be stripped or ignored—but it’s good enough to raise trust and meet emerging disclosure expectations. Add plain-language labels in your UI (“Generated with Gemini; details in Content Credentials”).
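The plain-language label can be derived mechanically from the credential. A sketch, assuming a simplified manifest summary shape (real integrations would parse actual C2PA manifests via a C2PA library; the keys below are illustrative, not the C2PA schema):

```python
def provenance_label(manifest=None) -> str:
    """Turn a C2PA-style manifest summary into a plain-language UI label.
    The dict shape here is an illustrative assumption, not the C2PA schema."""
    if manifest is None:
        # Labels can be stripped in transit, so absence is a state to surface,
        # not an error.
        return "No Content Credentials found"
    tool = manifest.get("generator", "an unknown tool")
    if manifest.get("ai_generated"):
        return f"Generated with {tool}; details in Content Credentials"
    if manifest.get("edited"):
        return f"Edited with {tool}; details in Content Credentials"
    return "Captured; Content Credentials attached"

print(provenance_label({"generator": "Gemini 2.5 Flash Image",
                        "ai_generated": True}))
# Generated with Gemini 2.5 Flash Image; details in Content Credentials
```

The key design choice is the explicit "not found" state: because credentials can be stripped, the UI should distinguish "verified capture," "verified generation," and "unknown," rather than treating unlabeled as captured.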
3) Safety UX for long sessions
OpenAI’s own note that guardrails can drift over time in a chat is a design requirement: instrument session length and escalate in sensitive domains. Add in-product help panels (hotlines, human handoff), parent/guardian controls if minors might use it, and rate-limit risky patterns. Enterprise chat and assistant products should log the when (not just the what) of prompts/outputs to monitor drift.
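Instrumenting session length and escalation can be as simple as a per-session guard checked on every turn. A minimal sketch; the thresholds, term list, and action names are assumptions for illustration, not OpenAI's implementation:

```python
import time

class SessionGuard:
    """Illustrative long-session guardrail: thresholds and escalation
    policy are placeholder assumptions."""
    def __init__(self, max_turns=50, max_minutes=60,
                 sensitive_terms=("self-harm", "suicide")):
        self.start = time.monotonic()
        self.turns = 0
        self.max_turns = max_turns
        self.max_seconds = max_minutes * 60
        self.sensitive_terms = sensitive_terms

    def check(self, user_message: str) -> str:
        """Return an action for this turn: 'continue', 'escalate', or 'refresh'."""
        self.turns += 1
        text = user_message.lower()
        if any(term in text for term in self.sensitive_terms):
            return "escalate"  # route to help panel / human handoff
        too_long = (self.turns > self.max_turns
                    or time.monotonic() - self.start > self.max_seconds)
        if too_long:
            return "refresh"   # re-assert system guardrails, summarize, reset context
        return "continue"

guard = SessionGuard(max_turns=2)
print(guard.check("hello"))            # continue
print(guard.check("another question")) # continue
print(guard.check("one more"))         # refresh: turn limit exceeded
```

Logging each `check` result with a timestamp gives you exactly the "when, not just the what" record the text calls for, and makes drift visible in monitoring rather than in incident reviews.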
What changes operationally
You’ll ship provenance, documentation, and safety before adding fancy prompts.
Your legal/privacy teams stop blocking because evidence exists (system cards, credentials, incident playbooks).
Your brand team can run avatar-driven campaigns with audit-friendly labels.
Your TCO model emphasizes inference (requests × tokens × kWh) and cooling—and those costs are now part of product decisions, not procurement afterthoughts.
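The "requests × tokens × kWh" framing reduces to a back-of-envelope function. A sketch in which every number is a placeholder assumption (energy per token is heavily model- and hardware-dependent, and the cooling overhead is a guess, not a measured PUE):

```python
def annual_inference_opex(requests_per_day, tokens_per_request,
                          kwh_per_million_tokens, eur_per_kwh,
                          cooling_overhead=0.4):
    """Back-of-envelope energy+cooling OPEX for inference.
    All inputs are placeholder assumptions for illustration."""
    tokens_per_year = requests_per_day * tokens_per_request * 365
    energy_kwh = tokens_per_year / 1_000_000 * kwh_per_million_tokens
    return energy_kwh * eur_per_kwh * (1 + cooling_overhead)

cost = annual_inference_opex(
    requests_per_day=100_000,
    tokens_per_request=1_500,
    kwh_per_million_tokens=0.5,  # placeholder; depends on model and hardware
    eur_per_kwh=0.25,            # placeholder EU tariff
)
print(f"{cost:,.0f} EUR/year in energy and cooling")
```

Even at placeholder rates, the point stands: the dominant variables are usage-driven (requests, tokens) and energy-driven (kWh, tariff, cooling), so they belong in product decisions, not just procurement spreadsheets.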
Bottom line: provenance isn’t a sidecar—it’s the steering wheel.