In partnership with

Why AI Isn’t Replacing Affiliate Marketing After All

“AI will make affiliate marketing irrelevant.”

Our new research shows the opposite.

Levanta surveyed 1,000 US consumers to understand how AI is influencing the buying journey. The findings reveal a clear pattern: shoppers use AI tools to explore options, but they continue to rely on human-driven content before making a purchase.

Here is what the data shows:

  • Less than 10% of shoppers click AI-recommended links

  • Nearly 87% discover products on social platforms or blogs before purchasing on marketplaces

  • Review sites rank higher in trust than AI assistants

If 2025 was the year AI became an operational reality, then 2026 is the year organizations will be judged on how well they run it. The honeymoon phase of "look what it can do" is over. We are now in the "how do we keep it from breaking at 3 AM?" phase. This first edition of the year is a hard pivot: away from retrospectives and toward the mechanics of scale.

Part I — The January AI Playbook (The 30-Day Detox)

1) The 2025 Audit: Freeze and Prune

Breadth kills maturity. Most organizations entered the New Year bloated with half-baked POCs.

  • The Action: Pick two workflows to move to production grade. Freeze every other experiment.

  • The Filter: If you cannot describe the specific failure mode of a workflow and its associated cost to the business, you aren't ready to automate it.

  • Good candidates: Support triage, CRM hygiene, or security alert enrichment.

2) Implement "Machine RBAC"

Stop treating agents like digital assistants; treat them like service accounts with Financial Power of Attorney.

  • Identity: Who/what is acting?

  • Permissions: What can it change?

  • Spending Limits: If an agent can call tools, it can spend money.

  • January Test: Can you audit why an agent made a specific API call that cost $5.00? If not, revoke its tool access until you can.
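The identity, permission, spending-limit, and audit requirements above can be sketched in a few lines. This is a minimal illustration, not a production design; the `AgentIdentity` class, its fields, and the budget numbers are all hypothetical.

```python
import time
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """A service-account-style identity for an autonomous agent (illustrative schema)."""
    agent_id: str          # Identity: who/what is acting
    allowed_tools: set     # Permissions: what it can change
    spend_limit_usd: float # Spending limit: tool access means it can spend money
    spent_usd: float = 0.0
    audit_log: list = field(default_factory=list)

    def call_tool(self, tool: str, cost_usd: float, reason: str):
        # Permission check: deny any tool outside the agent's declared scope.
        if tool not in self.allowed_tools:
            raise PermissionError(f"{self.agent_id} is not permitted to call {tool}")
        # Hard budget cap: refuse the call rather than exceed the limit.
        if self.spent_usd + cost_usd > self.spend_limit_usd:
            raise RuntimeError(f"{self.agent_id} would exceed its ${self.spend_limit_usd} budget")
        self.spent_usd += cost_usd
        # Audit trail: record *why*, so a $5.00 API call can be explained later.
        self.audit_log.append({"ts": time.time(), "tool": tool,
                               "cost": cost_usd, "reason": reason})

agent = AgentIdentity("triage-bot", allowed_tools={"search_tickets"}, spend_limit_usd=10.0)
agent.call_tool("search_tickets", 0.02, reason="enrich ticket #123 with similar cases")
```

The point of the design is that the January test falls out for free: the `reason` string in the audit log is exactly the answer to "why did this call happen?"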

3) Install the "Human-to-AI" Protocol

The biggest failure point in 2026 will be ambiguous intent.

  • The Rule: Stop letting your systems guess.

  • The Fix: If a user prompt is ambiguous, the system must be hard-coded to force a clarification step rather than making a "high-confidence" assumption. It is better to be a helpful nudge than a confident error.
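A hard-coded clarification gate can be as simple as a threshold check in the routing layer. This is a sketch: `confidence` is assumed to come from whatever intent classifier you already run, and the 0.9 threshold is illustrative, not a recommendation.

```python
def route_request(prompt: str, confidence: float, threshold: float = 0.9) -> dict:
    """Force a clarification step instead of acting on a low-confidence guess."""
    if confidence < threshold:
        # The helpful nudge: ask the user rather than assume their intent.
        return {"action": "clarify",
                "message": "I want to make sure I understood — can you confirm what you meant?"}
    # The confident path: only taken when the classifier is above threshold.
    return {"action": "execute", "message": "Proceeding with the requested task."}

# Usage sketch: an ambiguous prompt scores low and triggers the clarification step.
result = route_request("cancel it", confidence=0.62)
```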

4) No Eval → No Release

Benchmarks are for marketing; evals are for engineering. In January, every system should have:

  • A "Golden Dataset" (50+ curated examples of "perfect" outcomes).

  • Regression tests from last year’s failures.

  • The Gate: Treat evals like CI/CD. If the system's performance on your golden set drops by even 1%, the deployment is automatically blocked.
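Treated as CI/CD, the gate is just a pass/fail check over the golden set. A minimal sketch, assuming the golden set is a list of `(input, expected)` pairs and `model_fn` is the system under test; all names and scores here are illustrative.

```python
def eval_gate(model_fn, golden_set, baseline_score: float, max_drop: float = 0.01) -> bool:
    """CI-style release gate: block if golden-set accuracy drops more than max_drop."""
    correct = sum(1 for x, expected in golden_set if model_fn(x) == expected)
    score = correct / len(golden_set)
    passed = score >= baseline_score - max_drop
    # In a real pipeline this result would fail the deploy job, not just print.
    print(f"golden-set score: {score:.3f} (baseline {baseline_score:.3f}) -> "
          f"{'RELEASE' if passed else 'BLOCKED'}")
    return passed

# Usage sketch: regression cases from last year's failures go straight into the set.
golden = [("2+2", "4"), ("capital of France", "Paris")]
answers = {"2+2": "4", "capital of France": "Paris"}
released = eval_gate(lambda x: answers[x], golden, baseline_score=1.0)
```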

5) Shift to "Cost per Outcome"

In 2026, "cost per token" is a vanity metric.

  • Track this: What is the total cost (including retries and high-reasoning model calls) to successfully resolve a ticket or prepare a deal?

  • The Guardrail: If a system can fail expensively (e.g., an agent getting stuck in a reasoning loop), it eventually will. Set hard caps on "retry amplification" now.
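A hard cap on retry amplification can be sketched as a loop with two budgets: attempts and dollars. `task_fn` is a hypothetical stand-in for a ticket-resolution step that returns `(success, cost_usd)`; the limits are illustrative.

```python
def resolve_with_budget(task_fn, max_retries: int = 3, max_cost_usd: float = 0.50) -> dict:
    """Stop retrying when either the attempt budget or the dollar budget runs out."""
    total_cost = 0.0
    for attempt in range(1, max_retries + 1):
        success, cost = task_fn(attempt)
        total_cost += cost
        if success:
            # Cost per outcome: the full spend to reach resolution, retries included.
            return {"resolved": True, "attempts": attempt, "cost_per_outcome": total_cost}
        if total_cost >= max_cost_usd:
            break  # hard cap: an expensive failure loop is stopped, not retried forever
    return {"resolved": False, "attempts": attempt, "cost_per_outcome": total_cost}

# Usage sketch: a flaky task that succeeds on the second attempt.
flaky = lambda attempt: (attempt == 2, 0.10)
outcome = resolve_with_budget(flaky)
```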

Part II — Six Bets for 2026 (Signals & Red Flags)

Bet #1 — Agent identity becomes a first-class control plane

Why: Agents acting without attribution will become a massive governance liability.

  • Leading Indicator: Platforms shipping native agent identity and scoped permissions.

  • 🚩 Red Flag: Teams still using "shared" API keys or generic "AI User" seats for multiple autonomous workflows.

Bet #2 — Evals become standard procurement criteria

Why: Enterprises will stop buying "black box" promises and start demanding proof.

  • Leading Indicator: RFPs asking for task-level eval results and failure mode documentation.

  • 🚩 Red Flag: A vendor claiming "99% accuracy" but refusing to share the dataset or methodology used to reach it.

Bet #3 — Sovereignty shifts to Architecture Reviews

Why: Geopolitics and outages make "legal assurances" irrelevant compared to technical dependency.

  • Leading Indicator: Infrastructure teams mapping failover paths to secondary model providers.

  • 🚩 Red Flag: Critical business processes that rely on a single API with no "warm standby" or local fallback.
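A warm-standby failover path can be sketched as an ordered chain of providers tried in sequence. The `primary`/`standby` callables below are hypothetical stand-ins for real provider clients, not any particular SDK.

```python
def call_with_failover(prompt: str, providers: list) -> str:
    """Try each (name, callable) provider in order; raise only if all fail."""
    errors = []
    for name, provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # outage, timeout, rate limit, ...
            errors.append((name, repr(exc)))
    raise RuntimeError(f"all providers failed: {errors}")

# Usage sketch: the primary is down, so the standby answers.
def primary(p): raise TimeoutError("primary provider down")
def standby(p): return f"standby answer to: {p}"
reply = call_with_failover("hello", [("primary", primary), ("standby", standby)])
```

The design choice worth noting: the failover map lives in infrastructure code you control, not in a legal assurance from a single vendor.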

Bet #4 — Prompt Injection is treated as standard AppSec

Why: Untrusted input controlling code execution is a known vulnerability, not a "quirk."

  • Leading Indicator: Prompt injection showing up in formal pen tests and threat models.

  • 🚩 Red Flag: Relying solely on "system prompts" as a security layer.

Bet #5 — Inference Economics reaches the Boardroom

Why: Agentic systems scale usage non-linearly. A small bug can lead to a massive bill.

  • Leading Indicator: CFOs tracking AI spend against specific business unit KPIs.

  • 🚩 Red Flag: Engineering leads who can’t explain the ROI of moving from a 7B model to a 70B model for a specific task.

Bet #6 — The winners are the "Calmest Systems"

Why: Brilliance is a feature; reliability is the product. Users trust systems that are predictable, not just smart.

  • Leading Indicator: Shift in focus from "hallucination rates" to "system uptime and recovery speed."

  • 🚩 Red Flag: Systems that prioritize "creative" or "clever" responses over consistent, structured output.

The 2026 Anti-Goal

Do not build a "Center of Excellence." Build a "Center of Operations." 2025 was for talking about what is excellent. 2026 is for making sure the systems actually work when no one is watching.

Start small. Design for failure. Operate for trust. That’s the playbook.

— João

That’s it for this week.

Thanks for reading throughout the year. We’ll start 2026 with a clear playbook and fewer assumptions.

Enjoy the holidays.

Until next week, thanks for reading OnAbout.AI.
