
Your Board Just Made AI Adoption a KPI. Now What?

By
Oren Greenberg
February 23, 2026

"One of our big pushes is AI adoption. We are all being measured on it. So using an AI built tool vs a [insert large well known brand] is 💯 in the bullseye for us."

That's from a revenue leader at a PE-backed B2B company — not some AI startup, just a regular business with quarterly targets and a board that's suddenly very interested in how the team uses AI.

If you're a CRO, CMO, or CEO right now, you probably recognise yourself in that quote. AI adoption has quietly gone from "innovation initiative" to a line item on the performance review. The board isn't asking if anymore — they're asking how much and how fast (and they want numbers, not vibes).

The Mandate Is Real

Shopify made it explicit: reflexive AI usage is a baseline expectation. Prove AI can't do the job before you're allowed to hire. They even built an internal leaderboard tracking token spend (because of course they did).

Shopify isn't an outlier — they're just the ones who said it out loud. 82% of enterprise leaders now use generative AI weekly. 46% daily. This isn't early adopter territory anymore.

But 95% of enterprise AI pilots deliver no measurable ROI.

Everyone's adopting. Almost nobody's getting results. That gap is where everything interesting is happening.

The Mandate Gap

The Buying Landscape

The mandate isn't just changing how folk operate internally. It's rewiring procurement.

AI-native startups now capture nearly $2 for every $1 earned by incumbents. Klarna publicly ditched Salesforce and Workday for an AI-first stack. Copilot's NPS dropped 48 points after developers tried Cursor — folk using the incumbent, trying the alternative, and deciding they're never going back (which should terrify every legacy vendor reading this).

That quote I opened with? That's the buying behaviour of someone for whom choosing the AI-native option is itself the performance metric. The tool selection becomes evidence of compliance.

Your customers are increasingly making decisions through this same lens, whether you've clocked it or not.

The Capability Gap

Most revenue leaders I speak to are stuck in one of two spots.

Either they've bought some AI tools and are signalling adoption whilst nothing's fundamentally changed (the corporate equivalent of buying running shoes and calling yourself a marathon runner). Or they're personally finding AI useful but can't drag the rest of the organisation forward.

The problem is heterogeneity. Teams have wildly different starting points. You've got one person who's been deep in AI for a year building automations. You've got another who tried ChatGPT once and reckons it's "alright." You've got a third who's quietly hoping the whole thing blows over.

And most folk don't have enough autonomy in their roles to dedicate real time to upskilling (which is the bit that never gets discussed). "Go figure out AI" sits on the to-do list somewhere between "update the CRM" and "review Q3 forecasts." It sinks.

The interesting nuance: purchasing from specialised vendors succeeds 67% of the time, versus roughly a third for internal builds. External structure works. Hoping your team figures it out on their own... doesn't.

What AI Fluency Looks Like

Most folk reckon "AI adoption" means "use AI tools." Technically true, practically useless (a bit like saying "digital transformation" means "use computers").

I run structured training programmes for B2B revenue teams, and the maturity model I use has eight stages. Each builds on the last, and honestly most teams are further behind than they reckon.

Stage 1: Prompt Engineering Foundations. Structured prompting, chain-of-thought reasoning, consistent outputs. Hands-on with Gemini Pro. Most folk are stuck here without knowing it.

Stage 2: Custom GPTs, Gems & NotebookLM. Persistent AI assistants with brand voice and channel context. NotebookLM for source-grounded research.

Stage 3: AI Image & Video Creation. On-brand visuals and short-form video with ElevenLabs and Nanobanana. Style guides, consistency techniques, social media asset workflows.

Stage 4: Workflow Automation — Foundations. Core automation concepts: triggers, actions, integrations. Building your first multi-step workflows in Gumloop and n8n (and where most people skip ahead and wonder why everything breaks).

Stage 5: Workflow Automation — Content & Social Pipelines. Content production and distribution automations. Social monitoring, content repurposing pipelines, scheduled posting workflows.

Stage 6: Workflow Automation — Advanced & Cross-Functional. Conditional logic, API integrations, multi-tool chains. Cross-functional workflows connecting marketing, ops, and data.

Stage 7: Cursor & Claude Code — Foundations. Setting up Cursor, writing natural language instructions to build functional tools. Internal dashboards, trackers, calculators.

Stage 8: Cursor & Claude Code — Building Real Tools. Iterating on builds, debugging with AI, working with data inputs. When to vibe-code vs. use existing tools.

Not everyone needs to reach Stage 8. But everyone needs to be progressing somewhere on this curve — and the real measure of AI adoption isn't how many licences you've activated, it's where your people actually sit.

Most companies I see are stuck between Stage 1 and 2 whilst telling the board they're "adopting AI." Mad, right?

AI Fluency Maturity Model

Mandate Without Infrastructure

The pattern is boringly predictable at this point. Company issues the AI mandate, buys some tools, runs a lunch-and-learn, sends round a few links, and expects folk to figure it out.

Six months later a handful of self-motivated people have made real progress. Everyone else is exactly where they started. Leadership's frustrated. The KPI isn't moving. Same old story...

The problem is structural. The companies that actually move up the maturity curve treat AI fluency like infrastructure, not an afterthought. Four things separate them:

Protected time. Not "find time in your schedule." Actual blocked time for building AI skills. If it's not on the calendar, it doesn't exist. Telling folk to upskill in their spare time is telling them not to upskill.

Mandatory assignments tied to real work. Not theoretical exercises. Assignments that take a real workflow from their actual job and apply AI to it. The learning has to be immediately practical or it won't stick.

External accountability. This is the one most companies miss. Peer accountability dissolves when everyone's equally busy and equally unsure (spoiler: they are). You need someone external whose actual job is to hold the team accountable, review outputs, and push folk past sticking points. Think gym membership versus personal trainer — one of those actually works.

Leadership visibility. If the CEO never looks at what the team's building with AI, the team gets the message: this doesn't actually matter. Review the automations people build. Celebrate the wins publicly. Show folk it actually matters.

The Programme

That maturity model isn't just a framework — there's a structured programme behind it. Eight sessions over 16 weeks, one every fortnight, following the same arc from literacy through to building.

The 16-Week Programme

The fortnightly cadence matters more than people reckon. Weekly is too fast — folk don't have time to implement between sessions, so they fall behind and stop engaging. Monthly is too slow — momentum dies and you lose the compounding effect. Fortnightly hits the sweet spot.

Every session has a mandatory assignment tied to your actual work. Someone on the sales team might apply what they've learned to their outbound research workflow. Someone in marketing might start building a content pipeline that turns one piece into six. The session gives you the skills and the framework — the assignment is where you go away and apply it to your own problems.

The Real Question

If you're a revenue leader being measured on AI adoption right now, the question isn't whether your team should be adopting AI. That's been decided for you.

The question is whether you're building the conditions for real fluency, or hoping folk sort it out on their own.

And we all know how that tends to end...

If you want the full programme outline — session-by-session breakdown, how it maps to your team's actual workflows, pricing — drop me an email. Happy to send it over.

Article by

Oren Greenberg

A fractional CMO who specialises in turning marketing chaos into strategic success. Featured in over 110 marketing publications, including OpenView Partners, Forbes, Econsultancy, and HubSpot's blog. You can follow him on LinkedIn.
