Stop Training People on AI. Start Training Them on Their Jobs
So let me guess. Your organization did an AI 101 training.
Maybe it was an hour. Maybe there were sandwiches. Someone showed a few impressive demos - a website generated in thirty seconds, a marketing email drafted in ten. Everyone clapped. Someone in the back said "game changer". The facilitator left. People went back to their desks.
And then absolutely nothing changed.
If this sounds familiar, congratulations - you're in the majority. And the problem isn't that your people are slow, resistant, or "not technical enough". The problem is that you treated AI like a software rollout from 2012 and expected behaviour change to come out the other end.
It didn't. It won't. And here's why.
You're Teaching Features When You Should Be Teaching Language
Traditional tech training assumes you're learning buttons and steps. Click here. Then here. Then export. Done forever.
AI doesn't work that way. AI is a language. It requires back-and-forth. It requires iteration. It requires understanding context, judging output, knowing when to push back and when to accept. None of that is learnable in a single session, no matter how good the facilitator is.
Jennifer Hufnagel, an AI educator and consultant I interviewed for my upcoming book “Swan Dive Backwards”, has spent three years learning this the hard way. She started where many trainers start - doing one-off ChatGPT workshops. They were fine. People liked them. And then adoption waned the moment the session ended.
Her conclusion, after years of iterating and training over 4,000 people: AI fluency isn't attained in a one-hour session or even a three-hour workshop. This technology changes constantly. It takes practice and play and going down rabbit holes. There are billions of use cases, and not one person can learn it alone.
That last part is worth spotlighting. Not one person can learn it alone. Which means the model of "bring in an expert, do a one-off session, check the box" is structurally incapable of producing what you need.
The Two Groups You Don't Want
Here's what one-off training produces. Two groups.
Group A leaves overconfident. They start shipping AI output without editing it. The inputs are lazy, so the outputs are generic, and within a few weeks the novelty wears off. Interest wanes. They go back to doing things the old way - just with a vague sense that AI didn't work for them.
Group B leaves intimidated. The demos were impressive but abstract. They have no idea how to apply any of it to their actual Tuesday. So they file AI under "things I should probably learn eventually" and quietly avoid it.
Neither of these groups is what you need. What you need is a third thing entirely: people with clear baseline literacy, safe practices, and workflows that connect to their real work.
And that brings us to the shift.
Train for the Job, Not the Tool
The organizations that are actually getting traction with AI adoption have made one crucial move: they stopped training people on AI and started training people on their jobs - with AI.
The difference sounds subtle. It isn't.
"How to use ChatGPT" is a tool training. It produces general knowledge and no specific behaviour change.
"How to write a sharper discovery call prep in half the time" is an AI-forward job training. It produces immediate, repeatable value inside someone's actual week.
For salespeople, that might mean using AI to prepare better questions before a call, or to turn messy meeting notes into clean follow-ups. For HR, it might look like drafting candidate communications or policy documents. For marketing, it could be briefs, revisions, research, or positioning work.
That's where adoption actually happens - inside people's days, not inside a training room.
Shona Boyd, a product manager I spoke with on the podcast, was blunt about the floor that needs to exist first: don't just hand out logins to tools with no written policy and no clarity on what's acceptable to put into those systems. Access without guidelines isn't enablement. It's a risk with a login screen.
The Gym Analogy (Because It's Annoyingly Accurate)
One-off AI training fails for the same reason that a single gym session doesn't make you fit.
Day one, you're motivated. Day two, you're sore and wondering why you did this. Day three, you sleep in.
The only thing that changes the outcome is a system. A specific time you go. An accountability partner. A personal trainer. A plan that builds gradually. You don't learn to deadlift in a lunch hour, and you don't learn to integrate AI into a ten-step client onboarding workflow in one, either.
This is change management dressed in a tech costume. And most organizations are skipping the change management entirely. They're jumping straight from "we bought licenses" to "we are AI enabled". As if the act of purchasing access is the same as building capability. It isn't. That's like buying a gym membership and telling your doctor you exercise.
The Five-Step Loop That Actually Sticks
So what does good AI training look like in practice? Not a curriculum. A loop.
Here's the simplest version I know. Do it every two to four weeks, with real work, in real time.
Step one: Bring one real task. Not a hypothetical. Not "let's pretend we're writing a proposal." An actual task from your actual week. This matters more than anything else.
Step two: Prompt once. Get the ugly first draft. Accept that it will be ugly. That's the point.
Step three: Critique it like an editor. What's wrong? What's missing? What's risky? What doesn't sound like you or your team? This is the skill most people skip, and it's the most important one.
Step four: Re-prompt. Force the improvement. Tell the AI what you didn't like. Give it more context. Push it harder. This is where people learn that AI is a conversation, not a vending machine.
Step five: Share one before-and-after with your team. This is how knowledge spreads. This is how people see what's possible without having to figure it all out alone.
Bring. Prompt. Critique. Re-prompt. Share. That's the loop. Five steps. Thirty minutes. Not a one-time event - a rhythm.
Jennifer Hufnagel calls it "practice and play". I call it building the muscle. Either way, the principle is the same: this isn't something you install. It's something you practice.
Communities of Practice (And Why Your Champions Are Burning Out)
The loop works. But it works better inside a container.
Jennifer described something she's built in her own community - an in-person AI meetup she started because she was literally the only woman on Vancouver Island talking about this publicly. She called it AI Chats and Bites. Monthly lunches where people show up, share what they're working on, help each other through the stuck points.
Inside organizations, this looks like a community of practice. Thirty minutes a week of open office hours. A shared channel for AI wins and questions. Two champions per team - not because they're experts, but because they're curious catalysts willing to learn out loud.
But here's the warning that comes with the champion model, and I cannot stress this enough: if you don't adjust expectations, your best champions will burn out and go back into hiding.
I see this pattern often. An organization identifies its keenest AI people. Gives them the unofficial champion role. And then quietly expects them to be the internal help desk, the trainer, the policy advisor, and the evangelist - all while still doing their actual job. No adjusted workload. No compensation. No recognition beyond a vague "thanks for being so helpful".
That's not a champion program. That's exploitation with a friendly title.
If your champions are carrying your AI adoption, compensate them. Dedicate 10 to 20 percent of their time to it formally. Update their job descriptions. Give them access to leadership so their insights inform strategy. And for the love of everything - do not let them become unpaid internal consultants.
Champions are powerful. But only when they're supported, connected, and recognized. Otherwise, you've just created one more way for your most motivated people to burn out.
The Equity Question Nobody's Asking
One more thing, and this one matters.
When you look at who your AI champions are, what do you see?
Often it's the loudest voices. The people closest to power. The usual suspects. But there may be quiet experimenters building brilliant systems who don't feel safe enough to talk about it. Introverts who've figured out workflows that would transform a department - if anyone knew about them.
Maybe it's women. Women are adopting AI at 25 percent lower rates than men. Not because they're less capable, but because they're learning it at 11pm after the kids are asleep, with no strategic support and a nagging worry about whether using AI makes them an imposter.
Or maybe it's people of color who haven't historically felt safe standing out because the culture doesn't support it.
If your champion identification process is whoever volunteers loudly, your champion pool is probably less diverse than you think. And that means your AI adoption is being shaped by a narrow slice of your organization while the rest watch, wait, and quietly fall further behind.
This is the equity conversation that needs to be woven into every AI training strategy. Not as an afterthought. As a design principle.
****
Susan Diaz is the host of AI Literacy for Entrepreneurs and the author of the forthcoming book 'Swan Dive Backwards'. She runs AI Power Circle, an AI implementation mastermind for founder-led businesses ready to stop producing more and start producing effectively. If that's where you are, find Susan Diaz on LinkedIn to see if this is a fit.
PS: Want 10 ready-to-run prompts to uncover audience insight, sharpen your offers, and create smarter marketing content? Get our AI deep research prompt pack.