Your Company Has Two AI Universes. Leadership Only Sees One.
Here's a scene that's playing out in offices right now. Maybe yours.
A strategist is building a deck. Or wrestling an Excel sheet into something that resembles a story. They hit a block. The kind that makes you stare at the screen and think, "We have AI. Why is this so hard?"
So they open the approved internal assistant. The safe one. The one blessed by IT and governance and good intentions. They type in their question.
The answer comes back.
And it's... fine. Safe. Generic. Missing the micro-specifics of the market, the internal politics, the constraints that actually shape decisions. It's the AI equivalent of getting directions from someone who's never been to your neighbourhood. Technically correct. Practically useless.
And that's the moment the real story begins. Because the strategist knows the answer exists elsewhere. They know a more capable tool will solve this in ten minutes. They also know they're not supposed to use it.
So they do what smart people do. They strip out the names. Remove the obvious identifiers. Take out anything that feels incriminating. Then they open a personal account - on their phone, or a personal laptop - and they get the job done.
That's how shadow AI happens. From friction.
The Two Universes
Most organizations now have two AI realities running at the same time.
There's the official universe: approved enterprise tools, visible to leadership, documented, sanctioned. And there's the unofficial universe: personal subscriptions, private devices, capable tools, where the real problems get solved quickly - and completely outside your guardrails.
Leadership sees one. Work happens in both.
If you think this isn't happening in your organization, I'd gently push back. Even companies with blanket "no AI" policies are living this reality. People aren't breaking the rules because they're reckless. They're breaking them because they're busy, intelligent, and surrounded by utility. The answer to a hard problem exists on a device in their pocket. A personal AI subscription costs about twenty-five dollars a month - the price of a fancy set of pens. Why wouldn't they?
Shona Boyd, a product manager in SaaS who's been working to make AI more approachable, put it directly when I spoke with her. She'd join meetings and some random AI note-taker would appear on the call. Everyone would look around. Whose is this? Until someone chimes in: "Oh, I was just trying something out." Shona made the distinction that matters: there's a real difference between enterprise-level security and the consumer-grade tools people purchase themselves. When everyone's buying their own subscriptions, you lose visibility, you lose control, and you lose negotiating power.
And for anyone who's been around long enough to remember - we've done this exact dance before.
The "Unauthorized Site" Era (Again)
Fifteen years ago, many organizations blocked work internet for anything beyond a few approved sites. You'd click a link and get a pop-up: UNAUTHORIZED SITE. Like you were trying to access something nefarious instead of Googling "Excel formula count unique entries with multiple conditions not working."
People worked around it. They used their phones. They posted in forums. They figured it out. They always figure it out.
We are repeating that pattern right now with AI. If someone can't get what they need inside the organization's knowledge base, they will find it. They always have. AI is simply the newest path of least resistance.
There's no reason to panic. But we do have to stop pretending bans are strategies.
The Four Hidden Costs
When experimentation stays invisible, it can't compound. A win happens in a silo. Nobody else learns. The organization never benefits. That's the quiet cost that never shows up on a line item. It's not the software fees. It's the opportunity cost - creative energy that should be building momentum but never becomes a system.
There are four specific ways this gets expensive.
The first is duplicate spend. Without visibility, different departments buy overlapping tools. Marketing pays for a content platform. Sales pays for another. HR pays for a third. Finance doesn't see the overlap because the spend is distributed. Leadership thinks the stack is lean. It isn't.
The second is brand drift. When everyone generates content in their own private AI silos, the voice dilutes. Messaging becomes inconsistent. The brand starts to sound like a generic language model. Small shifts compound. And eventually, the company feels less like itself.
The third is risk and compliance exposure. People paste sensitive data into public models. Confidential documents get uploaded. Internal context leaks. Sometimes without anyone even realizing what they've done. Helen Patterson, an HR leader who specializes in building high-performing cultures for scaling businesses, pointed out something I see often: when organizations put hard guardrails around AI - like "you can only use Copilot, nothing else" - people just use their phones. A blanket "no" doesn't eliminate the behaviour. It just makes it invisible.
The fourth is wasted efficiency. This one is the most painful. Some of the most valuable AI workflows in your organization live inside people's heads right now. A marketer who's cut planning time in half. A customer service rep who turns prep notes into action in minutes. A manager who does performance feedback faster and better. Without visibility, those efficiencies stay trapped. They don't become shared practice. They don't compound. The organization pays for learning that never scales.
Experimentation Is Good. Invisibility Is Expensive.
I want to be clear about this, because the wrong takeaway from everything I've just described is "shut it all down."
Experimentation is a sign of a curious culture. You want people who look at new capability and ask, "How can I use this to make my work better?" That's initiative. That's innovation. That's people firing on all cylinders.
The leadership question is not "how do we stop experimentation?"
The question is: how do we make the learning visible so it compounds instead of disappearing into someone's personal ChatGPT history?
Jennifer Hufnagel, who runs AI education programs across industries, frames the whole AI journey in three stages: awareness, readiness, and adoption. Most organizations try to jump from awareness straight to adoption. They skip readiness entirely. And then they wonder why nothing sticks.
Readiness is where you figure out what's actually true. What tools exist. Where AI is already embedded (sometimes for years - resume filtering, ticket triage, CRM suggestions - without anyone calling it AI). What workflows people have quietly invented. What risks are accumulating. What guardrails exist, even informally.
That's the audit. And the audit is how you turn the unofficial universe from a liability into an asset.
From Shadow to Structured
Gazzy Amin, founder of Sales Beyond Scripts, showed me what this looks like at a small business scale. She literally asked her AI to run an audit of her own business: "Hey, I want to do a 2025 audit with you. Can you ask me a set of questions that will get you as much information as you need for us to do a deep dive on what worked, what didn't, and what I should focus on to scale?"
The AI gave her a structured set of questions across finances, content, opportunities, partnerships. Gazzy told me: "On my own, I wouldn't have come up with that."
Whether you're a team of 500 or a team of three, the question is the same: what's actually happening, and what should we do next?
For larger organizations, the shift from shadow to structured starts with a sentence. Not a policy document. A sentence, spoken out loud by leadership, that sounds something like this: "I know many of you are already using AI tools - our internal ones, and your own. I want to understand what you're doing so we can support you. Let's make this safe and strategic."
That sentence changes the room. It turns secrecy into collaboration. It tells people you're not here to shame them. You're here to build a system.
From there, you need three things. First, simple guardrails - not a forty-page compliance document nobody reads, but a one-page "rules of the road" that answers three questions: what's OK to put into public tools, what must never leave the organization, and how to ask for help.
Second, enterprise-level tools that make safe behaviour easier than unsafe behaviour. Shona Boyd's advice here is clear: pick one workhorse, set it up at the enterprise level, and connect it to your internal data so the answers are actually useful. Utility wins. Your job is to make utility safe.
Third, you need to surface the workflows that are already working. Find the people who've been quietly solving problems, and make their experiments visible so the whole organization can learn.
The Audit Is the Starting Line
If all you hand leadership is a list of problems, you'll get fear. If you hand them problems and opportunities, you'll get funding.
The audit should surface risks and leverage. Where are the quick wins? Where can AI save real time? Where are emerging best practices that should be shared? Where are deeper opportunities to redesign workflows entirely?
And here's the flywheel connection: the audit is a recurring discipline. The first one tells you where you are. The second, six months later, tells you what changed. The third tells you what's compounding and what stalled. Each one adjusts your understanding as the environment shifts. Because the environment will shift. New tools will arrive. Old tools will add features nobody asked for. Someone on your team will have quietly built something brilliant. Someone else will have quietly created a risk. The audit catches all of it.
Think of it as a regular reality check. Not a punishment. Not a compliance exercise. A moment to look up from the work and say: where are we actually?
Not where we hoped we'd be. Not where the vendor dashboard says we are. Where we actually are.
Your Move
Block one hour this week. You are not trying to finish an audit. You're trying to start one.
Choose one function - marketing, HR, sales, or operations. Write down the tools they use that likely include AI (paid and free), one workflow they repeat weekly, where AI shows up in that workflow today, one risk you suspect exists, and one opportunity you can test in the next 30 days.
One page. One slice of the picture.
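If it helps to make the exercise concrete, that one-page slice can be sketched as a simple structured record. Everything below - the field names, the tools, the example values - is a hypothetical illustration of one way to capture the five questions above, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class AuditSlice:
    """One function's slice of the AI audit (hypothetical structure)."""
    function: str                    # the one function you chose
    ai_tools: list[str] = field(default_factory=list)  # tools that likely include AI, paid and free
    weekly_workflow: str = ""        # one workflow they repeat weekly
    ai_in_workflow: str = ""         # where AI shows up in that workflow today
    suspected_risk: str = ""         # one risk you suspect exists
    opportunity_30d: str = ""        # one opportunity to test in the next 30 days

def one_pager(s: AuditSlice) -> str:
    """Render the slice as a one-page summary."""
    return "\n".join([
        f"Function: {s.function}",
        "AI tools: " + ", ".join(s.ai_tools),
        f"Weekly workflow: {s.weekly_workflow}",
        f"AI today: {s.ai_in_workflow}",
        f"Risk: {s.suspected_risk}",
        f"30-day test: {s.opportunity_30d}",
    ])

# Example slice for marketing - all values invented for illustration.
marketing = AuditSlice(
    function="Marketing",
    ai_tools=["Personal chatbot subscriptions", "Design tool AI features"],
    weekly_workflow="Drafting the weekly campaign email",
    ai_in_workflow="First drafts written in personal AI accounts",
    suspected_risk="Customer data pasted into public tools",
    opportunity_30d="Move drafting into the approved assistant and share the prompt",
)

print(one_pager(marketing))
```

Fill one of these in per function, and the recurring audit becomes a diff: compare this slice to the same slice six months from now.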
Once you can see one slice clearly, it becomes easier to expand. And once you start mapping the invisible, you stop guessing - and start building on something real.
Because the unofficial universe isn't going away. The only question is whether you lead it or let it lead you.
*****
Susan Diaz is the host of AI Literacy for Entrepreneurs and the author of the forthcoming book 'Swan Dive Backwards'. She runs AI Power Circle, an AI implementation mastermind for founder-led businesses ready to stop producing more and start producing effectively. If that's where you are, find Susan Diaz on LinkedIn to see if this is a fit.
PS: Want a fillable scorecard to audit your marketing team's AI usage across 4 dimensions? It'll take 15 minutes and change how you lead. Get the AI Audit Scorecard.