
The Shadow AI Playbook

May 13, 2026 · 5 min read · Dhruv Jain

In the 1980s and 90s, employees brought personal laptops and dial-up modems inside corporate networks without going through IT. Nobody was hiding anything on purpose, but the tools worked well enough that people kept using them, productivity climbed, and IT never found out about most of it.

Security teams eventually gave this a name: Shadow IT. Over the next decade, they built the entire modern enterprise security playbook around solving it.

The firms that mapped Shadow IT earliest were far less likely to be breached when the real threats arrived. The firms that ignored it spent years cleaning up incidents that never should have happened.

We’re in the same moment with AI right now. The browser tab has replaced the personal laptop, and ChatGPT accessed through a personal account on a work machine has replaced the USB drive. Compliance teams don’t know about half of what’s happening, and the pattern is close enough to the 90s version that I’ve started calling it Shadow AI.

The gap that keeps showing up

The pattern is simple enough that you can usually spot it in the first inventory conversation.

The official procurement log shows one version of AI use. Department workflows show another.

The gap doesn’t exist because anyone was trying to circumvent governance. These tools spread through teams because they work well enough that nobody thinks to mention them, and compliance never hears about them until someone finally asks the question directly.

Why the gap matters now

Three things make this gap more urgent than the Shadow IT version was.

The first is output scope. Shadow IT from the 90s created network vulnerabilities that stayed inside the firm’s perimeter. Shadow AI creates regulatory exposure that can extend outward to clients, business partners, and regulators. If an undocumented AI tool produces output that reaches anyone in the EU, your firm may have deployer obligations under the EU AI Act (the responsibilities that apply to the organization using an AI system), regardless of whether compliance ever signed off on the tool.

The second is adoption speed. Shadow IT spread over years as hardware moved through organizations at the pace of purchase orders. Shadow AI spreads in weeks because a team lead discovers a tool that saves 3 hours of work, shares it with the department by Friday, and the tool is embedded in their workflow before anyone in compliance knows it exists.

The third is the evidence trail. Every interaction with a cloud AI tool generates data that lives on someone’s servers. That data can be subpoenaed, audited, or flagged during a regulatory review, and the question for your firm isn’t whether the data exists but whether you knew it existed before the auditor did.

The 5-zone framework

AI compliance isn’t binary. In practice, every tool at your firm falls somewhere on a gradient from fully governed to completely invisible.

Zone 1 is fully documented: named oversight person, complete audit trail, risk tier classified. This is where your approved tools should live.

Zone 2 is approved but poorly documented. Someone signed off at some point, but there’s no named oversight person, no risk classification, and the documentation wouldn’t survive a follow-up question from an auditor who wanted specifics.

Zone 3 is the honest middle ground: known but unapproved. Compliance knows about the tool, but it hasn’t been formally approved or documented. Most firms have a handful of tools sitting here, and that’s fine as a temporary state.

Zone 4 is where the largest cluster usually sits: unknown to compliance entirely. Department teams use these tools daily, IT might be vaguely aware, but nobody in compliance has them on any list.

Zone 5 is actively concealed, where someone knows the tool wouldn’t get approved and uses it anyway. This is rare, but it does happen at firms where the approval process feels slow enough that people route around it.

Most firms should assume their tools are scattered across all five zones until the inventory proves otherwise. The goal isn’t to move everything to Zone 1 overnight, because that level of documentation takes real work. The goal is to move everything from Zones 4 and 5 (where it’s invisible) to Zone 3 (where it’s at least visible) first. Visibility before documentation, and documentation before full compliance.
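If your team tracks the inventory in a script or spreadsheet export rather than on paper, the zone gradient and the "visibility first" rule can be sketched in a few lines. This is a minimal illustration, not a prescribed schema; the enum names and the helper function are my own invention:

```python
from enum import IntEnum

class Zone(IntEnum):
    """The 5-zone gradient, from fully governed to actively concealed."""
    DOCUMENTED = 1             # named oversight, audit trail, risk tier classified
    APPROVED_UNDOCUMENTED = 2  # signed off once, no named owner or classification
    KNOWN_UNAPPROVED = 3       # compliance knows, no formal approval yet
    UNKNOWN = 4                # in daily use, on nobody's list
    CONCEALED = 5              # used despite knowing it wouldn't be approved

def first_target(zone: Zone) -> Zone:
    """Visibility before documentation: Zones 4 and 5 move to Zone 3 first.
    Everything already visible keeps its zone until the next pass."""
    return Zone.KNOWN_UNAPPROVED if zone >= Zone.UNKNOWN else zone
```

The point of encoding it this way is that the first remediation pass never tries to jump a tool straight to Zone 1; it only makes invisible tools visible.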

The Shadow IT playbook, adapted

The firms that solved Shadow IT fastest all followed the same five-step sequence, and it translates directly to the AI version.

Step 1 is inventory. Ask every department head one simple question: what AI tools does your team use that IT doesn’t have on record? This takes about 30 minutes per department, and the list always comes back longer than anyone expected.

Step 2 is classification. For each tool on the list, answer three questions: does the output reach anyone outside the firm, is a named person responsible for oversight, and would the existing documentation survive a regulatory inquiry?

Step 3 is prioritization. The tools with external-facing output and no documentation are your highest-risk items, and they’re where you start. Internal-only tools with no regulatory exposure can wait for the next cycle.

Step 4 is documentation. For each high-priority tool, create a one-page record covering what it does, who’s responsible for oversight, what data it touches, where the output goes, and when it was last reviewed.

Step 5 is the review cycle. Set a quarterly cadence because new tools appear constantly and the inventory is never finished. Treat it as a living document, not a one-time project.
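Steps 2 and 3 reduce to three yes/no questions per tool and a sort. A hypothetical sketch of how that might look as a script (all field names and priority tiers are assumptions for illustration, not a standard):

```python
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    external_output: bool    # does output reach anyone outside the firm?
    named_owner: bool        # is a named person responsible for oversight?
    audit_ready_docs: bool   # would the docs survive a regulatory inquiry?

def priority(tool: AITool) -> int:
    """Step 3: external-facing output with no documentation goes first."""
    if tool.external_output and not tool.audit_ready_docs:
        return 1  # highest risk: start here
    if tool.external_output or not tool.named_owner:
        return 2  # exposed or ownerless: this cycle
    return 3      # internal-only and owned: next cycle

# A toy inventory, sorted so the riskiest tools surface first
inventory = [
    AITool("meeting-notes",   external_output=False, named_owner=True,  audit_ready_docs=True),
    AITool("draft-assistant", external_output=True,  named_owner=False, audit_ready_docs=False),
]
inventory.sort(key=priority)
```

The exact tiers matter less than the ordering rule: external reach plus missing documentation always outranks everything else.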

The starting point

If I could give every compliance lead at a regulated firm in HK and SG one piece of advice this month, it would be this: pick one afternoon this week, walk through the building, and ask every department head what AI tools their team actually uses.

That conversation is your starting point for everything else. The policy, the documentation, the risk classification, the review cycles. All of it builds on knowing what your organization is actually using right now.

The firms that have this conversation first are the ones that won’t be scrambling when the auditor arrives asking questions they can’t answer.


I built a 5-question self-audit version of this framework. If you’re a compliance lead at a regulated firm and want to run it yourself, reply to this email and I’ll send it over.
