Your team is making AI decisions without you
The conversation that keeps repeating
I've had the same conversation with compliance leads at regulated firms in Hong Kong and Singapore at least a dozen times now. It starts with them pulling up their AI approval register. Usually it has 3 to 5 entries, all from late 2023 or early 2024.
Then I ask how many AI tools their team actually uses today.
The number is always somewhere between 8 and 15. Sometimes higher. The gap between the official register and reality isn't a minor oversight. It's a structural blind spot that grows wider every week it goes unaddressed.
Here's why that matters for regulated firms specifically: every tool in that gap is processing data under no documented governance. No risk assessment, no data flow mapping, no vendor due diligence. If an auditor or regulator asks about your AI governance posture tomorrow, the register says "3 tools, approved" while reality says "12 tools, unknown risk."
Why policies haven't kept up
The usual explanation is that compliance teams are busy. That's true, but it misses the actual problem.
AI tools don't look like procurement decisions. Nobody files a purchase order for a Chrome extension. Nobody routes a free-tier ChatGPT account through vendor onboarding. The adoption path for AI tools bypasses every checkpoint your compliance process relies on because the tools are free, instant, and individually harmless.
The aggregate effect is not harmless. Fifteen people using fifteen different AI tools to summarize client documents creates fifteen potential data exposure points that your incident response plan doesn't cover.
The three-zone sorting model
When I sit down with a client to close this gap, we don't start with writing policy. We start with mapping what's already running, and the exercise breaks naturally into three zones.
The sanctioned stack is zone one. These are tools that IT approved, compliance signed off on, and your team has documented access logs for. At most firms I've worked with, zone one covers 2 to 4 tools out of the full inventory.
Zone two is the tolerated grey area: leadership knows certain tools exist but hasn't addressed them. People use ChatGPT for drafting, Copilot for code review, Gemini for summarizing reports. Nobody has formally approved or rejected them. There's no risk assessment on file and no documented data flow.
Zone three is the invisible layer. Free tiers signed up with work emails, browser extensions installed without IT involvement, personal AI accounts used for work tasks. These tools don't appear in any system because they were never purchased.
The critical insight: most of the actual risk sits in zone two, not zone three. Zone two is where the most sensitive data flows through tools that have organizational awareness but zero documentation.
How to run the sort
You don't need a consultant for the first pass. Here's the sequence we walk clients through.
Inventory (week 1): Ask every department head to list every AI tool their team has used in the past 90 days. Don't limit it to "official" tools. Include browser extensions, free accounts, and anything accessed through a personal device for work purposes.
Zone mapping (week 2): Place each tool in zone one, two, or three based on whether it has formal approval, informal awareness, or neither. The zone placement tells you the documentation gap, not the risk level.
Risk prioritization (week 3): For each zone two and zone three tool, answer three questions. Does it handle regulated data? Do the vendor's terms of service allow enterprise use? Can you produce an audit trail?
Migration plan (week 4): For each tool you want to keep, build the documentation that moves it to zone one. For each tool you want to remove, document the replacement and the transition timeline.
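If your inventory lives in a spreadsheet, the zone mapping and risk prioritization steps are simple enough to script. Here's a minimal Python sketch of that logic. The field names (approved, known_to_leadership, handles_regulated_data, tos_allows_enterprise_use, has_audit_trail) are illustrative assumptions, not a standard schema; adapt them to whatever columns your register actually has.

```python
# Sketch of the zone mapping (week 2) and risk prioritization (week 3) steps.
# Field names are illustrative assumptions, not a standard register schema.

def zone(tool):
    """Zone 1: formally approved. Zone 2: known but undocumented. Zone 3: invisible."""
    if tool["approved"]:
        return 1
    if tool["known_to_leadership"]:
        return 2
    return 3

def needs_priority_review(tool):
    """Flag any zone 2/3 tool that fails one of the three risk questions."""
    if zone(tool) == 1:
        return False
    return (
        tool["handles_regulated_data"]          # touches regulated data?
        or not tool["tos_allows_enterprise_use"]  # ToS doesn't cover your use?
        or not tool["has_audit_trail"]            # no audit trail available?
    )

inventory = [
    {"name": "ChatGPT (free tier)", "approved": False, "known_to_leadership": True,
     "handles_regulated_data": True, "tos_allows_enterprise_use": False,
     "has_audit_trail": False},
    {"name": "Sanctioned document copilot", "approved": True, "known_to_leadership": True,
     "handles_regulated_data": True, "tos_allows_enterprise_use": True,
     "has_audit_trail": True},
]

for tool in inventory:
    print(f"{tool['name']}: zone {zone(tool)}, "
          f"priority review: {needs_priority_review(tool)}")
```

Run against a real inventory, the flagged list is exactly the 3 to 5 tools you'd work on first in the 90-day plan below.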
The 90-day timeline
The firms that close this gap fastest share one trait: they assign a named person to own the register, not a committee.
A committee will meet quarterly. A named owner will update the register weekly. The difference in velocity is the difference between closing the gap in 90 days and still talking about it in 12 months.
Here's a realistic 90-day timeline that we've seen work.
Days 1 to 14: Complete the inventory and zone mapping. You now know the size of the gap.
Days 15 to 30: Risk-prioritize every zone two tool. Flag the 3 to 5 that handle the most sensitive data.
Days 31 to 60: Build documentation and approval for the priority tools. Move them from zone two to zone one.
Days 61 to 90: Address zone three tools. Decide which to formalize, replace, or remove. Update the register and schedule the first quarterly review.
What happens when you don't close it
I want to be direct about this because I keep hearing "we'll get to it next quarter."
Every week your AI register stays incomplete is a week where your team is making data governance decisions on their own, tool by tool, without compliance input. That's not a hypothetical risk. That's what's happening right now at most firms that haven't done this exercise.
The cost isn't abstract either. It shows up as audit findings, regulatory questions you can't answer, vendor contracts that don't cover your actual use case, and incident response plans that don't account for the AI tools where the incident actually happens.
The one thing to do this week: ask your three largest department heads to list every AI tool their teams have used in the past 90 days. You'll know the size of the gap within 48 hours.
If you want the one-page sorting template and the 90-day timeline document we use with clients, hit reply and I'll send it over.
If you're running a regulated firm between 50 and 500 employees in HK, SG, or Dubai and want to talk about closing this gap, reply to this email with "AUDIT" and I'll send you our workshop assessment scope.
Until the next issue, Dhruv.