
The shadow-AI register HKMA expects and most HK banks haven't built

April 20, 2026 · 7 min read · Dhruv Jain

Last November was the six-year anniversary of a quiet piece of paperwork that almost nobody at Hong Kong's mid-cap banks has actually built. On 1 November 2019, the Hong Kong Monetary Authority (HKMA, the banking regulator) issued a circular titled High-level Principles on Artificial Intelligence. Four principles, three pages, one central expectation that hasn't changed since: if your firm uses AI, the board should be able to produce a record of what's running, who owns it, what data touches it, and how it's monitored.

That record is what compliance people call an AI register. It's the document that answers the first question any inspector asks when the topic comes up: "Can you hand me a list of every AI system operating at the firm?"

Six years after that circular, and six weeks after the HKMA, SFC, Insurance Authority, and MPFA jointly launched the GenAI Sandbox++ on 5 March 2026, most 50-to-500 person banks and insurers in Hong Kong still can't answer that question cleanly. Not because they don't want to. Because the tools moved faster than the paperwork.

Why 2019 paperwork matters in 2026

The 2019 circular is often dismissed as dated, which is a fair first impression. It was written three years before ChatGPT became a product anyone could sign up for, and it contains no specific clauses on generative AI or large language models.

That framing misses the point. Principle 2 of the circular asks for two things that map directly onto generative AI: "audit logs and documentation during the design phase" of any AI application, and "continuous monitoring" of the model once it's running. The document is technology-neutral by design, which is how HKMA's supervisory approach usually reads. The regulator states the principle and leaves the firm to work out the specific control.

What changed in 2026 isn't the principle. What changed is the volume of AI in regulated firms. Three things happened in the last 18 months. First, 48 authorised institutions completed their machine-learning and transaction-monitoring AI plans by end-March 2025 under HKMA's anti-money-laundering initiative. Second, AI was formally included as a 2026 supervisory focus area alongside digital assets and climate risk. Third, the GenAI Sandbox++ launched, explicitly giving firms a safe lane to pilot generative AI if they have the governance in place.

The principle stayed the same. The gap between the principle and what most firms can produce widened.

What shadow AI actually looks like inside a mid-cap firm

Before a register can be useful, it has to see the whole surface. Shadow AI is any AI tool touching the firm's data without being inventoried or governed. In Hong Kong banks, insurers, law firms, and family offices in the 50-to-500 person range, it shows up in five predictable ways.

Staff sign up for ChatGPT, Claude, or Perplexity with their personal email addresses. IT has no visibility, no audit log, no way to revoke access when someone leaves. The tool works well, it saves time, and nobody told them not to.

Free-tier chat windows retain the input by default. When someone pastes a customer statement, a loan application, or a KYC file (the identity-verification documents the bank keeps on every customer) into a free ChatGPT window for a rewrite, that text now sits on a third-party server outside the firm's data-residency perimeter (the set of countries where the firm is allowed to store customer data under the law). Under the Personal Data Privacy Ordinance (the Hong Kong law governing personal data), that's a disclosure event the firm has no record of.

Chrome extensions marketed as "productivity tools" quietly route every click or highlighted text to a remote model running somewhere outside the firm. Staff install them because they're genuinely helpful, which is exactly why they bypass procurement review so easily.

Operations people build automations where Zapier calls ChatGPT calls Gmail in a chain nobody on the security team ever reviewed. These workflows are useful and well-intentioned and completely ungoverned at the same time. There is no single sign-on (the central login the firm uses to control who can access what), no logging, and no unwind path if something in the chain breaks in production.

And the quiet one: staff quote AI output in internal memos or client-facing documents without saying where it came from. The output can't be verified, the training data is unknown, and there's no audit trail.

Any one of these on its own is manageable. Three or four running at the same firm with no documentation is the governance gap HKMA principles were written to catch.

The four-column register

The tool I build first in every audit engagement is a register, which is a single page with four columns and nothing more.

The first column names the tool itself, including vendor and the specific access method staff use to reach it. Examples from a typical mid-cap firm include ChatGPT via personal email, Claude Pro via a team account, and Copilot through individual Microsoft 365 licences the firm doesn't centrally manage. Be specific about the access path, because that's where the control gap actually lives.

The second column is user count. How many named people use this tool more than once a week. Not everyone who tried it once. The repeat-user number is what matters because repeat use is where training data and conversation history accumulate.

The third column is data classification, which captures what kind of data actually touches the tool in day-to-day use. The four standard tiers are public, internal, confidential, and regulated, and the category a tool sits in drives almost everything downstream in the register.

The fourth column is risk tier, calculated by multiplying the data class by the size of the control gap around the tool. Regulated data sitting inside a free-tier personal-account tool is the worst case and scores red on the register. Public data in an enterprise tool with single sign-on and audit logs scores green. The middle range is where most Hong Kong firms actually live, and it is where the most useful prioritisation happens.

That's the entire register when you print it out, and the value is never in the format. The value lives in actually sitting down and filling the thing out against real firm data rather than an ideal picture.
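
If you'd rather keep the register as structured data than a printed page, a minimal sketch in Python might look like the following. The tool rows, tier weights, and colour thresholds are illustrative assumptions on my part; the HKMA circular prescribes no particular format or scoring scheme.

# Minimal sketch of the four-column register as data.
# Tier weights, thresholds, and tool rows are illustrative assumptions;
# HKMA prescribes no specific format or scoring scheme.

DATA_CLASS = {"public": 1, "internal": 2, "confidential": 3, "regulated": 4}
CONTROL_GAP = {"enterprise_sso_logged": 1, "team_account": 2, "personal_account": 3}

def risk_tier(data_class: str, control_gap: str) -> str:
    """Risk tier = data class x control gap, bucketed into green/amber/red."""
    score = DATA_CLASS[data_class] * CONTROL_GAP[control_gap]
    if score <= 2:
        return "green"
    if score <= 6:
        return "amber"
    return "red"

register = [
    # (tool + access path, weekly users, data class, control gap)
    ("ChatGPT via personal email", 14, "regulated", "personal_account"),
    ("Claude Pro via team account", 6, "confidential", "team_account"),
    ("Copilot via unmanaged M365 licences", 9, "internal", "personal_account"),
]

for tool, users, data_class, gap in register:
    print(f"{tool:38} {users:>3} users  {data_class:12} {risk_tier(data_class, gap)}")

Run against those example rows, the personal-account ChatGPT row scores red and the other two land in amber, which is exactly the middle range where most mid-cap firms sit.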

The 90-minute session

Block a single 90-minute working session with whoever owns IT and whoever owns compliance. Before the session starts, pull four data sources into one folder.

The first is browser DNS or proxy logs for the last 30 days (the internal record of which websites staff laptops actually connected to), filtered for known AI vendor domains like openai.com, anthropic.com, perplexity.ai, and the major Chrome extension domains.

The second is SSO login events across the last 90 days (records from the firm's central login system showing which apps staff signed into) to catch tools that are already wired into corporate authentication.

The third is an anonymous staff survey, five questions, sent 48 hours earlier: "What AI tool did you use this week? Personal or firm account? What kind of data went into it?"

The fourth is expense-report keyword searches for "ChatGPT", "Claude", "Perplexity", "Copilot", "Jasper", and "Midjourney" across the last quarter.
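
For the DNS pull specifically, a rough sketch of the filtering step is below. The export file name, column names, and domain list are all assumptions; adapt them to whatever your proxy or resolver actually produces.

# Rough sketch: filter a 30-day DNS/proxy export for AI vendor traffic.
# File name, column names, and domain list are assumptions; adjust them
# to match what your proxy or DNS resolver actually exports.
import csv
from collections import Counter

AI_DOMAINS = (
    "openai.com", "chatgpt.com", "anthropic.com", "claude.ai",
    "perplexity.ai", "midjourney.com", "jasper.ai",
)

hits = Counter()
with open("dns_export_last_30_days.csv", newline="") as f:
    for row in csv.DictReader(f):  # assumed columns: timestamp, user, domain
        domain = row["domain"].lower().rstrip(".")
        if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
            hits[(row["user"], domain)] += 1

# Repeat use is what feeds the register, so drop one-off lookups.
for (user, domain), count in sorted(hits.items(), key=lambda kv: -kv[1]):
    if count >= 4:  # roughly "more than once a week" over 30 days
        print(f"{user:20} {domain:24} {count} lookups")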

In the session itself, populate the register row by row. Tools first, then user counts, then data class, then risk tier. Don't debate risk scores to three decimal places. Lock them at your best estimate. The register is more useful 80% right this week than 100% right in three months.

At the end, sign it, date it, put the owner's name at the top, and schedule the next review for 45 days out.

Three pitfalls worth naming

First pitfall. Teams build the register from one vendor's admin dashboard. Every enterprise AI vendor offers one for their own product. The dashboard can't see competitors, personal accounts, or Chrome extensions. Sources have to come from the whole firm, not one tool.

Second pitfall. Teams treat the register as a one-time artefact. Shadow AI grows weekly as staff discover new tools. Without a named owner, a cadence, and a trigger for new tools, the register is stale within 60 days.

Third pitfall. The register sits in a document with no connection to action. An inspector reads that gap instantly. If the highest-risk row has been sitting in red for three months with no plan attached, the register hurts more than it helps.

This week

If you run compliance, risk, IT, or operations at a regulated firm in Hong Kong, Singapore, or Dubai, reply to this email with the word MAP.

I'll send the 1-page register template I use in every audit engagement, fully editable and free for newsletter readers. The Policy Pack one-pager ships on Friday alongside a short walkthrough.

A small number of Q2 audit slots are open. First intake begins 5 May.

Until the Friday issue lands in your inbox,

Dhruv.

Request an AI Readiness Review

For CTOs, operators, department heads, and compliance leaders who need a practical path from scattered AI usage to governed adoption.

20-min review — exposure, use cases, next step
Your data stays yours — NDA on day one


Q2 AI readiness window

Find the shadow-AI risk before it becomes policy debt.

In 20 minutes, we'll identify the department to review first, the AI usage surface you can't see yet, and whether a readiness audit, workshop, or private AI pilot is the right next step.

NDA-ready · 20-minute executive review · No tool pitch · For regulated or data-sensitive teams

Best fit: CTOs, operators, and compliance leads who need a governed first AI use case.

Review output: your first governed AI use case

01. First department to review. Where AI usage is already creating leverage, risk, or hidden process drift.

02. Shadow-AI exposure surface. The workflows, data paths, and approval gaps leadership cannot currently see.

03. Approval-worthy next step. A readiness audit, workshop, or private pilot scoped for governance first.

The urgency is not hype. Once teams normalise ungoverned AI habits, cleanup becomes policy debt, retraining, and slower approvals.