
The auditor's first question. 93 days to get it right.

May 6, 2026 · 9 min read · Dhruv Jain

The question isn’t whether you use AI. The question is whether your AI produces outputs that affect EU residents — and most HK/SG firms can’t answer it cleanly.


I was on a scope call with a small family office in Singapore about six weeks ago. Two partners, their head of operations, and a compliance consultant they had brought in specifically because they had started seeing EU AI Act references in their institutional client onboarding forms.

We had been talking for about twenty minutes when I asked the first question on the pre-flight check: does your firm currently use any AI system whose output is received by, or used to make decisions about, an EU resident?

One partner looked at the other. The other looked at the compliance consultant.

“We use AI tools,” the head of operations said. “But they’re all IT-approved. We went through that process last year.”

That answer isn’t wrong. It’s just not an answer to the question I asked.

This happens in nearly every scope conversation I have with regulated firms in this region. The firms are not careless. They’ve done vendor assessments. They have approved tool lists. They have someone on the compliance side who has read at least one article about the EU AI Act and flagged it to leadership.

But when the auditor asks about scope, the answer they give is about their internal process for adopting tools, not about where those tools’ outputs land. Those are two entirely different questions, and only one of them is what the regulation asks.

The wrong question firms are answering

The most common assumption I encounter goes like this: the EU AI Act is a European regulation, our firm is incorporated in Hong Kong or Singapore, our servers are in Asia, and therefore we have limited or no exposure.

This reasoning fails on the first question of the scope check, every time.

The Act applies based on where the output of an AI system goes and who it affects, not where your firm is registered or where your data sits. If your AI system produces outputs that flow to EU residents, your firm is in scope for the obligations attached to that system’s risk tier. The location of your incorporation is not a factor. The location of your servers is not a factor. The nationality of your staff is not a factor.

I watched that Singapore family office realise mid-call that two of their portfolio advisory tools produce outputs that flow directly to EU-based beneficiaries. Their clients are family offices and high-net-worth individuals across Europe. They had categorised this as someone else’s jurisdictional problem. It wasn’t.

This is not an edge case. Most HK and SG firms with EU-resident clients, EU-domiciled funds, or EU-facing advisory relationships have exposure they have not yet mapped.

What the auditor is actually asking

The pre-flight check I use in scope conversations has five questions. Here is what each one is actually trying to establish, in plain language.

Question 1: Does the AI system’s output affect an EU resident?

This is the scope gate. A “yes” puts you inside the regulation. The auditor is not asking where you are. They are asking where your AI’s output goes. If your portfolio management system surfaces recommendations reviewed by an EU-based relationship manager, or if your screening tool processes applications from EU nationals, you have entered scope for at least one system. Most firms fail this question not because the answer is yes, but because they’ve never asked it in a way that traces the output to a person.

Question 2: What is the system’s risk classification under the Act?

Once scope is established, the auditor needs to know what tier the system sits in. The Act defines four risk tiers. High-risk systems, the ones that affect consequential decisions about people’s access to credit, employment, insurance, or other significant outcomes, carry the most onerous obligations. The April 2026 guidance update narrowed scope for basic transaction monitoring tools, which is good news for some firms. But the same update expanded scope for HR and recruitment AI wherever outputs affect EU residents, regardless of the firm’s industry classification. A financial services firm is not automatically outside scope for employment-related AI.

Question 3: Do you have a technical description of the system in its current state?

This is where approved tools start to fail the audit even when they shouldn’t. The technical description must reflect the live production system, not the version that existed at evaluation. I have reviewed documentation at nine firms this quarter. None of them had all five required Q4 documents in place when I arrived. The gap that appears most often is not the technical description itself; firms usually have something. The gap is that the technical description was written at the point of approval, and the system has changed since then without a corresponding update to the documentation.

Question 4: Is there a named individual with documented authority to intervene?

This is the question that surprises most compliance teams. The Act requires what it calls a named oversight mechanism: not a team, not a role, not a department. A specific named individual who has documented authority to pause, intervene in, or override an AI system when it produces a harmful or erroneous output. I have seen firms write “the compliance team” in this field. I have seen firms write “the CTO.” Neither qualifies. The auditor asks for a name. The individual must exist, must understand the role, and must be formally documented as holding it.

Question 5: Can you produce your conformity assessment timeline?

This is the question that surfaces whether a firm has done the work or has simply planned to. A conformity assessment for a high-risk system requires the technical description, a risk management process, data governance records, the named oversight mechanism, and cybersecurity specifications including test results. Done well, with documentation already in place, this takes 30 to 45 days for one system. At a firm doing it for the first time, building documentation while the assessment runs, the same process takes closer to 90 days. The August 2 deadline is 93 days away as of this week. A firm with two high-risk systems and no documentation is already operating at the edge of what is achievable.

What the April guidance actually changed

The April 2026 guidance update received most of its coverage for what it narrowed. Basic transaction monitoring, the kind of rules-based scoring that most banks have been running for years, was clarified as sitting outside the high-risk category in most implementations. That was welcome news, and compliance leads across the region noticed it.

What received less attention was the expansion on the other side.

HR and recruitment AI now qualifies as high-risk wherever its outputs affect EU residents, regardless of what your firm does for a living. This is not limited to hiring platforms or dedicated HR software. If a regulated firm uses an AI-assisted screening tool, a CV parsing system, or a capability assessment product that helps filter or rank candidates, and any of those candidates are EU residents, that system is now explicitly high-risk under the Act. The financial services classification that many firms assumed put them outside this obligation does not apply. The relevant criterion is not industry. It is the nature of the output and who it affects.

I have spoken with three firms this week that had assumed they were outside scope for this kind of system. None of them had mapped their recruitment tooling against the April guidance.

Why IT-approved tools can still fail the audit

The Saturday note I put out this week addressed this directly, but it is worth expanding here because it is the most common misunderstanding I see.

Most compliance teams define shadow AI as tools that IT does not know about. The more precise definition, and the one an auditor will use, is any AI system that your compliance documentation does not currently account for or describe accurately.

An IT-approved, fully sanctioned tool qualifies as a documentation gap if your records still reflect the evaluation state rather than the live system. Vendors release new model versions. Software configurations change. Eighteen months of normal product development can separate the tool your team evaluated from the tool your team runs today. If the technical description your compliance team has on file was written at the point of approval and has not been updated since, you have a documentation exposure, even if the tool itself is perfectly legitimate.

The question the auditor asks is not “did you approve this tool?” The question is “does your documentation reflect this tool as it currently operates?”

These are not the same question. Firms that have done thorough vendor assessments sometimes have better approval records and worse documentation currency than firms that simply have fewer tools. What you need is not a longer approved list. It is a more accurate one.

The time math

The August 2 deadline applies to the obligations for high-risk AI systems under Annex III of the Act. After that date, firms with EU exposure to high-risk systems and no conformity assessment in place are operating out of compliance.

The timeline for getting into compliance depends on where you start.

If your documentation is already current, a conformity assessment for one system takes 30 to 45 days. If you’re starting from scratch, which means building the technical description, establishing the risk management process, writing the data governance records, naming the oversight individual, and producing the cybersecurity specifications, the same assessment takes 90 days at a minimum, because you’re building while the clock runs.

Two high-risk systems with no documentation, 93 days to the deadline, means the window has already closed for a comfortable, unhurried process. It has not closed entirely, but what is left requires prioritising the mapping and documentation immediately rather than treating August 2 as a planning horizon.
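The arithmetic above can be sketched in a few lines. This is an illustrative planning calculation only, using the rough estimates from this piece (30 to 45 days per system with current documentation, closer to 90 from scratch) and assuming, as a worst case, that assessments for multiple systems run sequentially. The function names and the sequential assumption are mine, not anything defined by the Act.

```python
from datetime import date

# Deadline for high-risk system obligations under Annex III.
DEADLINE = date(2026, 8, 2)

def days_remaining(today: date) -> int:
    """Calendar days left before the August 2 deadline."""
    return (DEADLINE - today).days

def estimated_days(systems: int, docs_current: bool) -> int:
    """Rough total effort, assuming sequential assessments.

    ~45 days per system if documentation is already current,
    ~90 days per system if building documentation from scratch.
    """
    per_system = 45 if docs_current else 90
    return per_system * systems

# Start of the week this piece was written: 93 days out.
remaining = days_remaining(date(2026, 5, 1))

# Two high-risk systems, no documentation: ~180 days of work
# against a 93-day window -- sequential is no longer achievable.
needed = estimated_days(systems=2, docs_current=False)
```

Running the assessments in parallel changes the total elapsed time but not the conclusion: a firm starting from zero documentation today has no slack left in the schedule.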

The first step, and the one I see skipped most often, is establishing which systems actually qualify as high-risk under the Act. That mapping is what determines everything else in the timeline. Without it, firms are either working on systems that don’t need conformity assessments, or ignoring systems that do.

What to do before next week

If you haven’t yet mapped your AI systems against the five pre-flight questions, that is the starting point. Not vendor reassessment. Not a policy update. The mapping.

Specifically: for each AI system your firm currently runs, trace the output to a person. If that person could be an EU resident, establish the risk tier. If the tier is high-risk, check whether you have a current technical description, a named oversight individual, and a realistic timeline to complete the conformity assessment before August 2.

That exercise will tell you whether you have a documentation problem, a timeline problem, or both.
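The mapping exercise above is mechanical enough to write down as a checklist. The sketch below is a minimal illustration of that triage, with field names, tier labels, and the 45/90-day estimates carried over from this piece as assumptions; none of these identifiers are terms of art from the Act itself.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystem:
    """One AI system traced through the five pre-flight questions."""
    name: str
    output_reaches_eu_resident: bool          # Question 1: the scope gate
    high_risk: bool = False                   # Question 2: risk tier
    tech_description_current: bool = False    # Question 3: live-state docs
    oversight_individual: Optional[str] = None  # Question 4: a named person
    days_to_deadline: int = 93                # Question 5: remaining window

def triage(system: AISystem) -> list[str]:
    """Classify one system as out of scope, on track, or as having a
    documentation problem, a timeline problem, or both."""
    if not system.output_reaches_eu_resident:
        return ["out of scope"]
    if not system.high_risk:
        return ["in scope, lower tier"]
    problems = []
    if not system.tech_description_current or not system.oversight_individual:
        problems.append("documentation problem")
    # Rough estimates from this piece: ~45 days with current docs,
    # ~90 days building documentation while the assessment runs.
    days_needed = 45 if system.tech_description_current else 90
    if days_needed > system.days_to_deadline:
        problems.append("timeline problem")
    return problems or ["on track"]
```

Run each system through `triage` and the output is exactly the answer the exercise promises: whether you have a documentation problem, a timeline problem, or both.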


Five complimentary audit slots, next week

I’m opening five complimentary 30-minute scope conversations next week, specifically for CIOs, CCOs, and COOs at regulated firms in HK or SG that have EU-resident clients and have not yet completed a formal AI scope assessment under the Act.

In 30 minutes, we will run through the pre-flight check together, identify which of your systems require conformity assessments, and work out whether your timeline is achievable before August 2.

To claim a slot, reply to this email with “SCOPE” in the subject line. Tell me one sentence about your firm’s structure and one sentence about the AI tools you currently have approved. I will confirm your slot within 24 hours and send a short pre-call questionnaire.

These slots are first come, first served. I will close the offer when all five are confirmed.


If you found this useful, the most helpful thing you can do is forward it to a colleague who is dealing with the same question. Most of the regulated firms in this region are having this conversation internally right now without a clear framework for what the auditor will actually ask.

If we haven’t connected yet, you can find me on LinkedIn and X. Robossist, the firm behind this work, is at robossist.com.

Until the next issue,
Dhruv.
