
93 days to comply: what the EU AI Act means for Asian firms

May 3, 2026 · 4 min read · Dhruv Jain

Why this applies to you

If you're running a regulated firm in Hong Kong or Singapore, you might assume the EU AI Act is a European problem. It's not.

The law reaches across borders and catches three types of Asian firms:

  • Firms that serve EU-based clients

  • Firms that handle data from EU residents, no matter where the servers sit

  • Firms that run AI systems people in the EU can reach

That third one is wider than most legal teams first think. A chatbot on your website that someone in Paris can open, a client portal with AI features, or a tool used by your London colleagues all count.

I've talked to compliance leads at banks, insurers, and law firms across HK and SG over the past few months. The most common reply is "we need to look into it," followed by months of inaction. That window is closing fast.

What changed this week

The European AI Office put out new guidance on high-risk AI system rules in the past two weeks. Two changes matter for firms in Asia.

First, the scope for financial services AI got a bit smaller. Basic rule-based systems with little AI in them may now fall outside the high-risk bucket. Good news if your firm runs standard compliance automation that doesn't use machine learning.

Second, the scope for HR and hiring AI got much wider. If your firm uses AI to screen job applicants, score performance, or manage schedules, those tools now need a full check under the Act. This one caught a lot of firms off guard.

For Asian firms specifically: the widened HR scope matters because many firms deployed recruitment AI tools during 2024 and 2025 without governance frameworks. Those tools are now explicitly high-risk under EU law if any of the candidates or employees are EU residents.

The minimum viable compliance path

You don't need to boil the ocean. Here's what the next 93 days need to cover at minimum.

Month 1: Scope determination

Answer these three questions for every AI system your firm operates:

  1. Does it serve, process data from, or become accessible to EU residents or clients?

  2. If yes, does it fall into a prohibited, high-risk, limited-risk, or minimal-risk category under the Act?

  3. For high-risk systems, do you meet the documentation, risk management, and human oversight requirements?

Most firms I've worked with find that 2 to 4 of their AI systems are in scope and 1 to 2 are potentially high-risk. The number is manageable. The documentation gap is what takes time.
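The three questions above amount to a simple triage filter over your AI inventory. Here's a minimal sketch in Python; the system names, field names, and risk labels are illustrative placeholders, and nothing here substitutes for a legal determination:

```python
# Hypothetical AI system inventory. "eu_reachable" answers question 1;
# "risk_category" answers question 2. All entries are made-up examples.
systems = [
    {"name": "website chatbot", "eu_reachable": True, "risk_category": "limited"},
    {"name": "recruitment screener", "eu_reachable": True, "risk_category": "high"},
    {"name": "internal code assistant", "eu_reachable": False, "risk_category": "minimal"},
]

# Question 1: anything an EU resident can reach or be affected by is in scope.
in_scope = [s for s in systems if s["eu_reachable"]]

# Question 2: of the in-scope systems, which fall in the high-risk bucket?
high_risk = [s for s in in_scope if s["risk_category"] == "high"]

# Question 3: these are the systems that need the full documentation,
# risk management, and human oversight package.
for s in high_risk:
    print(f"{s['name']}: needs full high-risk documentation")
```

Even a spreadsheet with these three columns gets you most of the way; the point is that scope determination is a mechanical filter once the inventory exists.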

Month 2: Documentation and risk assessment

For each in-scope system, you need at minimum:

  • A technical description of the system's purpose, design, and intended use

  • A risk management system that identifies and mitigates foreseeable risks

  • Documentation of the training data, including data governance and data management practices

  • Human oversight measures that allow natural persons to effectively oversee the system

  • Accuracy and cybersecurity specifications with documented test results

This isn't optional paperwork. These are legally mandated requirements that will be audited.
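One way to track the gap per system is a checklist record with one flag per required artifact. A sketch, assuming illustrative field names (these are shorthand for the five bullets above, not official Act terminology):

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskDossier:
    """Hypothetical per-system record of the five minimum artifacts."""
    technical_description: bool = False   # purpose, design, intended use
    risk_management_system: bool = False  # identified and mitigated risks
    training_data_governance: bool = False
    human_oversight_measures: bool = False
    accuracy_security_tests: bool = False # documented test results

    def gaps(self):
        """Names of artifacts still missing for this system."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example: a system with only two artifacts done has three gaps to close.
dossier = HighRiskDossier(technical_description=True, human_oversight_measures=True)
print(dossier.gaps())
```

Running a `gaps()` report across every in-scope system turns "documentation gap" from a vague worry into a concrete work queue for Month 2.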

Month 3: Ongoing compliance architecture

Set up the recurring processes:

  • Assign a named compliance owner for each in-scope AI system

  • Establish a quarterly review cycle for risk assessments

  • Create an incident reporting channel for AI system failures or unexpected behavior

  • Document your conformity assessment process for any new AI systems before deployment

What noncompliance looks like

The penalties under the EU AI Act are meaningful. Prohibited AI practices carry fines up to 35 million EUR or 7% of global annual turnover, whichever is higher. High-risk noncompliance carries fines up to 15 million EUR or 3% of turnover.

For context, a regional bank with HK$5B in annual revenue faces potential exposure of roughly HK$350M (7% of turnover) for prohibited practices or HK$150M (3%) for high-risk noncompliance, since at that scale the percentage caps exceed the fixed EUR floors.

The more likely near-term risk is reputational and commercial. EU-based clients and partners will increasingly require proof of AI Act compliance as a condition of doing business. The firms that get ahead of this will have a commercial advantage over competitors who are still scrambling after August 2.

The one thing to do this week

Pull a list of every AI system your firm currently operates. For each one, answer this question: could an EU resident interact with this system, have their data processed by it, or be affected by its output?

If the answer is yes for even one system, you have work to do before August 2.


I've been tracking weekly EU AI Act developments in a one-page briefing. If you want this week's update, reply to this email with "UPDATE" and I'll add you to the list.

If you want to talk about your firm's EU AI Act readiness, reply with "AUDIT" and I'll send our assessment scope.

Until the next issue, Dhruv.
