
The 30-Minute Watch

April 10, 2026 · 5 min read · Dhruv Jain

Most AI pilots are decided in the wrong room. Someone looks at a process map, circles a step, picks a tool, and announces the pilot at the next leadership meeting. The tool gets installed, a champion volunteers, and three months later the deployment quietly flatlines around fifteen percent adoption. Everyone blames the tool.

The tool was rarely the problem.

Why process maps lie

A process map is what a team says it does. The work is what the team actually does. Those two things are almost never the same document, and the gap between them is where most automation projects go to die.

When a process gets written down, the person doing the writing leaves out everything that feels embarrassing, obvious, or hard to explain. They leave out the sticky note on the second monitor. They leave out the Slack DM they send to the one person in accounting who knows how to override the date field. They leave out the fact that the system actually breaks every third Tuesday and someone has quietly memorized how to restart it.

None of that ends up in the documentation. All of it ends up in the workflow.

An interview doesn't surface this either. When you sit across from someone in a conference room and ask them to walk you through their day, they give you the cleaned-up version. The version they'd tell a new hire. Not the version that actually runs the business.

What shows up in thirty minutes

There's a cheaper way to find out what a team does. You sit next to the person doing the work, and you watch for half an hour. You don't ask questions. You don't take notes on how things should work. You write down what actually happens, in the order it happens.

Three things almost always surface in the first thirty minutes, and they fall into the same categories every time:

The undocumented workaround. Somebody on the team invented a fix two years ago for a problem nobody remembers the cause of. The fix is now load-bearing. It isn't in any wiki. It isn't in any runbook. It lives in one person's muscle memory, and the whole process quietly depends on it. You only find this by watching, because the person doing it doesn't think of it as a workaround anymore. They think of it as the job.

The small step that eats four hours a week. On paper it takes two minutes. In practice it takes twenty, because the system is slow, or the data needs to be re-keyed, or the tab they need keeps timing out. Nobody complains about it out loud because each individual instance feels trivial. Then you multiply it across a week and realize this one step is the biggest time leak in the entire pipeline.
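The math behind that time leak is worth making concrete. The numbers below are illustrative, not from the post: a "two-minute" step that actually takes twenty, run a bit more than a dozen times a week.

```python
# Hypothetical numbers: a step documented at 2 minutes that
# takes 20 in practice, run ~13 times a week (2-3 times a day).
documented_minutes = 2
observed_minutes = 20
runs_per_week = 13

hidden_minutes = (observed_minutes - documented_minutes) * runs_per_week
print(f"hidden time: {hidden_minutes / 60:.1f} hours/week")  # → hidden time: 3.9 hours/week
```

Eighteen invisible minutes per run, and the step no one complains about is quietly costing half a workday every week.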

The bottleneck that isn't where leadership thinks it is. Executives point at the step that feels painful, usually because that's the step the team complains about in meetings. The actual bottleneck is somewhere three steps upstream, quieter, and contributing ninety percent of the wait time. The complainers are downstream of the real problem.

These three don't show up in an audit deck. They show up when you shut up and watch.

A thirty-minute protocol you can run today

If you want to try this on your own team this afternoon, it's six steps and takes about half an hour:

  1. Pick one person in one role. Not a whole team. One seat.

  2. Ask to sit next to them during a real work block. Not a demo. A real one, with the real inbox open.

  3. Don't ask questions during the block. Resist the urge. Every question you ask makes the work they're showing you more performative.

  4. Write down every pause longer than fifteen seconds. That's usually the system being slow, a lookup they have to do somewhere else, or a small decision they're chewing on.

  5. Write down every tool or tab switch. Context switches are where time leaks out, and they never show up in anyone's status updates.

  6. At the end, ask one question. "What did you skip telling me because it felt embarrassing or obvious?" That single question will usually surface the workaround you were hoping to find.
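The bookkeeping in steps 4 and 5 can be sketched as a tiny tally. This is a minimal illustration, not a prescribed tool; the class names, event kinds, and example entries are all my own invention.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    minute: int   # minutes into the work block
    kind: str     # "pause" (step 4) or "switch" (step 5)
    note: str     # what you saw, in your own words

@dataclass
class WatchLog:
    """Running tally for one sit-and-watch session."""
    events: list = field(default_factory=list)

    def record(self, minute: int, kind: str, note: str) -> None:
        self.events.append(Observation(minute, kind, note))

    def summary(self) -> dict:
        # Count pauses and context switches separately at the end.
        return {
            "pauses": sum(1 for e in self.events if e.kind == "pause"),
            "switches": sum(1 for e in self.events if e.kind == "switch"),
        }

# A hypothetical session's entries:
log = WatchLog()
log.record(3, "pause", "waiting on CRM search, ~40s")
log.record(7, "switch", "CRM -> spreadsheet to re-key an address")
log.record(12, "pause", "deciding which email template to use")
print(log.summary())  # → {'pauses': 2, 'switches': 1}
```

A notebook works just as well; the point is only that pauses and switches get counted, not judged, until the block is over.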

Thirty minutes of observation plus one question at the end will teach you more about where your real bottlenecks are than a two-hour stakeholder interview ever will.

Why this beats the audit call

The usual approach is to schedule a one-hour audit interview with the team lead. Document the process. Circle pain points. Propose tools. It's clean, it's billable, and it almost always misses the real problems.

The sit-and-watch finds things the interview never will, because interviews only capture the parts of the work people know how to describe. The sit-and-watch captures the parts they've stopped noticing. Those are the parts worth automating.

It also does something the audit deck doesn't. It builds trust with the operator. When you spend thirty minutes watching someone do their job without judging them or interrupting them, they start telling you the things they'd never write in a survey. You learn which steps they'd secretly love to hand off, which parts they actually enjoy and don't want automated away, and which edge cases the "clean" version of the process will crash on in production.

None of that comes out of a discovery call.

The part nobody wants to hear

The hardest thing about recommending this is that it sounds too simple to charge for. Thirty minutes of watching? That's not consulting, that's just paying attention.

But the teams I see getting the most out of their AI investments all do some version of this before they pick a tool. The teams that skip it end up automating the thing leadership pointed at, wondering why adoption stalled, and blaming the vendor.

You can't automate what you haven't observed. And you can't observe it if you're sitting in a conference room asking about it.

So before the next pilot, before the next tool eval, before the next RFP, pick one seat, pull up a chair, and watch for half an hour. Then decide what to automate.


What's the step in your own workflow you'd be most embarrassed for someone to watch? That's usually the one worth looking at first.

Request an AI Readiness Review

For CTOs, operators, department heads, and compliance leaders who need a practical path from scattered AI usage to governed adoption.

20-min review — exposure, use cases, next step
Your data stays yours — NDA on day one


Need context first? Read the proof, the case studies, or subscribe to the weekly essay.

Q2 AI readiness window

Find the shadow-AI risk before it becomes policy debt.

In 20 minutes, we'll identify the department to review first, the AI usage surface you can't see yet, and whether a readiness audit, workshop, or private AI pilot is the right next step.

NDA-ready · 20-minute executive review · No tool pitch · For regulated or data-sensitive teams

Best fit: CTOs, operators, and compliance leads who need a governed first AI use case.

Review output: your first governed AI use case

  1. First department to review. Where AI usage is already creating leverage, risk, or hidden process drift.

  2. Shadow-AI exposure surface. The workflows, data paths, and approval gaps leadership cannot currently see.

  3. Approval-worthy next step. A readiness audit, workshop, or private pilot scoped for governance first.

The urgency is not hype. Once teams normalize ungoverned AI habits, cleanup becomes policy debt, retraining, and slower approvals.