
The week I learned to stop watching metrics

March 30, 2026 · 3 min read · Dhruv Jain

There's a trap I keep falling into, and this week it caught me again.

I spent the first three days of the week checking engagement numbers on posts I'd published the previous Sunday. Likes, comments, saves, profile views. Every hour or two, I'd glance at the dashboard. Not because I needed the data for a decision. Just because the numbers were there and my brain wanted the feedback loop.

By Wednesday afternoon I realized I'd spent roughly four hours across three days doing absolutely nothing useful with that information. The posts were already published. The algorithm was going to do whatever the algorithm does. My checking didn't change a single outcome.

This is the same pattern I keep seeing in the operations work I do with clients. People build dashboards and then stare at them without connecting the data to any specific action. The dashboard feels productive because numbers are moving. But watching numbers move isn't the same as doing something that makes them move differently.

The real problem with metrics addiction

Metrics are useful exactly twice: when you're deciding what to do, and when you're evaluating whether what you did worked. Everything in between is entertainment disguised as work.

I've started applying a simple rule to myself this week. Before I open any dashboard or analytics page, I ask: "What decision will this data help me make right now?" If I don't have a clear answer, I close the tab.

It sounds almost embarrassingly obvious. But the amount of time I've recaptured from unnecessary metric-checking since Wednesday is genuinely surprising. It's probably three hours back in my week, which I've redirected into writing and client research.

What this has to do with AI

This connects to something bigger I've been thinking about. As AI tools make execution faster and cheaper, the decisions about what to execute become the bottleneck. Checking dashboards feels like decision-making because it involves data. But it's usually just procrastination with extra steps.

The people I see getting the best results from AI right now share a common trait: they spend disproportionate time on diagnosis and framing before they touch any tool. They think about what matters before they ask the machine to help. The metrics obsession is the opposite of that. It's looking at outputs without connecting them to inputs you can actually control.

This week's honest accounting

Here's what I actually got done this week versus what I planned:

Planned: five LinkedIn posts, three newsletter ideas researched, and initial outreach to two potential collaborators.

Done: four LinkedIn posts published, one newsletter written (this one), and zero outreach, because I spent the outreach time checking metrics.

I'm not sharing this to be self-deprecating. I'm sharing it because the gap between planned and actual is the most honest signal of where my process breaks. And this week, the break was clearly in how I spend the first 30 minutes of each workday.

Next week's experiment: no analytics before noon. All morning time goes to creation and outreach. I'll report back on whether it actually changes anything or whether I just find a new way to procrastinate.

If you're building something and want to follow along with the honest version of what works and what doesn't, stick around. This newsletter is where I put the stuff that doesn't fit on LinkedIn.

Request an AI Readiness Review

For CTOs, operators, department heads, and compliance leaders who need a practical path from scattered AI usage to governed adoption.

20-min review — exposure, use cases, next step
Your data stays yours — NDA on day one


Need context first? Read the proof and case studies, or subscribe to the weekly essay.
