You're Automating the Wrong Thing
Every team picks the wrong process to automate first, and the reasons why have nothing to do with technology.
The Problem
The automation conversation in most companies starts in exactly the wrong place. Here’s what usually happens. Someone on the leadership team sees a demo. Or reads an article about how Company X automated their entire onboarding flow and saved $200K a year. Or a vendor sends a cold email with a case study that sounds too good to ignore. And so the team picks a process to automate based on enthusiasm, not evidence.
The process they pick is usually the one that feels the most painful. The one people complain about in meetings. The one that sounds impressive when you describe it to investors or your board. “We’re automating our proposal generation pipeline.” “We’re building an AI-powered customer support flow.” These sound great. They rarely deliver the biggest return.
Between 30 and 50 percent of RPA projects fail, according to research from Ernst & Young. That’s not because the technology doesn’t work. It’s because teams automate the wrong thing, or automate a process they don’t fully understand yet. The tool works fine. The selection was broken from the start.
This problem isn’t new, but it’s getting worse. With AI tools getting cheaper and more accessible, the temptation to automate everything at once is stronger than ever. And the teams that resist that temptation and pick the right first target are the ones that actually see returns.
I think about this a lot because I see it constantly in the teams I work with through my consultancy. The pattern is almost always the same. A founder or ops lead has a list of five things they want to automate. They start with the one that’s most visible or most annoying. Three months later, the automation is “working” but nobody can explain how much time it actually saved. Meanwhile, the process that was silently eating 600+ hours a year across the team hasn’t been touched.
The Reframe
The insight that changes this is simple, but it runs against every instinct most operators have: the best process to automate is almost never the one that frustrates you the most. It’s the one that quietly consumes the most total hours across your team with the highest error rate.
Frustration and impact are different things, and confusing them is where most automation strategies go sideways. A process can be incredibly annoying but only take 15 minutes a week. Another process can be so routine that nobody even thinks about it, but it eats 12 hours a week spread across three people. The annoying one gets attention in meetings. The quiet one gets ignored. But the quiet one is worth 624 hours a year. At even a modest fully-loaded cost of $35 per hour, that’s nearly $22,000 a year in labor spent on one process that a simple automation could handle.
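The arithmetic behind that estimate is worth making explicit. A minimal sketch using the figures from the paragraph above (the hours and hourly rate are this example's numbers, not universal constants):

```python
# Back-of-envelope annual cost of a "quiet" recurring process.
# Figures match the example above; swap in your own.
hours_per_week = 12        # spread across three people
weeks_per_year = 52
fully_loaded_rate = 35     # dollars per hour, a modest assumption

annual_hours = hours_per_week * weeks_per_year   # 624
annual_cost = annual_hours * fully_loaded_rate   # 21,840

print(f"{annual_hours} hours/year, about ${annual_cost:,} in labor")
```

Running the same three lines of math on every recurring process is the whole trick; the quiet ones usually win.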
This happens because of how humans evaluate work. We weight our emotional response to a task much more heavily than the actual time it takes. Behavioral economists call this the affect heuristic. If a task feels bad, we overestimate its cost. If a task feels neutral, we underestimate it. And when leadership asks “what should we automate?”, the answers they get back are filtered through this bias.
There’s also an incentive problem. The person proposing an automation project wants it to sound impressive. “We’re going to automate data reconciliation across three spreadsheets” doesn’t get the same reaction as “We’re going to automate our entire client reporting workflow.” One of those sounds like a board-ready initiative. The other sounds like a spreadsheet cleanup. But the spreadsheet cleanup might save 10x more hours.
McKinsey estimates that knowledge workers spend up to 60% of their time on routine admin tasks instead of high-value work. A survey from Smartsheet found that workers waste roughly a quarter of their workweek on manual, repetitive tasks. The time is there. It’s just hiding in the processes nobody talks about.
The organizations that get automation right don’t start with the sexiest process. They start with the most expensive one, measured in actual hours and actual errors. And they figure out which one that is before they ever talk to a vendor or sign up for a free trial.
There’s a third layer to this that’s worth naming: organizational politics. In most companies, the person who gets to pick the automation target is the person with the most influence, not the person with the best data. The VP of Sales wants to automate the sales pipeline. The Head of Marketing wants to automate reporting. The CFO wants to automate invoicing. None of them are wrong, exactly. But none of them are making the decision based on which process costs the most hours. They’re making it based on what falls under their department.

The audit fixes this because it replaces opinion with measurement. When you can show leadership a spreadsheet that says “this process costs us 1,200 hours a year and has a 15% error rate,” the politics mostly disappear. Numbers are harder to argue with than feelings.
The Evidence
I posted earlier today on LinkedIn about a 5-step method for diagnosing which process to automate first. The framework is straightforward: list your processes, time them, multiply by frequency, check error rates, and pick the winner based on the math.
But the interesting part isn’t the framework. It’s what happens when teams actually do it.
The example I used was a team choosing between automating proposal generation and data reconciliation. Proposal generation felt more important. It was visible, client-facing, and everyone complained about it. But when they actually measured, proposal generation took about 2 hours a week for one person. Data reconciliation took 12 hours a week spread across three people. That’s 104 hours a year versus 624 hours a year. The boring process won by a factor of six.
This pattern repeats everywhere. The processes that eat the most hours are usually the ones that involve moving data between systems, reconciling numbers across spreadsheets, generating the same reports with slight variations, or sending follow-up communications that follow a predictable template. They’re invisible because they’re woven into the daily routine. Nobody flags them as problems because “that’s just how we do it.”
Error rates make the gap even wider. Manual data entry between systems has a well-documented error rate somewhere between 1% and 5% per field, depending on complexity. Those errors create rework. Rework creates delays. And the cost of rework is almost never tracked, so it doesn’t show up in anyone’s time estimates. When you add error-adjusted hours to the calculation, processes that looked like mid-priority targets suddenly jump to the top of the list.
There’s also a compounding effect that most teams miss. When you automate a high-frequency, high-error process, you don’t just save the direct hours. You save the downstream hours that were being spent on fixing mistakes, answering questions about discrepancies, and re-running reports that had bad inputs. A process that takes 15 minutes but creates 45 minutes of rework 20% of the time isn’t a 15-minute process. It’s a 24-minute process on average, and the variance is what kills your team’s ability to plan their day.
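The error-adjusted math from that example is simple enough to sketch directly. These are the worked numbers from the paragraph above, not benchmarks:

```python
# Error-adjusted average time for a task that sometimes triggers rework.
# Numbers are the worked example from the text above.
base_minutes = 15
rework_minutes = 45
rework_rate = 0.20   # 20% of runs produce a mistake someone has to fix

expected_minutes = base_minutes + rework_rate * rework_minutes
print(expected_minutes)  # 24.0
```

The expected value is what should go in your audit spreadsheet, but the variance (most runs take 15 minutes, one in five takes an hour) is what wrecks planning.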
Gartner projects that over 40% of agentic AI projects will be canceled by the end of 2027 due to escalating costs, unclear business value, and inadequate risk controls. The “unclear business value” piece is the one that should get your attention. If you can’t point to a specific number of hours saved or errors eliminated, you don’t have a business case. You have a science project.
I want to name something else that I don’t see talked about enough: the sunk cost trap of bad automation. Once a team has spent three months and $20,000 automating the wrong process, they’re not going to admit it was the wrong call. They’ll keep investing. They’ll add features. They’ll hire someone to maintain it. The cost of the original bad selection doesn’t stop at the initial build. It compounds. Every month you spend maintaining a low-impact automation is a month you’re not spending on the process that actually matters. This is why the selection step is the most important step. It’s not a warm-up for the real work. It is the real work. Everything downstream is just execution.
And there’s a cultural dimension too. When a team’s first automation project fails or underwhelms, it poisons the well. People become skeptical. “We tried automation and it didn’t really help.” That skepticism makes it harder to get buy-in for the next project, even if the next project is the right one. The cost of picking the wrong process first isn’t just the wasted money and time on that project. It’s the organizational resistance you create for every project after it.
The Application
If you’re thinking about automating anything on your team right now, here’s what I’d actually do.
First, stop talking about tools. Don’t look at software yet. Don’t compare vendors. Don’t watch demos. All of that comes later.
Instead, spend two days running a simple audit. Ask every person on your team to list their recurring tasks, estimate the time per instance, note how often they do it, and flag how often it produces errors someone has to fix. Compile it into a spreadsheet. Calculate the annual hours. Sort descending.
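If a spreadsheet feels too loose, the audit is also a few lines of code. This is a minimal sketch with made-up task entries (the names and numbers are illustrative, not from any real team):

```python
# Minimal sketch of the audit: annual hours per recurring task, sorted.
# Task entries below are illustrative placeholders.
tasks = [
    {"name": "proposal generation",   "minutes_each": 120, "times_per_week": 1, "people": 1},
    {"name": "data reconciliation",   "minutes_each": 240, "times_per_week": 1, "people": 3},
    {"name": "status report updates", "minutes_each": 30,  "times_per_week": 5, "people": 1},
]

for t in tasks:
    weekly_hours = t["minutes_each"] / 60 * t["times_per_week"] * t["people"]
    t["annual_hours"] = round(weekly_hours * 52)

# Sort descending: the top entries are your automation candidates.
for t in sorted(tasks, key=lambda t: t["annual_hours"], reverse=True):
    print(f'{t["name"]:24} {t["annual_hours"]:>5} h/yr')
```

With these placeholder numbers, data reconciliation lands at 624 hours a year and proposal generation at 104, the same six-to-one gap as the example earlier.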
The top 3 processes on that list are your automation candidates. Everything else can wait. And when you do start evaluating tools, you’ll be evaluating them against a specific process with specific requirements, not shopping for a solution to a problem you haven’t clearly defined.
Second, resist the urge to automate a broken process. If a workflow has unnecessary steps, redundant approvals, or handoffs that exist because “we’ve always done it that way,” fix those first. Automating a bad process just means it runs badly at machine speed.
Third, start small. The best first automation project is one that’s contained, measurable, and boring. If it works, you have a proof of concept and a real ROI number to justify the next one. If it doesn’t work, the blast radius is small. Either way, you learn something.
Fourth, measure after you ship. Most teams build the automation and then move on. Don’t. Track the actual hours saved for 30 days after launch. Compare to your audit numbers. Were you right? Were you off? By how much? This feedback loop is what separates teams that get better at automation over time from teams that keep guessing. If your first project saved 80% of the hours you predicted, your audit process is working. If it saved 20%, something in your measurement was off and you need to figure out what before you pick the next target.
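The post-launch check is one division. A sketch with hypothetical numbers (the 80% threshold is the rule of thumb from the paragraph above):

```python
# Post-launch check: how close was the audit estimate?
# Both figures are hypothetical; use your audit number and your
# measured 30-day savings.
predicted_hours_saved = 120
actual_hours_saved = 96

accuracy = actual_hours_saved / predicted_hours_saved
print(f"{accuracy:.0%} of predicted savings realized")  # 80%
```

Anything near 80% or above suggests the audit is trustworthy; far below that, fix the measurement before picking the next target.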
Finally, don’t automate everything. Some processes are better left manual. If a task requires judgment, changes every time, or involves sensitive client communication, a human should probably still do it. The point of the audit isn’t to automate your entire company. It’s to find the 3 to 5 processes where automation delivers the most hours back to your team, and focus there.
The teams that win at automation aren’t the ones with the best tools. They’re the ones who picked the right process first.
I put together a free audit template that walks through this exact process. Reply “AUDIT” and I’ll send it over.
PS: Curious what you’d find if you ran this audit on your team this week. The answer is almost always a process you forgot existed. If you do run one, reply and tell me what surfaced. I read every response.