
Why Most Companies Fail at AI (And the 30-Minute Fix)

April 6, 2026 · 8 min read · Dhruv Jain

Hormozi dropped a video last week called “How to Win With AI in 2026.” Most of the internet will skip it because the title sounds like every other AI video on YouTube. But buried about halfway through is a framework that I think changes how you should think about AI adoption entirely.

The core idea is to stop thinking about your organization in terms of roles and start thinking about it in terms of workflows. That sounds obvious when you say it out loud. It isn’t obvious when you watch how companies actually try to adopt AI, because almost all of them get it backwards.

The way most companies try to adopt AI

Here’s what typically happens. Someone in leadership reads an article, attends a conference, or watches a competitor announce an AI initiative. They come back motivated and they assign a team to explore it. The team researches platforms, compares features, evaluates vendors, and eventually picks something to pilot. They run the pilot on whatever problem someone volunteers, usually something low-stakes and visible.

Three months later the pilot technically succeeded. The tool works fine. But nothing in the actual workflow changed, and the company is no closer to getting real value from AI than it was before the project started.

This happens because the starting question was wrong. They asked “which AI tool should we use?” when they should have asked “which specific task in which specific workflow is costing us the most right now?” Those are very different questions and they lead to very different outcomes.

Decomposing roles into workflows

Hormozi’s framework is simple once you hear it. For every role in your organization, write down the four to eight tasks that person actually does. Not the job description and not the responsibilities listed on their LinkedIn profile. The actual tasks — the things they do with their hands and eyes and time during a normal work day.

Then look at each task individually and ask whether it could live inside a workflow rather than inside a person’s calendar.

His example is editing. “I need to hire an editor” becomes “what are the six tasks that produce a finished video?” Ingesting raw footage. Cutting a rough assembly. Color correction. Adding captions. Exporting in the right format. Uploading and writing the description.

Some of those tasks still need a human. The creative editorial decisions, the taste-based choices about pacing and tone — those require judgment. But at least three of those six are mechanical processes that follow predictable rules every single time. Those are workflow tasks, not judgment tasks.

When you look at a role this way, the picture changes completely. You’re not replacing the editor. You’re removing the mechanical parts of their job so they can spend all their time on the work that actually benefits from human judgment. The person stays. The tedious work goes away. And the total output goes up because the person isn’t spending half their week on tasks a workflow could handle.

This is how the best AI adoptions work in practice. Nobody loses a job. The job just becomes the better version of itself.

The 30-minute diagnostic

I’ve been testing a simple diagnostic process that works in about thirty minutes and consistently finds the highest-value automation opportunity in any workflow. It has five steps.

Watch someone do the work. This is the part most people skip, and it’s the most important part. Don’t ask them what they do — watch them. People describe their work in terms of outcomes and responsibilities because that’s how they think about it. But the inefficiency lives in the actual motions: the clicks, the tab-switching, the copy-pasting, the waiting for someone to respond to an email before the work can move forward.

Ask what breaks. Every workflow has a fragile point. A spreadsheet that someone has to update manually every time something changes. An approval that sits in someone’s inbox for two days because they’re busy with other things. A data transfer that requires exporting from one system and importing into another because the two systems don’t talk to each other.

Follow the approval chain. Most workflow delays aren’t caused by the work itself. They’re caused by the work sitting in a queue waiting for a human to look at it and move it forward. Map who touches the work between when it starts and when it’s done. Every handoff between people is a delay, and every delay has a cost.

Calculate the real cost. Take the number of people involved, multiply by the hours per week they spend on the task, multiply by 52 weeks, and multiply by their fully loaded hourly cost including overhead. This number is almost always larger than people expect it to be. A three-person compliance team spending four hours each on Friday reconciliation is 624 hours per year. At $75 an hour fully loaded, that’s about $47,000 annually on a single repetitive task.

Rank by dollar impact, not by who complains loudest. The most valuable automation target is rarely the one that generates the most complaints. It’s the quiet one — the process that consumes a lot of labor every week without anyone noticing because “that’s just how we’ve always done it.”

The whole thing takes about half an hour. No fancy tools, no consultants, no software evaluation. Just observation and arithmetic. The bottleneck is almost always obvious once you actually sit down and look at the work.
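The arithmetic in step four is simple enough to keep in a one-line function. A minimal sketch, using the compliance-team numbers from above:

```python
def annual_cost(people, hours_per_week, hourly_rate, weeks=52):
    """Fully loaded annual cost of a recurring manual task."""
    return people * hours_per_week * weeks * hourly_rate

# The compliance-team example: 3 people, 4 hours each on Friday
# reconciliation, $75/hour fully loaded.
hours = 3 * 4 * 52              # 624 hours per year
cost = annual_cost(3, 4, 75)    # 46,800 dollars per year
print(f"{hours} hours/year, ${cost:,}/year")
```

Run it for every workflow you mapped and step five (ranking by dollar impact) becomes a sorted list rather than a debate.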

Training AI like an employee, but faster

Once you’ve identified the bottleneck, the next mistake most companies make is treating AI like a vending machine. Put a prompt in, get an answer out, judge the answer. If the answer is bad, conclude that AI doesn’t work for this use case.

Hormozi’s reframe here is worth paying attention to. He says to treat AI the way you would treat a new hire who learns a hundred times faster than a normal person.

If you had a brand new employee and their first deliverable wasn’t great, you wouldn’t fire them on the spot. You’d give them examples of good work. You’d explain what the rules are. You’d review their output and give specific feedback about what to change. Then you’d repeat that loop until they consistently got it right.

The difference with AI is speed. With a human employee, a hundred feedback loops take twelve to eighteen months. With AI, they take about a hundred minutes.

His practical framework: give the AI twelve rules it can never break, sixteen examples of your actual work, and then iterate a hundred times. The output after a hundred iterations is dramatically different from the output after one.
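The shape of that loop is worth making explicit. This is a toy skeleton, not a real implementation: `generate` stands in for whatever model call you use, the single canned `review` note stands in for your own specific feedback, and the rules and examples are placeholders for your actual twelve and sixteen.

```python
# Toy skeleton of the rules + examples + iterate loop. Nothing here
# is a real API; `generate` is a stand-in for your model call.
RULES = [f"rule {i}" for i in range(12)]        # 12 rules it can never break
EXAMPLES = [f"example {i}" for i in range(16)]  # 16 samples of your real work

def generate(task, rules, examples, feedback):
    # A real call would send rules + examples + accumulated feedback
    # as context. Here we just tag the draft with how much feedback
    # it has absorbed, to show the structure of the loop.
    return f"{task} (trained on {len(feedback)} rounds of feedback)"

feedback = []
for round_num in range(100):                    # a hundred loops ≈ a hundred minutes
    draft = generate("weekly report", RULES, EXAMPLES, feedback)
    notes = "tighten the intro" if round_num < 99 else ""  # your review step
    if not notes:
        break                                   # output is consistently right
    feedback.append(notes)                      # feedback compounds each round

print(len(feedback))  # rounds of specific feedback accumulated
```

The point of the skeleton is the accumulation: each round’s feedback stays in context for every round after it, which is exactly what stopping after one attempt throws away.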

Most people stop after one. They try a generic prompt with zero context, get generic output, and quit. Then they tell their colleagues that AI isn’t ready for serious work. What they actually discovered is that untrained AI produces bad results, which shouldn’t surprise anyone.

The companies that are getting real value from AI right now are the ones that invested the training time. It’s a hundred minutes, not eighteen months. The ROI on that time investment is hard to beat.

The barbell strategy

Hormozi advocates what he calls a barbell strategy for thinking about AI, and I think it’s the right mental model for most businesses.

On one end: go all in. Automate every workflow you can. Have the uncomfortable conversations with your team about leveling up and using new tools. Be willing to restructure roles when the work genuinely doesn’t require a human anymore. Build with AI from the ground up wherever possible.

On the other end: make sure your bets are anchored to things that won’t change. People will still have bodies, so healthcare and fitness will matter. People will have more free time as work gets automated, so entertainment will grow. People will still need to eat. Those industries aren’t going anywhere regardless of what happens with AI.

The dangerous place to be is in the middle. “We’ll adopt AI eventually” is a losing strategy, because your competitor who adopted it six months ago is already running a leaner operation with better margins. Eventually is too late when the environment is moving this fast.

The price sensitivity window

One more idea from the video that I think is underappreciated, especially by anyone running a service business.

Customer price expectations adjust slowly. If people are used to paying $2,000 a month for a particular service, that number was set when delivering the service cost a certain amount in human labor. The cost of delivery has dropped dramatically for a lot of services because of AI, but customers haven’t updated their expectations yet.

That gap between what customers are willing to pay and what it actually costs to deliver is a massive margin opportunity for anyone who can see it. It won’t last forever — prices always adjust to reflect new cost structures eventually — but right now the window is open and it’s wide.

If you’re building a service business or rethinking your delivery model, understanding this dynamic and moving on it while the gap exists is probably the highest-ROI thing you can do this year.

What to do with this

If I were reading this newsletter on a Monday morning and wanted to take one concrete action, here’s what I’d do.

Pick one workflow in your business or on your team. Not a role, not a department — one specific workflow that takes someone more than two hours a week. Sit with that person and write down every individual step, not the category but the actual steps: “open spreadsheet A,” “copy column B,” “paste into system C,” “wait for manager to approve.”

Look at the list and circle the steps that follow predictable rules. Those are your automation candidates. Take the first one and start this week.
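That list-and-circle exercise fits in a few lines. A minimal sketch with hypothetical step names — the only thing that matters is separating rule-following steps from judgment calls:

```python
# Hypothetical workflow steps; "rule_based" marks steps that follow
# predictable rules (automation candidates) vs. judgment calls.
steps = [
    {"step": "open spreadsheet A",             "rule_based": True},
    {"step": "copy column B",                  "rule_based": True},
    {"step": "paste into system C",            "rule_based": True},
    {"step": "wait for manager to approve",    "rule_based": False},
    {"step": "decide which exceptions matter", "rule_based": False},
]

candidates = [s["step"] for s in steps if s["rule_based"]]
print(candidates)  # the steps worth automating first
```

Three of five steps in this made-up workflow are mechanical, which roughly matches the editing example earlier: the judgment steps stay with the person, the rest go into the workflow.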

The compound effect of automating one small task per week adds up fast over twelve months. Most people overestimate what they can do in a week and underestimate what they can do in a year with consistent small improvements.

I built a free framework for this exact process. It walks you through identifying, ranking, and automating workflow bottlenecks step by step. It’s called the SYSTEM Framework.

You can grab it here: dhruvjain08.gumroad.com/l/system-framework

And if you’d like someone to run the 30-minute diagnostic with you, reply to this email. I do a handful of these each month.
