What a Week With Claude Code Actually Looks Like
I've been using Claude Code to run my content pipeline for about two months now. Not as a novelty. Not as a side experiment. As the actual operating system for how content gets made, checked, scheduled, and published across LinkedIn, X, and Substack.
This week felt like a good one to pull back the curtain, because things went well and things broke in roughly equal measure.
Monday: The Setup That Saves the Week
Every Sunday night, I run a batch generation pipeline. Claude Code reads a content brief, pulls from a library of scraped viral posts, and generates seven LinkedIn posts with matching tweets and Substack notes. The whole batch lands in a JSON file. Monday morning, I review it.
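If you're curious what that batch actually contains, here's a simplified sketch of the review step. The file name and field names are illustrative, not my exact schema, but the shape of the thing is this:

```python
import json

# Simplified sketch of the Monday review pass.
# File name and field names are illustrative, not the real schema.
with open("batch_monday.json") as f:
    batch = json.load(f)

for post in batch["posts"]:
    print(post["day"], "-", post["format"])   # e.g. "monday - carousel"
    print(post["linkedin"][:200])             # skim the LinkedIn draft
    print("tweet:", post["tweet"])
    print("note:", post["substack_note"])
    print("-" * 40)
```

Seven entries, three platforms each, all in one place to skim before anything gets scheduled.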
This week, the batch came back clean on the first pass. That almost never happens. Usually there are two or three posts where the AI voice leaks through. You know the signs: sentences that end with "highlighting the importance of" or hooks that read like textbook chapter titles. I have a quality check script that catches most of it, but the subtle stuff still needs a human eye.
Monday's post was a Claude Code setup guide. Teaching post. Straightforward. The visual was a carousel, and generating it through Gamma took about 90 seconds. Golden-hour engagement after posting went fine: five comments in the first hour, which is roughly average for a Monday.
Wednesday: The One That Broke
Wednesday's opinion post about content-offer fit needed a text-only format. No visual. That should be the easiest day of the week.
Except the quality check flagged three banned phrases I hadn't noticed during review. One was "serves as" instead of just "is." Another was a sneaky "Not X, but Y" parallelism. The third was the word "crucial," which is on my hard ban list because it shows up in AI-generated text at something like 400x the rate of human writing.
Three minutes to fix. But without the automated check, those would have gone live. And small things like that are what make people scroll past your post thinking "this reads like ChatGPT wrote it" without being able to articulate why.
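The check itself is nothing fancy. Here's a stripped-down sketch of the idea, with a deliberately tiny phrase list (the real one is much longer):

```python
import re

# Illustrative subset of the ban list; the real list is longer.
HARD_BANS = ["crucial", "serves as", "highlighting the importance of"]
# Structural tells, like the "Not X, but Y" parallelism.
PATTERNS = [re.compile(r"\bnot\s+\w[\w\s]*,\s*but\s+\w", re.IGNORECASE)]

def quality_check(text: str) -> list[str]:
    """Return every banned phrase or pattern found in a draft."""
    flags = [phrase for phrase in HARD_BANS if phrase in text.lower()]
    flags += [m.group(0) for pattern in PATTERNS for m in pattern.finditer(text)]
    return flags

draft = "This framework serves as a crucial step. Not a tactic, but a system."
print(quality_check(draft))  # ['crucial', 'serves as', 'Not a tactic, but a']
```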
Thursday: When the System Pays Off
Thursday was a case study post about fast-moving teams. The post needed a carousel, and I had Claude Code generate a brief for Gamma, create the slides, export the PDF, and prepare the companion text. The whole flow from "I need Thursday's post" to "ready to publish" took about 12 minutes.
That used to be a two-hour process. Write the post, design the carousel manually, export it, write the companion text, format everything for three platforms. Now most of it runs through scripts that Claude Code calls in sequence.
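The sequencing isn't clever, either. It's a list of small scripts run one after another, and the run stops the moment one of them fails. The script names below are stand-ins, not my real files, but the pattern is the point:

```python
import subprocess

# Hypothetical step names; each script reads the previous step's output from
# disk, so the chain can be resumed from any point.
steps = [
    ["python", "write_post.py", "--day", "thursday"],
    ["python", "gamma_brief.py", "--post", "out/thursday_post.md"],
    ["python", "export_carousel.py", "--brief", "out/thursday_brief.md"],
    ["python", "companion_text.py", "--post", "out/thursday_post.md"],
]

for step in steps:
    subprocess.run(step, check=True)  # check=True stops the chain on the first failure
```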
The part that surprised me this week: the carousel had a typo on slide 4 that the quality check didn't catch because it only scans text content, not generated visuals. I caught it while doing a final scroll-through. Added "visual text review" to my pre-publish checklist. That's the self-annealing loop in action. Something breaks, you patch the system, and it doesn't break that way again.
Friday: Right Now
Today's post is a handraiser about AI prompts for business operations. The CTA invites people to drop a reply, and then I send the resource via DM. Every commenter enters the pipeline: resource delivered, diagnostic question 48 hours later, value follow-up after a week.
That pipeline used to be a spreadsheet I checked manually. Now it runs through a SQLite database that tracks every lead's stage. Claude Code handles the scanning, the DM drafting, and the follow-up scheduling. I still review every DM before it sends, because automated outreach without a human check is how you end up sounding like a bot.
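The tracking layer is about as boring as databases get. A stripped-down sketch, with illustrative table and column names rather than my exact schema:

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect("leads.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS leads (
        handle TEXT PRIMARY KEY,
        stage TEXT NOT NULL,       -- commented / resource_sent / diagnostic_sent / followed_up
        next_action_at TEXT        -- ISO timestamp for the next touch
    )
""")

def advance(handle: str, new_stage: str, wait_hours: int) -> None:
    """Move a lead to its next stage and schedule the next touch."""
    next_at = (datetime.now() + timedelta(hours=wait_hours)).isoformat()
    conn.execute(
        "UPDATE leads SET stage = ?, next_action_at = ? WHERE handle = ?",
        (new_stage, next_at, handle),
    )
    conn.commit()

# New commenter enters the pipeline; resource goes out, diagnostic in 48 hours.
conn.execute("INSERT OR IGNORE INTO leads (handle, stage) VALUES (?, ?)",
             ("@some_commenter", "commented"))
advance("@some_commenter", "resource_sent", wait_hours=48)
```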
What I've Learned So Far
The thing nobody tells you about using AI as an operations layer is that the AI is maybe 30% of the work. The other 70% is writing down your own processes clearly enough that a machine can follow them.
I have a directives folder with markdown files that describe every workflow. Content generation, quality checks, posting schedules, DM sequences, engagement routines. Claude Code reads those files and follows them. When something goes wrong, I update the directive, not the AI.
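To make that concrete, here's a toy version of what one of those directive files might look like. This one is invented for illustration and much shorter than the real ones:

```
# Directive: Monday teaching post

1. Read this week's content brief.
2. Draft the post using the teaching-post template.
3. Run the quality check and fix every flag before showing me the draft.
4. If the post needs a visual, generate a Gamma carousel brief.
5. Never schedule or publish anything without explicit approval.
```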
That mental shift matters. You stop thinking "how do I prompt this better" and start thinking "how do I describe what I actually do." The second question is harder and more useful.
One Honest Admission
It's not all smooth. About once a week, something fails in a way that takes 30 minutes to debug. A script times out. An API changes its response format. The browser automation loses its session. These aren't glamorous problems, and they don't make good LinkedIn posts. But they're real.
The system is net positive by a wide margin. I'm publishing across three platforms daily with consistent quality, and the time investment is maybe 90 minutes a day including engagement. Before this setup, content alone ate 3-4 hours.
But if anyone tells you AI operations are "set and forget," they're selling something.
I put together a Claude Code cheat sheet with the commands and patterns I use most. If that's useful to you, reply to this email and I'll send it over.
Or just reply and tell me what you'd automate first if you had a system like this. Genuinely curious.