I asked the client a simple question over Trello yesterday afternoon. We had been ratcheting up accuracy on a manual task, week over week, from 44 percent in the first pass to about 90 percent in the third. The AI workflow was clean. The output looked right.
So the question I sent through the Trello card was: Looks like we are good now. Is the task done?
The reply came back: Yes, this part is good. But what about the other part I am doing manually in this other application?
I had no idea what she was doing in the other application. I had presumed the application was doing the work there, not her.
AI workflows for operations are everyone's pitch right now, mine included. The pitch falls apart the moment you discover that the workflow you mapped is one of two parallel workflows, that the second one lives in software you have never seen, and that the hours you just saved your client are not the hours they actually wanted back. That single Trello reply blew apart an assumption I had been quietly making for three weeks.
This is the trap I keep hearing about, and now I have walked into it myself. AI fractional people are easy to find on LinkedIn at the moment, and we all sound the same. We talk about Opus and Sonnet, claim PhD-level reasoning at a fraction of the salary, and namedrop the Silicon Valley CTOs who say they have not written a line of code in months. A friend of mine who is an actual CTO told me last week that even he has stopped writing code, and that someone like me and someone like him are nearly on a level playing field when it comes to shipping product ideas. Take that quote at face value and the AI generalist looks like a powerful new role.
The role is real. What gets missed is the assumption underneath it: that the AI generalist understands what the human is actually doing. Not the part of the work that fits in a Trello card. The whole shape of the day. The list of applications open in the background. The spreadsheet that gets a second copy paste before the email goes out. The second piece of software that nobody mentioned because they assumed I knew. The process behind the process.
I had been resisting an on-site for the last week. Someone close to the engagement floated the idea that I just go there and fix it in person. The problem with an on-site is the sunk cost. Book three days of travel, get there, find out the missing piece was a five-minute Loom of someone moving their hands across a keyboard, and I have two empty days in a city I did not particularly want to fly to. The on-site I will keep for batching, not for missing-piece-of-the-puzzle hunts.
What I had to give up was something more annoying. I have been running a strict version of the Brett Williams Designjoy comms rule since I started this practice: client comms through Trello and Loom, async by default, no calendar invasions, no useless meetings, no Slack channel I am pinned to all day. I love that rule. I have written about it in posts and on the homepage. I have used it to claw back a working day that looks like a life.
The rule fits design. A client sends a brief, the designer sends a comp, the brief and the comp are the workflow. There is nothing hidden behind the brief. AI ops is the opposite. The brief is a sketch and the actual workflow lives in the operator's hands and across applications I have never seen. If I default to async only, I get a sketch of a workflow and an AI fix for a sketch.
So I broke the rule yesterday afternoon, on the same Trello card the misunderstanding lived in. I gave my personal cell. Told her to call when something got fiddly, no calendar block needed, no agenda. I booked a screen-share working session for later this week so I can sit with her while she does the actual job and watch the actual job, not the version of it that fits inside a Trello message. Same engagement that produced the Madrid-to-Geneva personal-AI-assistant spec on the plane home a couple of weeks back. Different surface, same logic: get closer to the human work, not further from it.
The deeper thing I am working out is what the lane is. In three years the AI tool stack is going to be a commodity. Everyone in a knowledge job will know how to wield it. The "AI expert" badge will mean about as much as "I can use Excel" did in 2010. What is left is two things. Creative problem solving, and being the bridge between the way the human actually works and the way the AI can be made to work. The bridge is built out of conversations. Recorded calls, screen shares, the dumb question I should have asked three weeks ago. Trello is a poor surface for it.
I went back to the kickoff transcript from day one of this engagement this morning. The three-stage workflow was there in the client's own words. My own Phase 1 deliverable from a week into the project had named a Phase 2 and a Phase 3 explicitly. The path was scoped. I had built Phase 1, called it the project, and forgotten my own roadmap.
The vault is the other half of the bridge. Every kickoff transcript, every phase note, every off-hand dream scenario the client described in week one, indexed in a folder I can grep in twenty seconds. AI is only as good as the retrieval. If the day-one notes are not in the AI's context window, the AI will confidently cosign whatever subset of the workflow I happened to be staring at this week. Three weeks of pointing it at the wrong subset is what that looks like in practice.
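The twenty-second grep is nothing exotic. A minimal sketch of what I mean, assuming the vault is just a folder of markdown transcripts and notes (the folder layout and the `search_vault` function are mine for illustration, not a real tool):

```python
from pathlib import Path

def search_vault(vault_dir: str, term: str) -> list[tuple[str, str]]:
    """Return (filename, matching line) pairs for every
    case-insensitive hit of term across the vault's .md files."""
    hits = []
    for path in sorted(Path(vault_dir).rglob("*.md")):
        for line in path.read_text(encoding="utf-8").splitlines():
            if term.lower() in line.lower():
                hits.append((path.name, line.strip()))
    return hits
```

Before a working session, run it for "Phase 2" or whatever the client called the other half of the job, and paste the hits into the model's context instead of trusting whatever subset I remember.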
What happened here was that I was not following my own rule. Codify the rule and close the loop. I had the workflow codified, but I had not closed the loop, and the persistent state memory I brag about was broken. I look like a fool, though only to myself at this point, because I can recover quickly, and I will, once this human-to-human exchange gets me the details. In this case, the devil really is in the details.
I might still be wrong about the second process. It might turn out the missing piece is small enough that one Loom and an API key close it. It might turn out the second application has no API and we have to build a small middleware layer to plug the gap. I will know more by the end of today. The lesson is the same either way. Map the actual workflow first, both halves of it. Build the AI fix second. One question I should have asked on day one would have spared three weeks of partial mapping.
The rule, then, is async by default, voice on demand. Designjoy fits design. AI ops needs a small carve-out for the human bridge.
Monthly Revenues: $11,800 | Clients: 2 | Prospects: 2 | Employees: me
Day 36 of 365.