⚡ Powered by Finn · Day 49 of 365

Three Smart Execs, One Vibe Coder, and How to Hire a Fractional AI Lead

Three replies landed in the same email thread this morning, and three versions of the same question underneath them: how to hire a fractional AI lead. None of them said the phrase out loud. All three of them needed the answer.

The thread had started with one of my client's founders (a domain expert in financial research and investing) sending through what he'd built over the weekend. Smart guy. He'd vibe coded the first internal AI tool for the company on his laptop in a Python sandbox. No GitHub, no tests, no multi-tenant setup, no proper hosting. Three senior leaders on the thread replied separately within an hour. Each reply said a version of the same thing: this is harder than we thought, and the average knowledge worker has no chance of doing what he's doing.

I read the replies twice, then looked at the code. We're going to redo most of it. He knows. I know. And I'm letting him keep building.

Why I'm letting the vibe coder cook

This person is the closest to the business problem. He understands the workflow he's automating better than I do and better than any outside developer would. If I stopped him now and took over the project myself, I could deliver a well-tested thing that solves the wrong problem. He knows what he needs, and what the company needs, far better than I do. The best thing I can do is get out of his way and let him build the proof of concept, so we can see what they actually want built.

The cheapest way to find out what v2 should do is to let him ship a working v1 in his sandbox. The cost of rebuilding the working bits in a proper environment after the fact is about a week. The cost of misunderstanding what v1 should have been could be months. I lived the months version a year ago on my own KPI dashboard and I'm not doing it again.

What gets thrown away when we rebuild it

Local-only Python on one laptop. No source control, so the next developer can't read what he's done. No multi-tenant data isolation, which matters as soon as more than one human is reading the output. No environment parity between his machine and any production target. No audit trail, no rollback, no tests.

We'll rebuild on GitHub with separate dev and prod databases, write the tests as we go, and run it in a managed environment with proper tenancy and the right Microsoft 365 controls around it. None of that is exotic. All of it is invisible from the outside if you've never built software for more than one user. But the most important part of this exercise is that he is building the report, our proof of concept, exactly to his liking, something that would otherwise take the two of us weeks to specify together. We'll throw out most of the code he's writing; the report it produces is the one thing we keep.
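To make "separate dev and prod databases" concrete: here's a minimal sketch of environment-driven config, the kind of thing the rebuild would include. Every name and connection string below is an illustrative assumption, not the client's actual setup:

```python
import os

# Hypothetical connection strings: the environment decides which database
# the app talks to, so a laptop run can never touch production data.
DATABASES = {
    "dev": "postgresql://localhost:5432/app_dev",
    "prod": "postgresql://db.internal:5432/app_prod",
}

def database_url(env=None):
    """Resolve the database URL for the current environment.

    Defaults to dev when nothing is set, so a missing variable
    fails safe toward the sandbox, never toward prod.
    """
    env = env or os.environ.get("APP_ENV", "dev")
    if env not in DATABASES:
        raise ValueError(f"Unknown environment: {env!r}")
    return DATABASES[env]
```

Ten lines, and it's exactly the kind of invisible plumbing a weekend laptop build skips: the sandbox version has one database, and it's whichever one is in front of you.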

The tricky part now is telling him, politely, that most of what he's building can't be used as-is. That's not an email. It needs a call, where we make a plan for taking what he's built into a proper development environment.

The thin harness, fat skills idea is the same one I keep yammering on about. The prompt is not that difficult, despite what these three clearly competent execs declared. The production-grade plumbing around it is.

Three brilliant operators, none of them would touch this

Back to the three replies. Not slow people. Two decades each in their industry. Sharper than me in every dimension where domain knowledge matters. None of them would dream of doing the complicated setup to get things right. Not because they couldn't learn it. Because they correctly read that the gap between "I built a thing on my laptop" and "we run this for the company" is about twelve disciplines wide, and most of those disciplines are invisible from the outside.

Do I need a Chief AI Officer? (Or just a fractional one?)

Most C-suites I talk to ask this question wrong. They look at the AI category and assume the answer is a senior full-time hire. A senior AI executive in 2026 costs $250K to $400K loaded. You probably need them ten to twenty hours per week. You'll also pay for the next twelve months of them learning your business while drawing the full salary, because the only person who knows your operation is sitting in your operation.

A fractional Chief AI Officer is the same brain on a custom retainer in your currency, four to six hours a day, four days a week, with the option to scale up or down inside thirty days. The retainer is custom because every operation is different. I keep a fixed $4,995/month productised tier for businesses that want a single channel for one task at a time. Either way the math beats a full-time hire until you're past about $30K of monthly AI-relevant work, and most operations aren't there.
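The break-even claim is back-of-envelope arithmetic, and it's worth seeing on paper. The figures below come from the post; the midpoint choice is my own simplification:

```python
# Back-of-envelope: full-time AI exec vs fractional retainer.
# Salary range from the post; the midpoint is an illustrative assumption.

full_time_loaded_yearly = 325_000              # midpoint of $250K-$400K loaded
full_time_monthly = full_time_loaded_yearly / 12   # roughly $27K/month

fractional_retainer_monthly = 4_995            # fixed productised tier

def full_time_pays_off(monthly_ai_work_value):
    """A full-time hire only pencils out once the monthly value of
    AI-relevant work exceeds what the hire costs you every month."""
    return monthly_ai_work_value > full_time_monthly

print(round(full_time_monthly))        # ~27083: the real monthly cost of "hire someone"
print(full_time_pays_off(10_000))      # False: fractional wins by ~5x
print(full_time_pays_off(30_000))      # True: full-time starts to make sense
```

That's where the "about $30K" threshold comes from: until the AI work is worth more per month than the full-time salary costs per month, you're paying executive rates for idle hours.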

What a fractional AI lead actually does

Picks the right tool out of a moving market that resets every quarter.

Decides which parts of a build belong on a laptop and which need a proper environment.

Runs the human who is doing the building, whether that human is a staff engineer, a vibe coder, a contractor, or one of the founders willing to try and learn the basics.

Reads the difference between "we got a demo working" and "we shipped."

Sets the cadence between exploration and production so neither one starves the other.

Translates what the engineer says back to the CEO in a sentence the CEO can act on.

Holds the line on security, audit, and what data goes where, because the engineer is not going to.

Knows when to call in a 9-out-of-10 specialist for a six-week piece of work, and how to manage them when they arrive.

I'm a 5 out of 10 in fifteen areas of business. I'm a 9 out of 10 at stitching the eight competencies above into one rhythm a client can buy a few hours of, four days a week. I called the role fractional Chief AI Officer last month on the pillar page. The search term most C-suite teams actually type into Google is "how to hire a fractional AI lead."

How to hire one without getting taken

Someone with twenty years of running real businesses, not twenty years of advising them. Someone who has run payroll, written invoices that didn't get paid, and made a hiring decision they had to live with.

Someone who can read code well enough to scope it but isn't pretending to be the senior engineer. The pretending part is where most AI consultants fall over. They were marketers who learned to talk about AI, not operators who learned to hire and manage engineers.

Someone current on the model landscape this quarter, not a year ago, because the right answer changes that fast. If they were the right answer in 2024 and haven't shipped anything new since, they're stale.

A direct communicator. Allergic to jargon. Will tell you the part of your idea that doesn't work before they tell you the part that does. Won't write a deck unless you ask for one.

Someone who has put their work on public record. A daily build log. A pillar page with real numbers. A track record you can read in their own voice. If you can't find anything they've built and shown the world, you can't tell the difference between a practitioner and a packaged consultant.

The three replies in my inbox this morning are the proof of why this role exists. Three operators with more business experience between them than most boards, looking at one weekend of vibe coding and correctly naming it as a problem they can't personally manage.

You don't need a full-time hire. You need a fractional brain who has done this before, can scope what you actually need, and can manage the people inside and outside your business who are going to build it.

The proceeds from these engagements seed the Finn Wardman World Explorer Fund.

Monthly Revenues $11,800 | Clients 2 | Prospects 1 | Team: Me + Jan (CTO)

Day 49 of 365.



See if this is the right fit.

15 minutes. No pitch deck. No pressure. Just a conversation about what's eating your time.

Schedule a call