The three-role model: a better way to build software with AI

There is a failure mode that almost everyone hits when they start building seriously with AI coding assistants. It goes like this: you have an idea, you describe it, the AI starts writing code, you react to the code, the AI adjusts, you react again, and somewhere around the third or fourth iteration you realise the thing being built has drifted significantly from what you actually wanted. The code is fine. The direction is wrong. And because the AI is responsive and capable, it just keeps going — confidently, productively, in the wrong direction.

The root cause is a role confusion that nobody talks about. When you use an AI coding assistant directly, you are simultaneously the person with the vision, the person making technical decisions, and the person evaluating the output. That is three jobs. Doing all three at once while also steering a conversation means none of them get the attention they deserve. The vision gets compressed into prompts. The technical decisions get made reactively. The evaluation happens too late.

The fix is to separate the roles explicitly, even when one human is playing two of them.


The architect

The architect’s job is to hold the full picture and never let it blur. Before any code is written, the architect asks the hard questions: what problem does this actually solve, what are the edge cases, what does this mean for everything else in the system, what are we not building and why. The architect writes specifications that are complete enough that a capable developer could implement them without making any strategic decisions. Ambiguity in the spec becomes a bug later — the architect’s job is to eliminate it upstream.
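One way to make “complete enough” concrete is to treat the spec as structured data with required fields. The sketch below is hypothetical, not a prescribed format; the field names and the `ready_for_developer` check are illustrative assumptions. The point it captures is that a spec is not finished while open questions remain.

```python
from dataclasses import dataclass, field

@dataclass
class Spec:
    """A hypothetical spec shape; field names are illustrative assumptions."""
    problem: str                # what this actually solves
    behaviour: list[str]        # observable behaviour, stated at the level of intent
    edge_cases: list[str]       # enumerated explicitly, not "handle errors sensibly"
    out_of_scope: list[str]     # what we are NOT building, and why
    open_questions: list[str] = field(default_factory=list)

    def ready_for_developer(self) -> bool:
        # Ambiguity in the spec becomes a bug later: block the handoff
        # until every open question has been answered upstream.
        return not self.open_questions
```

Whether the spec lives in code, a document, or a ticket matters less than the gate itself: handoff is blocked until the ambiguity list is empty.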

Crucially, the architect does not write code and does not react to code. Reacting to code is how drift starts. The architect’s mental model stays at the level of behaviour and intent, not implementation.

In a human team this role belongs to a senior engineer or technical lead. In an AI-assisted workflow it can belong to an AI — but only if the AI is given the full context of the system, the freedom to ask questions and push back, and the explicit mandate to hold the line on design decisions. An architect AI that just says yes to everything the owner suggests is useless. The value is in the friction.


The developer

The developer executes. That is the entire job description. Given a complete, unambiguous specification, the developer writes the code, runs the tests, reports back on what was done and whether anything unexpected came up. The developer does not make strategic decisions. If the spec says “do X”, the developer does X and reports. If the spec is unclear, the developer flags it rather than guessing.
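The “flag it rather than guessing” rule can be baked into the shape of the developer’s report. A minimal sketch, assuming a report with fields for surprises and flagged ambiguities (the names are illustrative, not from any particular tool):

```python
from dataclasses import dataclass, field

@dataclass
class DeveloperReport:
    """Hypothetical report shape for a pure-execution developer role."""
    task: str        # exactly what the spec asked for
    completed: bool  # whether it was done
    unexpected: list[str] = field(default_factory=list)   # anything surprising that came up
    ambiguities: list[str] = field(default_factory=list)  # unclear spec points: flagged, never guessed at

    def needs_architect(self) -> bool:
        # Any flagged ambiguity routes the work back through the spec
        # process instead of being resolved by a guess.
        return bool(self.ambiguities)
```

The useful property is that a guess has nowhere to hide: either the work matches the spec, or the mismatch is on the record.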

This is exactly what a capable AI coding assistant is genuinely excellent at. The models available today can implement complex, multi-file changes reliably when the instructions are precise. What they are not good at — and should not be asked to do — is deciding what to build. That is not a limitation to work around. It is a role boundary to enforce.

The separation has a practical benefit beyond just quality. When the developer’s job is purely execution, the audit trail is clean. You know exactly what was asked and exactly what was done. When the developer is also making design calls, you end up with a codebase full of implicit decisions that nobody explicitly made and nobody fully understands.


The owner

The owner is the human. The owner has the context that neither the architect nor the developer can fully possess: the real-world use case, the personal preference, the lived experience of using the thing that is being built. The owner is also the final decision-maker on anything strategic — not because the owner is always right, but because it is the owner’s system and the owner’s consequences.

The owner’s job in this model is to relay, decide, and provide context. When the architect raises a design question, the owner answers it. When the developer reports back, the owner relays the result to the architect. When a decision point emerges — a fork in the road where two reasonable approaches exist — the owner makes the call.

What the owner explicitly does not do is override the architect on technical grounds without a reason, or task the developer directly without going through the spec process. Those shortcuts are where drift lives.


The backlog as shared memory

The three roles only work at scale if there is a persistent record that all three can access. A conversation history is not enough — it is linear, it degrades over time, and it disappears when the session ends. What you need is a backlog: a live list of what has been decided, what has been built, what is planned, and what has been deliberately deferred.

The backlog serves a second function that is easy to underestimate. It forces decisions to be made explicitly. When you have to write down “we are not building X yet, and the reason is Y”, you have to actually have the conversation about X. Things that drift in through the back door — scope additions, quiet assumption changes, features that accumulate without being designed — cannot survive a well-maintained backlog. Everything that exists has to have been decided.
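A backlog that enforces this can be as simple as requiring a recorded reason on every entry, deferred or not. A minimal sketch, assuming a four-state lifecycle; the state names and fields are assumptions for illustration:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    DECIDED = "decided"    # agreed but not yet built
    BUILT = "built"        # implemented and reported back
    PLANNED = "planned"    # queued for a future session
    DEFERRED = "deferred"  # explicitly not being built yet

@dataclass
class BacklogItem:
    title: str
    status: Status
    reason: str  # "we are not building X yet, and the reason is Y" -- required for every entry

    def __post_init__(self):
        # Nothing exists in the backlog without having been decided:
        # an empty reason means the conversation never happened.
        if not self.reason.strip():
            raise ValueError(f"backlog item {self.title!r} has no recorded reason")
```

The validation is the whole trick: making the reason a required field turns “we should probably discuss X at some point” into a forced conversation.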

At session boundaries the architect reviews the backlog, records what was completed, and queues what comes next. The next session begins not from a conversation summary but from a structured state. This is how you maintain coherence across weeks of work without losing the thread.
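The session-boundary review can itself be mechanical. A sketch over plain dictionaries, assuming each item carries a `title` and a `status` string (the keys and status values are assumptions, matching no particular tool):

```python
def close_session(backlog: list[dict]) -> dict:
    """Summarise backlog state at a session boundary.

    Hypothetical sketch: each item is a dict with 'title' and 'status'
    ('built', 'planned', or 'deferred'); the shape is an assumption.
    """
    completed = [i["title"] for i in backlog if i["status"] == "built"]
    queued = [i["title"] for i in backlog if i["status"] == "planned"]
    deferred = [i["title"] for i in backlog if i["status"] == "deferred"]
    # The next session starts from this structured state, not a chat summary.
    return {"completed": completed, "next_up": queued, "deferred": deferred}
```

The output is the structured state the next session begins from: what was finished, what comes next, and what was deliberately set aside.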


What this model changes

The honest answer is that this model slows down the first few minutes of any session. There is more ceremony than just opening a chat and starting to type. The architect has to think before speccing. The owner has to make decisions rather than just react. The backlog has to be maintained.

What it buys you is a system that stays coherent. The individual pieces continue to make sense together even as the codebase grows. Technical debt that gets introduced does so deliberately, with a note, not accidentally because a prompt was ambiguous. The owner can step away for a week and come back to a backlog that accurately reflects the state of the world rather than having to reconstruct it from a chat history.

The deeper thing it buys you is confidence. Not confidence that every decision was right — some will turn out to be wrong and that is fine — but confidence that every decision was actually made, by the right person, with enough information. That is rarer than it sounds, and it is worth the ceremony.