By Ryan McBride

From RPI to CRISPY: What We Get Wrong About Coding Agents

For the last couple of years, the "Research-Plan-Implement" (RPI) framework was the gold standard for using AI coding agents. The idea was simple: let the AI look at the code, write a plan, and then go build it.

But it’s 2026 now, and we have to be honest: RPI was flawed.

If you’ve ever used an agent, only to spend the next three days cleaning up "AI slop," you know the feeling. We thought we could outsource the thinking to the model. We were wrong. Here is the new reality of AI engineering.

1. The "No Slop" Mandate

In the early days, we told people "Don't bother reading the code, just trust the agent." That was a mistake.

If you are shipping production code and you’re the one who gets paged at 3:00 a.m. when it breaks, you must read the code. 10x speed is useless if you have to rip it all out and replace it in six months. The goal shouldn’t be 10x speed; it should be 2x or 3x speed with human-level craft.

2. Respect the "Instruction Budget"

We used to build these "Mega-Prompts"—monolithic files with 85+ instructions telling the AI exactly how to behave.

The problem? LLMs have an instruction budget. Once you pass about 150–200 instructions, the model enters the "Dumb Zone." It starts "half-attending" to everything, skipping steps, and ignoring your edge cases.
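The budget can be made concrete by counting instructions before a prompt is ever dispatched. A rough sketch (the bullet-counting heuristic is an illustrative assumption, not a precise measure; the 150 ceiling comes from the range above):

```python
# Rough sketch: estimate a prompt's instruction count before dispatching it.
# Counting bullets and numbered lines is a crude stand-in for a real audit.
DUMB_ZONE_THRESHOLD = 150

def count_instructions(prompt: str) -> int:
    """Count lines that look like discrete instructions (bullets or numbered)."""
    count = 0
    for line in prompt.splitlines():
        stripped = line.strip()
        if stripped.startswith(("-", "*")) or stripped.split(".")[0].isdigit():
            count += 1
    return count

def in_dumb_zone(prompt: str) -> bool:
    return count_instructions(prompt) >= DUMB_ZONE_THRESHOLD
```

A check like this can run in CI against your prompt files, failing the build before a mega-prompt ships.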

The fix: Stop using prompts for control flow. Use code for control flow. Break one giant prompt into five small, focused agents that each have fewer than 40 instructions.
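What "code for control flow" means in practice can be sketched like this (the agent roles and the `run_agent` stub are hypothetical stand-ins for whatever LLM call your stack provides):

```python
# Hypothetical sketch: the sequencing lives in Python, not in one mega-prompt.
# Each agent gets a short, focused prompt, well under the ~40-instruction cap.

FOCUSED_AGENTS = [
    ("research", "Report how the relevant code works today. Facts only."),
    ("design", "Propose a design for the change. Flag open questions."),
    ("implement", "Write the code for the approved design."),
]

def run_agent(role: str, prompt: str, context: str) -> str:
    """Stub standing in for a real LLM call in your stack."""
    return f"{role} output given {len(context)} chars of context"

def pipeline(task: str) -> str:
    # Code, not the model, decides what runs and in what order.
    artifact = task
    for role, prompt in FOCUSED_AGENTS:
        artifact = run_agent(role, prompt, artifact)
    return artifact
```

Because the loop is ordinary code, you can log, retry, or halt between agents instead of hoping the model follows instruction #73.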

3. Move from Horizontal to Vertical Planning

Most AI agents love horizontal planning. They want to do all the database migrations, then all the API endpoints, then the frontend. You end up with 1,200 lines of code and no way to test if any of it actually works until the very end.

We need vertical planning. Build one small slice—one endpoint, one logic flow, one UI element—and test it. Then move to the next. It’s the same amount of code, but you catch the bugs before they compound.
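One way to encode a vertical plan as data (the slice names, steps, and test names below are made up for illustration):

```python
# Illustrative sketch: each vertical slice bundles its steps with the test
# that must pass before the next slice begins.
from dataclasses import dataclass

@dataclass
class Slice:
    name: str
    steps: list[str]   # e.g. one migration, one endpoint, one UI element
    test: str          # the gate: run this before moving on

# Two thin, end-to-end slices instead of layer-by-layer work.
plan = [
    Slice("create-invoice",
          ["migration", "POST /invoices", "invoice form"],
          "test_create_invoice"),
    Slice("list-invoices",
          ["GET /invoices", "invoice table"],
          "test_list_invoices"),
]
```

The point of the structure is the `test` field: a slice without a passing gate never unlocks the next one, so bugs surface before they compound.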

4. Introducing the CRISPY Workflow

Because RPI was too broad, we’ve moved to CRISPY. It’s a seven-stage process designed for human-agent alignment:

  1. Questions: The agent asks you what it doesn't know before it starts.

  2. Research: Objective facts only. No opinions, just how the code works today.

  3. Design: A 200-line markdown doc. This is where you perform "brain surgery" on the agent's logic. If the agent has a bad idea, kill it here, not in the PR.

  4. Structure: The "C Header" version of the task. Signatures and types only.

  5. Plan: The tactical step-by-step for the agent.

  6. Work Tree: Breaking the plan into manageable, vertical chunks.

  7. Implement: The actual coding.
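
The seven stages above can be sketched as a gated pipeline. The choice of checkpoints here (human sign-off at Design and Structure) is an assumption drawn from the emphasis above, and `agent`/`approve` are stand-in callables:

```python
# Hypothetical sketch of CRISPY as code: each stage transforms an artifact,
# and a human must approve at the high-leverage gates before work continues.
STAGES = ["questions", "research", "design", "structure",
          "plan", "work_tree", "implement"]
CHECKPOINTS = {"design", "structure"}  # assumption: where alignment matters most

def run_crispy(task, agent, approve):
    artifact = task
    for stage in STAGES:
        artifact = agent(stage, artifact)
        if stage in CHECKPOINTS and not approve(stage, artifact):
            # Kill bad ideas here, not in the PR.
            raise RuntimeError(f"human rejected the {stage} artifact")
    return artifact
```

Note that the human gates sit before any code is written: by the time `implement` runs, the interesting decisions are already settled.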

5. Leverage vs. Outsourcing

The biggest takeaway from the last year of agent development is this: Do not outsource the thinking.

The agent is there to provide leverage. Leverage is about doing less work to get more output. Reviewing a 200-line Design Doc is high leverage; reviewing a 1,000-line AI-generated "Plan" is just a chore that people eventually stop doing.

By the time the agent actually writes the code, you should already know exactly what it’s going to do because you aligned on the Design and Structure first.

The Bottom Line

Stop looking for the "magic prompt." It doesn't exist. Instead, focus on context engineering: smaller windows, fewer instructions, and mandatory human-in-the-loop checkpoints.

Let’s make 2026 the year we stop shipping slop and start building with craft again—even if an agent is holding the keyboard.