April 21, 2026 | 5 min read

You’re Vibe Coding Wrong

I don’t mean your prompts are bad or that you’re using the wrong model. Your entire workflow is fundamentally broken. Let me reframe how you vibe code.

Here’s what most people do: they open a session, start prompting, hit a wall, and then spend the next hour course-correcting, begging the model to do better with “try again” or “that works but this other thing broke”. Then they realize the model’s just getting dumber, taking more time, and vibe coding doesn’t feel that fun anymore. Frustrated, they blame the model and rage quit.

The model isn’t the problem. You’re just approaching vibe coding wrong. Vibe coding isn’t building, it’s search.

The game has changed

Before, coding was expensive. Building even a simple demo took hours, so you planned carefully, designed upfront, and made every line of code count. Progress was linear and deliberate. You knew where you were going because you had to.

Now, coding is cheap. The cost of throwing away an attempt and starting over is basically zero. A failed attempt isn’t a setback, it’s information. And that changes the game, because when attempts are cheap, you don’t need to know where you’re going before you start. You just need a direction.

That’s why vibe coding is search. You have a vague sense of what you want. You can’t fully describe the destination, but you’ll recognize it when you see it. Each attempt teaches you something — not just about the code, but about what you actually wanted in the first place. Your intent and the execution get clearer at the same time as you explore the solution space.

But there’s a catch: the longer your back-and-forth with the model, the worse it performs. This is called context rot: model performance degrades as the context grows, because attention gets spread across too many tokens. For Claude’s 1M context window, this kicks in at around 300k tokens.

So if vibe coding is search, most people are searching wrong. They’re exploring one long path, course-correcting as they go, while the model gets dumber with every turn. The fix is simple: close the session, start a new one, and take another shot with everything you just learned.


Sound familiar?

If you’ve dabbled in reinforcement learning, this should look familiar. You’re the agent and there’s a reward function you can’t articulate yet. Each rollout is a sample from the policy, and what comes back reshapes your understanding of what you’re even optimizing for.

In other words, it’s not just that you don’t know the path (how do I implement it). You don’t fully know the destination either (how should it even look). You figure it out by seeing what you get. Each attempt not only moves you closer to the goal, but also sheds light on what the goal actually is. Your intent and the execution converge at the same time.

The optimization loop isn’t your conversation. It’s rollout, see the result, rollout again. Each one takes mere minutes and each one starts fresh. Do it fast and often and you’ll get to your goal.
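To make the analogy concrete, here’s a toy sketch of that loop in Python. Everything in it is made up for illustration (the `rollout` function just simulates result quality with noise around how clear your prompt is); the point is the shape: fresh context every attempt, and the prompt sharpening between attempts instead of within one.

```python
import random

def rollout(prompt):
    """Stand-in for one fresh session: prompt in, attempt out.
    Hypothetical -- models result quality as prompt clarity plus noise."""
    return min(1.0, prompt["clarity"] + random.uniform(-0.2, 0.3))

def vibe_code(max_rollouts=10, good_enough=0.9):
    prompt = {"clarity": 0.3}   # vague initial intent
    best = 0.0
    for i in range(max_rollouts):
        score = rollout(prompt)          # fresh context every time
        best = max(best, score)
        if best >= good_enough:          # "you'll recognize it when you see it"
            return i + 1, best
        # each attempt teaches you something; the next prompt is clearer
        prompt["clarity"] = min(1.0, prompt["clarity"] + 0.1)
    return max_rollouts, best
```

The thing to notice is what’s *not* in the loop: there’s no accumulating conversation. State lives in the prompt you carry forward, not in the session.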

What this actually looks like

A few months ago, I was building an iPad app that uses biomechanical analysis to improve your volleyball approach. It took eleven Claude Code sessions.

The first two were pure discovery. I talked with the model about what the app should look like, what it should do, and whether it was even possible. There was no code and no architecture. By the end I had a rough map of the problem space.

The next three were planning. Each session I described what I wanted with a bit more specificity and let the model push back or fill in any gaps. Each rollout uncovered things I hadn’t considered. After the third, I had a good sense of the architecture and the technical risks.

By the time I started generating code, each session was close to one-shot with a few follow-ups. When I hit errors, I’d give the model one or two chances to correct itself. If it couldn’t, I stopped. I’d feed the errors into the next session with a fresh context, and the model would fix them flawlessly on the first try.

A clean one-shot is a signal that your understanding of the problem has converged enough that the model can execute it in one pass. As a rough heuristic, if I’m going back and forth more than five times, I scrap the session and start a new one with my learnings.

Takeaway

The hardest part of this workflow is closing the session. You’ve been going back and forth for twenty minutes, the model has context on everything you’ve tried, and starting over feels like throwing away progress. That’s the sunk cost fallacy. That context isn’t helping you; it’s actively hurting you.

If you take one thing from this, it’s this: start from scratch more often than feels comfortable. Your next rollout, with a clearer prompt and a clean context, will outperform the twentieth message in a dying session every time.

The above example is a side project, and of course the math changes on a large codebase with thousands of files. But the principle is the same. You’re not losing progress by starting a new session. You’re carrying forward everything you learned and leaving behind everything that’s getting in the way.

When in doubt, start over.