I recently did some vibe coding to come up with this demo, which may be useful for brain training if you happen to have focus problems. Using the latest Claude for this worked great. I did the whole thing without writing any code myself, with only a bit of inspecting the code. So on the whole vibe coding works great, especially for someone like me who knows how to code but would rather not learn the vagaries of front end development. But it’s nowhere near the level where you can simply ask the AI to write something and have it come out working. In fact being a programmer helps massively, and may be an absolute requirement for certain tasks.
Vibe coding definitely changes the, uh, vibe of coding. Traditional programming feels like a cold uncaring computer calling you an idiot a thousand times a day. Of course the traditional environment isn’t capable of calling you an idiot so it’s really you calling yourself an idiot, but it’s unpleasant anyway. With vibe coding you’re calling the AI an idiot a thousand times per day, and it’s groveling in response every time, which is a lot more fun.
I’d describe Claude as being in the next-to-bottom tier of programmer candidates I’ve ever interviewed. The absolute bottom tier are people who literally don’t know how to code, but above them are people who have somehow bumbled their way through a CS degree despite not understanding anything. It’s amazingly familiar with and fluent in code, and in fact far faster and more enthusiastic than any human ever possibly could be, but everything vaguely algorithmic it hacks together in the absolute dumbest way imaginable (unless it’s verbatim copying someone else’s homework, which happens a lot more often than you’d expect). You can correct this, or better yet head it off at the pass, by describing each of the steps involved in painstaking detail. Since you’re describing them in English instead of code and it’s good at English, this is still a lot less effort and a huge time saver. Sometimes it just can’t process what you’re telling it is causing a problem, so it assumes your explanation is correct and plays along, happily pretending to understand what’s happening. Whatever, I’d flunk it in a job interview, but it isn’t getting paid and is super fast, so I’ll put up with it. On some level it’s mostly translating from English into code, and that’s a big productivity boost right there.
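To give a made-up illustration of that English-to-code translation (this isn’t from the actual demo): instead of saying ‘dedupe this list’, you spell out the steps, and they map almost one to one onto the code:

```typescript
// The painstaking English version of 'remove duplicates, keeping the first
// occurrence of each item' spells out the steps:
// 1. Make an empty set of items we've already seen.
// 2. Walk the list in order.
// 3. If an item is already in the set, skip it; otherwise add it to the set and keep it.
export function dedupe<T>(items: T[]): T[] {
  const seen = new Set<T>(); // step 1
  const result: T[] = [];
  for (const item of items) { // step 2
    if (seen.has(item)) continue; // step 3: already seen, skip
    seen.add(item); // step 3: first sighting, remember it
    result.push(item); // step 3: and keep it
  }
  return result;
}
```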
Often it writes bugs. It’s remarkably good at avoiding typos, but extremely prone to logical errors. The most common sort of bug is that it doesn’t do what you asked it to, or at least what it did has no apparent effect. You can then tell it that it didn’t do the thing and ask it to try again, which usually works. Sometimes it makes things which just seem janky and weird, at which point it’s best to suggest that it’s probably accumulated some coding cruft and ask it to clean up and refactor the code, in particular removing unnecessary code and consolidating redundant code. Usually after that it will succeed if you ask it to try again. If you skim the code and notice something off you can ask it ‘WTF is that?’ and it will usually admit something is wrong and fix it, but you get better results by being more polite. I specifically asked ‘Why is there a call to setTimeout?’ and it fixed a problem in response. It would be helpful if you could see line numbers in the code view for Claude, but maybe the AI doesn’t understand those as reference points yet.
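To illustrate the kind of thing I mean (a made-up example, not the actual bug from the demo): a zero-delay setTimeout wrapping work that could just run directly, so the effect lands a tick later than everything else expects:

```typescript
// Made-up example of the kind of cruft I mean, not the actual bug.
// The janky version: a zero-delay setTimeout defers the update for no reason,
// so anything that reads the DOM right afterwards sees stale state.
function updateScoreJanky(el: HTMLElement, score: number): void {
  setTimeout(() => {
    el.textContent = `Score: ${score}`;
  }, 0);
}

// The fixed version: just do the work.
function updateScore(el: HTMLElement, score: number): void {
  el.textContent = `Score: ${score}`;
}
```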
If it still has problems debugging then you can break down the series of logical steps of what should be happening, explain them in detail, and ask it to check them individually to identify which of them is breaking down. This is a lot harder than it sounds. I do this when pair programming with experienced human programmers as well, an activity they often find humiliating. But asking the AI to articulate the steps itself works okay.
Here’s my script of prompts to use while debugging during vibe coding, broken down into cut-and-pasteable commands:
I’m doing X, I should be seeing Y but I’m seeing Z, can you fix it? (More detail is better. Being a programmer helps with elucidating this but isn’t necessary.)
That didn’t fix the problem, can you try again?
Now I’m seeing X, can you fix it?
You seem to be having some trouble here. Maybe the code has accumulated some cruft with all these edits we’re doing. Can you find places where there is unused code, redundant functionality, and other sorts of general cleanups, refactor those, and try again?
You seem to be getting a little lost here. Let’s make a list of the logical steps which this is supposed to go through, what should happen with each of them, then check each of those individually to see where it’s going off the rails. (This works a lot better if you can tell it what those steps are, but that’s very difficult for non-programmers; there’s a sketch of what those checks can look like below.)
Of course since these are so brainless to do Claude will probably start doing them without prompting in the future but for now they’re helpful. Also helpful for humans to follow when they’re coding.
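To make that last prompt concrete, here’s a made-up sketch (hypothetical function, not from the actual demo) of what checking each logical step individually can look like, with one checkpoint per step:

```typescript
// Hypothetical example: checking each logical step of a 'fetch a list and
// render it' flow individually, instead of staring at the final wrong output.
async function loadAndRender(url: string, el: HTMLElement): Promise<void> {
  // Step 1: the request should succeed.
  const response = await fetch(url);
  console.log('step 1, fetch status:', response.status); // expect 200

  // Step 2: the body should parse into a non-empty array.
  const items: string[] = await response.json();
  console.log('step 2, item count:', items.length); // expect > 0

  // Step 3: filtering should only drop blank entries.
  const filtered = items.filter((s) => s.trim().length > 0);
  console.log('step 3, after filtering:', filtered.length); // expect step 2 minus blanks

  // Step 4: every surviving item should end up in the DOM.
  el.innerHTML = filtered.map((s) => `<li>${s}</li>`).join('');
  console.log('step 4, rendered children:', el.children.length); // expect to match step 3
}
```

The point is that whichever step first prints something unexpected is where it’s going off the rails, and you can say so directly.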
On something larger and more technical it would be a good idea to have automated tests, which can of course be written by the AI as well. When I’m coding I generally make a list of what the tests should do in English, then implement the tests, then run and debug them. Those are sufficiently different brain states that I find it’s helpful to do them in separate phases. (I also often write reams of code before starting the testing process, or even checking whether it will parse, a practice which sometimes drives my coworkers insane.)
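For what it’s worth, here’s a minimal sketch of what that looks like, using Node’s built-in test runner and the hypothetical dedupe function from earlier (the English list goes in as comments first, then becomes tests):

```typescript
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { dedupe } from './dedupe'; // the hypothetical dedupe sketch from earlier

// The English list comes first, before any implementation:
// - dedupe of an empty list is an empty list
// - dedupe keeps the first occurrence of each item and preserves order

test('dedupe of an empty list is an empty list', () => {
  assert.deepEqual(dedupe([]), []);
});

test('dedupe keeps the first occurrence of each item and preserves order', () => {
  assert.deepEqual(dedupe([3, 1, 3, 2, 1]), [3, 1, 2]);
});
```

These run with node --test, after compiling the TypeScript or running it through a loader.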
A script for testing goes something like this:
Now that we’ve written our code we should write some automated tests. Can you suggest some tests which exercise the basic straight through functionality of this code?
Those are good suggestions. Can you implement and run them?
Now that we’ve tested basic functionality we should try edge cases. Can you suggest some tests which more thoroughly exercise all the edge cases in this code?
Those are good suggestions. Can you implement and run them?
Let’s make sure we’re getting everything. Are there any parts of the code which aren’t getting exercised by these tests? Can we write new tests to hit all of that, and if not can some of that code be removed?
Now that we’ve got everything tested are there any refactorings we can do which will make the code simpler, cleaner, and more maintainable?
Those are good ideas, let’s do those and get the tests passing again. Don’t change the tests in the process; leave them exactly as they are and fix the code.
Of course this is again so brainless that the AI assistants will probably be programmed to do exactly this when asked to write tests, but for now it’s helpful. Also helpful as a script for human programmers to follow. A code coverage tool is also helpful for both, but it seems Claude isn’t hooked up to one of those yet.