I Burned Out on Vibe Coding, Came Back, and Rewrote Everything
I hit a wall with vibe coding. Not a dramatic crash. More like the slow realization that I’d been sprinting for months and couldn’t remember why. I had 15 projects in various states of “maybe done,” a GitHub commit chart that looked like a heart monitor, and a growing suspicion that I was building things just to build things.
Fortunately, freelance writing work picked up right around the same time. Enough to actually pay attention to it. So I stepped away from the side projects, wrote about other people’s technology for a change, and let my own code sit untouched for a few months.
When I came back, I had no patience for bullshit. And I looked at my projects differently.
Table of Contents
- Your Vibe-Coded Apps Are Prototypes (And That’s Fine)
- Making “Adding Features” the Feature
- Building Bottom-Up with Verdent and Claude Code
- The 60 Missing APIs
- Making Plans That Any AI Agent Can Execute
- The Same Pattern, Different Project
- What Changed
Your Vibe-Coded Apps Are Prototypes (And That’s Fine)
Here’s the thing I couldn’t see while I was in the thick of it: almost everything I’d built with AI coding tools was a prototype. Not in the dismissive sense. These apps worked. EmberText was a functional Electron writing app. Niche Site Factory could generate and manage content sites. They ran. They did things.
But they were all built top-down. I’d tell the AI “build me an app that does X” and it would scaffold the whole thing, features and all, in one giant session. The problem is that when you build top-down with AI, you end up with something that works but is almost impossible to extend. Every new feature is a negotiation with the existing architecture. You’re not adding to the app. You’re fighting it.
EmberText was the clearest example. I built it with Claude Code over about 16 hours and $80 in API costs. It had AI integration, text generation, character relationship graphs, plot scaffolding. Impressive on paper. But by the time I realized it should have had a plugin architecture, I was already deep enough that refactoring meant essentially starting over.
So that’s what I did.
Making “Adding Features” the Feature
The insight that changed everything was stupid simple: instead of building an app with features, build an app where adding features is the feature.
I’d been using Obsidian for years, and it’s in my top 5 favorite pieces of software. It’s incredible for notes, planning, and organization. You can even make it distraction-free for writing. It’s just not the default, and “not the default” matters more than you’d think when you’re trying to get into a flow state. I tried to hack around this with my Daily Prompts plugin, which fired an alert and opened a daily note in Zen mode. It worked, kind of, but I was still fighting the tool.
VS Code is for code. Obsidian is for notes. What’s for writing?
That question led to Veneer, a complete rewrite of EmberText from scratch. Same idea, a distraction-free writing environment, but built from the ground up around a plugin-first architecture. The core concept is the “Zen-First Shell”: when you open it, you see nothing but a clean sheet and your text. Sidebars, ribbons, and status bars exist as ghost elements, hidden by default, appearing only when you hover near the edges or hit a hotkey. Everything that isn’t the writing surface has to earn its right to be on screen.
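The mechanics of that are simple enough to sketch. Here’s roughly how an edge-hover reveal could work in the renderer; the class name and threshold are my invention, not Veneer’s actual code:

```typescript
// Sketch of a ghost-element reveal. Hypothetical class name and
// threshold, not Veneer's actual implementation.
const EDGE_PX = 24; // pointer within 24px of the left edge counts as "near"

document.addEventListener("mousemove", (e: MouseEvent) => {
  // Reveal the sidebar while the pointer hugs the edge, hide it otherwise.
  document.body.classList.toggle("show-left-sidebar", e.clientX <= EDGE_PX);
});

document.addEventListener("keydown", (e: KeyboardEvent) => {
  // Hotkey fallback: Ctrl/Cmd+\ pins the sidebar open or shut.
  if ((e.ctrlKey || e.metaKey) && e.key === "\\") {
    document.body.classList.toggle("show-left-sidebar");
  }
});
```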
And critically, every feature is a plugin. The file explorer? Plugin. The markdown editor? Plugin. The command palette? Plugin. Even core functionality ships as plugins that can be swapped, extended, or replaced. This isn’t just for a future community. It makes the whole thing dramatically easier to build with AI, because each plugin is a self-contained unit with clear boundaries. You can hand an AI agent a plugin spec and let it work without worrying about it breaking everything else.
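To make that concrete, here’s a minimal sketch of what a plugin contract like this might look like. The names (VeneerPlugin, PluginContext, registerCommand) are my illustration, loosely modeled on how Obsidian and VS Code expose lifecycle hooks, not Veneer’s actual API:

```typescript
// Hypothetical plugin contract. Illustrative names, not Veneer's real API.
// The shell owns the registry; a plugin only touches what the context
// hands it, which keeps the blast radius of a bad change small.

export interface PluginContext {
  registerCommand(id: string, run: () => void): void;
  registerView(id: string, render: (el: HTMLElement) => void): void;
}

export interface VeneerPlugin {
  id: string;
  // Called when the plugin is enabled: set up commands, views, listeners.
  activate(ctx: PluginContext): void | Promise<void>;
  // Called when the plugin is disabled: tear everything back down.
  deactivate(): void | Promise<void>;
}

// Even "core" features ship through the same door.
export const commandPalette: VeneerPlugin = {
  id: "core.command-palette",
  activate(ctx) {
    ctx.registerCommand("palette.open", () => {
      // open the palette overlay
    });
  },
  deactivate() {},
};
```

The boundary is the point. An agent working on the command palette only gets what the context hands it, so it physically can’t reach into the file explorer’s internals.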
Building Bottom-Up with Verdent and Claude Code
I used Verdent to build the base application. If you read my post about Verdent, you know this thing is fast. Too fast, honestly. It finished most of the base app, including a file browser sidebar plugin, a markdown editor plugin, and a command palette, for about 220 credits, roughly $20 worth. There were bugs left when I ran out of credits, but the foundation was solid.
But here’s where the process got interesting. Instead of just continuing to add features on top, I switched to Claude Code and did something I hadn’t done before: I asked it to audit the codebase.
use your skills and check the repo for best practices
- UI
- Is it themable like Obsidian or VS Code
- Plugin Architecture (and compare to VS Code and Obsidian)
- TypeScript
- Electron
- Structure, Naming Conventions
I’d forgotten how many skills and plugins I had installed in Claude Code. When I ran this, it deployed four specialized agents in parallel:
- Explore subagent analyzed the overall project structure, UI patterns, theming, and naming conventions
- Architecture Strategist evaluated system design decisions and compared the plugin architecture against VS Code and Obsidian
- Kieran TypeScript Reviewer checked strict mode compliance, type safety, interface definitions, and generic patterns
- Best Practices Researcher gathered industry standards and found examples from successful projects
This is not how I was working six months ago. Six months ago, I would have just told the AI to add the next feature and hoped for the best.
The 60 Missing APIs
The audit turned up a lot. Claude gave the codebase an A- (92/100) overall, which sounds great until you read the details. The critical finding was the plugin API gaps. Obsidian provides 60+ plugin APIs. Veneer was missing most of them.
No modals. No notification system. No context menus anywhere. No way for plugins to subscribe to file or workspace events. No way to extend the CodeMirror editor. The native OS menu had “Open Folder” under “Veneer” instead of “File,” which is the kind of thing that makes you realize the AI built the structure but didn’t think about the conventions.
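For a sense of what “missing” means here, these are the kinds of registration points the audit flagged. The signatures below are hypothetical Veneer equivalents, patterned on Obsidian’s API; none of this existed in the codebase at audit time:

```typescript
// Hypothetical sketches of the missing plugin API surface.
// Illustrative names patterned on Obsidian's API, not real code.

// Notifications: let a plugin fire a toast without touching the DOM.
declare function showNotice(message: string, timeoutMs?: number): void;

// File and workspace events: let plugins react instead of polling.
type FileEvent = "create" | "modify" | "delete" | "rename";
declare function onFileEvent(
  event: FileEvent,
  handler: (path: string) => void
): () => void; // returns an unsubscribe function

// Context menus: contribute an item to a menu the shell owns.
declare function registerContextMenuItem(
  menuId: string,
  item: { label: string; onClick: () => void }
): void;
```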
I had Claude store all the findings in the project’s docs folder:
- BEST_PRACTICES_REVIEW.md: Everything organized by priority with an implementation roadmap
- PLUGIN_API_GAPS.md: A detailed comparison against Obsidian and VS Code showing exactly what was missing
Making Plans That Any AI Agent Can Execute
This is the part that parallels Mitchell Hashimoto's AI adoption journey. He talks about “harness engineering,” the idea that every time an agent makes a mistake, you engineer a solution so it never makes that mistake again. Better implicit prompting. Actual programmed tools. The goal is building up an ecosystem where agents get better over time.
I’m doing something similar, but at the project planning level. Instead of just fixing bugs as they come, I’m creating structured documentation that any AI tool can pick up and execute. My next prompt to Claude was:
Take docs/BEST_PRACTICES_REVIEW.md and docs/PLUGIN_API_GAPS.md and create a
markdown list of TODOs in the docs folder. These should be grouped into tasks
and subtasks. If it is possible to work on some tasks concurrently this should
be mentioned. This file should be able to be used by an AI agent to finish
these tasks. Add enough details to each task to speed up development time.
Now I have the work planned in 3 phases across 3 TODO files, plus a final phase listing every Obsidian plugin API that Veneer doesn’t have yet for future development. These are all in the docs folder of the project, version controlled, and written so that any agent (Claude Code, Jules, a VS Code extension with Qwen, whatever) can pick them up and start working.
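The format itself is deliberately boring. A hypothetical entry from one of those files might look like this (the task content is illustrative, not copied from the real docs):

```markdown
## Phase 1, Task 3: Notification API (can run concurrently with Task 4)

- [ ] Define a `Notice` interface in `src/api/notifications.ts`
  - Message string, optional timeout in milliseconds
- [ ] Render active notices as a stack in the shell layer
- [ ] Expose `showNotice()` through the plugin context
- Details: follow the ghost-element pattern. Notices must never steal
  focus from the writing surface.
```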
This is the difference between vibe coding and what I’m doing now. I’m still using AI to do the heavy lifting. But I’m not just throwing prompts at the wall. I’m using one AI tool to build, another to audit, and then creating structured plans that decouple what needs to happen from which tool does it.
The Same Pattern, Different Project
This isn’t just how I rebuilt Veneer. I’m doing the same thing with Niche Site Factory. Instead of telling an AI to “build me a niche site generator” (which is roughly what I did the first time), I started over by building the data model first.
I took a real project, a sci-fi encyclopedia wiki, and used it to design the content structures. A knowledge graph in PostgreSQL with pgvector for embeddings. 2,622 books ingested into the entities table. Flexible JSONB storage that can handle books, concepts, authors, movies, whatever. The data model came first, the application came second.
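To make “data model first” concrete, here’s roughly the shape such an entities table could take, sketched as a node-postgres migration. The table and column names are my guesses at the pattern, not the actual Niche Site Factory schema, and the vector dimension depends on which embedding model you use:

```typescript
import { Client } from "pg";

// Hypothetical schema sketch, not the real migration.
// Assumes the pgvector extension is available.
const createEntities = `
  CREATE EXTENSION IF NOT EXISTS vector;

  CREATE TABLE IF NOT EXISTS entities (
    id         BIGSERIAL PRIMARY KEY,
    kind       TEXT NOT NULL,   -- 'book', 'author', 'concept', 'movie', ...
    data       JSONB NOT NULL,  -- flexible per-kind payload
    embedding  vector(1536)     -- dimension depends on the embedding model
  );

  CREATE INDEX IF NOT EXISTS entities_kind_idx ON entities (kind);
  CREATE INDEX IF NOT EXISTS entities_data_idx ON entities USING GIN (data);
`;

async function migrate(): Promise<void> {
  const client = new Client(); // connection settings come from PG* env vars
  await client.connect();
  await client.query(createEntities);
  await client.end();
}

migrate().catch(console.error);
```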
It’s the same bottom-up principle. Don’t build the house and then figure out the foundation. Build the foundation, verify it’s solid, then build up from there.
What Changed
I think the burnout was actually useful. Stepping away let me see the pattern I was stuck in: build fast, hit a wall, start something new. That’s fine when you’re learning the tools. It’s how I figured out what Claude Code, Kiro, Verdent, and Jules are each good at. But at some point, you have to stop prototyping and start building.
Here’s what’s different now:
- Bottom-up, not top-down. Start with the architecture and data model, not the features. Let the AI build on a solid foundation instead of improvising one.
- Audit before extending. Use AI review tools to find the gaps before you pile on more code. It’s cheaper to fix the structure now than refactor later.
- Plans as portable artifacts. Write TODO files detailed enough that any AI agent can execute them. Don’t marry yourself to one tool.
- Plugins as a development strategy. A plugin architecture isn’t just for the community. It makes AI-assisted development dramatically easier because each unit is self-contained.
- Past work is research. EmberText wasn’t a failure. It was an $80 prototype that taught me exactly what Veneer needed to be.
I’m still vibe coding. I’m just vibing with more structure now and calling it AI-assisted development. And honestly, after a few months of writing for clients and not touching my own projects, coming back to this with fresh eyes and no patience might be the best thing that happened to any of them.
