Vibe coding is having a moment. And honestly? It deserves it.
The ability to describe an idea in plain language, feed it to an AI coding tool, and watch a working app appear on your screen is genuinely remarkable. It's democratizing software creation, unlocking new categories of builders, and compressing weeks of work into hours. For teams at Bold and across the industry, it's changing the way we think about what's possible.
But here at Bold, we believe the most exciting new tools deserve honest scrutiny, especially when real clients, real users, and real stakes are involved. So we sat down with our engineer, James, to get his unfiltered take on what vibe coding gets right and what it quietly gets wrong.
The Car Show Problem
James opened with a scene we all recognize:
"You've probably been to a car show and fallen in love with a concept car. It's shiny, stylish, attractive, and feels like The Future. At the car show, you don't think about whether it will rust out in three years or if the suspension is awful, but wow, does that machine look good. Vibe coding gives you that same feeling."
That's not a criticism. It's a caution. The feeling of progress is real. The app looks good. It works. You're 95% of the way there. But as he puts it: "Are you?"
The gap between "it looks done" and "it is done" is exactly where things get expensive.
The Guessing Machine
To understand why that gap exists, it helps to understand how large language models actually work.
"LLMs are very good 'guessing machines'. The handy part is that if you're not specific in what you need, they'll just make up plausible options and use them to fill in the gaps, often getting it right. The bad part is they're sometimes wrong when you aren't being absolutely specific. Somewhere in between the business requirements and the engineering result, there will be gaps. That's the nature of the beast."
This isn't a flaw to be fixed in the next model release. It's a fundamental characteristic. As James puts it plainly: "AI responses are probabilistic, not deterministic, and software engineering is a deterministic realm." When you vibe code an app, hundreds of engineering and design decisions are being made for you, under the hood, by a model doing its best to guess what you meant. Most of the time, it guesses right. Sometimes, it doesn't.
The question is: how would you know?
The Encyclopedia Index
One of the most illuminating examples James shared was about database indexing, a technical detail most non-engineers would never think to ask about.
"If you had a set of encyclopedias on a shelf and wanted to learn about Giraffes, you'd pick the volume that had 'G'. That's an index, and it's one of the most important components in information architecture. When you vibe code something, and need to find 'Giraffe', you can end up with a piece of code that opens the first page of the A volume, then starts reading the entire encyclopedia set, until it finds Giraffe. Bad indexing can make your app thousands of times slower than it could be, and it happens."
This is the kind of problem that's invisible at launch. With 50 users, everything feels fast. With 5,000, your app slows to an unusable crawl. Vibe coding won't warn you. Scalability has to be designed in, and that requires someone who knows what they're looking for.
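The encyclopedia analogy maps directly onto database indexes. Here's a minimal sketch using SQLite's query planner; the table and column names (`animals`, `name`) are invented for illustration, but the before-and-after plans show exactly the "read the whole shelf" versus "pick the right volume" difference James describes:

```python
import sqlite3

# In-memory database standing in for a vibe-coded app's storage.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE animals (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO animals (name) VALUES (?)",
    [("Aardvark",), ("Bear",), ("Giraffe",), ("Zebra",)],
)

# Without an index, the planner reads every row to find 'Giraffe'
# -- the "open the A volume and keep reading" behaviour.
plan_scan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM animals WHERE name = 'Giraffe'"
).fetchone()
print(plan_scan[-1])  # plan detail mentions a full-table SCAN

# With an index, the lookup jumps straight to the right "volume".
conn.execute("CREATE INDEX idx_animals_name ON animals (name)")
plan_search = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM animals WHERE name = 'Giraffe'"
).fetchone()
print(plan_search[-1])  # plan detail now mentions the INDEX
```

With four rows the difference is invisible, which is the whole point: both versions "work" at launch, and only the query plan tells you which one falls over at scale.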
The Single Source Problem
Scaling isn't the only place where invisible decisions come back to bite you. James described a common scenario in product development:
"Let's say you're making an e-commerce app and decided that you wanted to rebrand it for another client. You've already got the code, and you've tried to get your coding agent to change it, but it turns into an inconsistent mess. Under the hood, the original code didn't create any foundational stylesheets. It did everything at the component level, so there's no 'single source' of fonts, sizes, or colours. Things that could have been changed in one place are now in 200 places."
This is a classic architectural failure, not because the AI did something wrong, but because it made a reasonable default choice when a better one wasn't specified. Good engineering practice says you anticipate change. Vibe coding, by nature, doesn't.
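The stylesheet problem is really a "single source of truth" problem, and the shape of it is visible even in a few lines. This is a hypothetical sketch, not CSS from any real project: the `BRAND` dictionary and both render functions are invented to contrast a shared foundation with component-level hardcoding:

```python
# One shared source of brand values, by analogy with a foundational
# stylesheet or design tokens. All names here are illustrative.
BRAND = {
    "font": "Inter",
    "primary": "#1a73e8",
}

def render_button(label: str) -> str:
    # Reads from the shared source, so a rebrand is a one-line change.
    return (
        f'<button style="font-family:{BRAND["font"]};'
        f'color:{BRAND["primary"]}">{label}</button>'
    )

def render_button_hardcoded(label: str) -> str:
    # The vibe-coded default: values baked in at the component level.
    # Multiply this by 200 components and a rebrand becomes a manual hunt.
    return '<button style="font-family:Inter;color:#1a73e8">' + label + "</button>"
```

Change `BRAND["primary"]` and the first button rebrands itself; the second keeps its old colour until someone finds and edits it by hand, in every file it appears in.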
The Gaps That Can Sink You
Beyond performance and maintainability, there are a few categories where the stakes are even higher.
Security. As James puts it: "Exposing an API to unauthorized users could sink your reputation." Vibe-coded apps can and do expose endpoints without proper authentication, not maliciously, just because no one told the model to lock them down.
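The gap is easy to see side by side. This is a framework-agnostic sketch, not code from any real app; the handlers, the request shape, and the token check are all invented to show the one `if` statement a model won't add unless asked:

```python
import hmac

# Stand-in for a real credential store; purely illustrative.
VALID_TOKEN = "s3cret-demo-token"

def get_orders_unprotected(request: dict) -> tuple[int, str]:
    # What a vibe-coded endpoint often looks like: it answers everyone.
    return 200, '{"orders": []}'

def get_orders_protected(request: dict) -> tuple[int, str]:
    # The check no one told the model to add. compare_digest avoids
    # leaking information through string-comparison timing.
    token = request.get("headers", {}).get("Authorization", "")
    if not hmac.compare_digest(token, f"Bearer {VALID_TOKEN}"):
        return 401, '{"error": "unauthorized"}'
    return 200, '{"orders": []}'
```

Both endpoints return the same data to a logged-in user, so both look "done" in a demo. Only one of them says no to everyone else; a real app would lean on its framework's auth middleware rather than a hand-rolled check like this.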
Accessibility. "How accessibility testing works is beyond even experienced developers. You need an accessibility QA expert. That's not something an LLM can fill in for you." Accessibility compliance isn't optional for many businesses. It's a legal and ethical requirement, and it requires human expertise that no prompt can replace.
User Experience. "An LLM isn't going to tell you that the flow you made in your app is awful to use and could be better if you just did this one thing. You need a UX expert for that." A tool that generates what you describe can't tell you that you described the wrong thing.
88 Miles Per Hour
James closed with a reference that felt exactly right:
"You vibed hundreds of hidden design decisions to generate that sleek stainless steel DMC DeLorean, but if it can't get to 88 miles per hour, it's not taking you into the future."
Vibe coding is a powerful accelerant. It can take you from idea to working prototype faster than anything we've seen before. But the future it's taking you toward still has to work, really work, when real users, real traffic, and real requirements show up.
That's not an argument against vibe coding. It's an argument for pairing it with the right expertise. For knowing what questions to ask, what to audit, and where the hidden 5% is hiding.
The vibes are good. Just make sure the engineering is too.
----------------------
Have thoughts on vibe coding, AI-assisted development, or what responsible adoption looks like in practice? We'd love to hear from you.


