Written by Christopher Kukla, Director of AI Technology
Recently, I found myself with an entirely new responsibility that, just a few years ago, I would have never considered: reviewing applications built by product managers and solution architects using what we call “vibe coding.”
As AI has evolved, we’ve empowered non-developers on our team to build functional applications using natural language and intuitive AI tools. What started as an interesting experiment has become a regular part of my workflow, and it’s taught me a lot about both the promise and the pitfalls of letting just anyone build software.
What Is Vibe Coding?
“Vibe coding” is the practice of creating applications using natural language prompts and AI-powered development tools. The term was coined by Andrej Karpathy, co-founder of OpenAI, in a February 2025 post on X (formerly Twitter), where he described it as a way of programming with large language models in which you “fully give in to the vibes, embrace exponentials, and forget that code even exists.”
The name captures something real about how accessible and natural the process feels to non-developers. But here’s what I’ve discovered: while the term caught on because it sounds casual and approachable, that’s exactly the problem. The “vibe” framing makes it seem effortless and intuitive, but there’s nothing casual about what is happening under the hood, and as fast as AI is moving, we are not ready to forget that code even exists... at least not yet.
While “vibe coding” has caught on as the popular term, I think we need a name that better reflects what’s actually happening here. What we’re really seeing is idea-driven development, or what I like to call ID coding for short: the ability to go directly from concept to working application using natural language and AI tools. This term better captures both the starting point (ideas) and the intentional development process, rather than suggesting it’s all just about following good vibes.
The Genuine Benefits
Before I dive into the challenges, and there are definitely challenges, let me be clear about why ID coding is genuinely something to be excited about.
The biggest advantage is empowerment. Product managers and solution architects who have never had hands-on coding experience can now build functional prototypes and show them to clients and stakeholders. Instead of describing an idea in abstract terms or waiting for developer availability, they can create something tangible that demonstrates real functionality.
This will ultimately transform our client and stakeholder conversations. When a product manager can develop and show a working prototype instead of just talking through wireframes, it changes everything. Stakeholders can click buttons, see real data flow, and provide feedback on actual functionality rather than theoretical concepts.
ID coding also fosters genuine collaboration between technical and non-technical teams. When product managers can build and iterate quickly, the conversation shifts from “Can you build this?” to “I built this; what do you think?” That’s a fundamentally different dynamic, and it leads to better communication and more creative solutions.
The Hidden Complexity: What I See When Apps Hit My Desk
Here’s where the rubber meets the road. As one of the only developers on our AI team capable of reviewing these applications, I’ve become intimately familiar with what ID-coded applications look like when they’re built by non-developers.
Here are four of the most common issues I notice when reviewing these ID-coded applications.
1. The Testing Problem
Non-developers don’t instinctively think about edge cases the way developers have been trained to. When they ask an AI agent to build a feature, they’re focused on the happy path: does it work when everything goes right? But what happens when a user enters unexpected data, or when a service is temporarily unavailable, or when someone clicks the submit button twice?
Most ID-coded applications I review have minimal testing, if any. The AI tools can generate unit tests, but they don’t do it automatically, and the tests they create often miss the scenarios that matter the most in production.
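To make that concrete, here is a minimal sketch in TypeScript (the function and data are hypothetical, not taken from any application I reviewed) of the kind of edge-case checks a developer adds by reflex and an ID-coded prototype usually lacks: malformed input and a double-clicked submit button.

```typescript
import assert from "node:assert";

// Hypothetical order-submission handler, used only for illustration.
const submitted = new Set<string>();

function submitOrder(orderId: string, quantity: number): string {
  // Edge case 1: reject malformed input instead of passing it downstream.
  if (!orderId.trim() || !Number.isInteger(quantity) || quantity <= 0) {
    throw new Error("Invalid order data");
  }
  // Edge case 2: treat a duplicate submit (a double-clicked button) as a no-op.
  if (submitted.has(orderId)) {
    return "already submitted";
  }
  submitted.add(orderId);
  return "submitted";
}

// The happy path: what ID-coded prototypes usually cover.
assert.equal(submitOrder("A-1", 2), "submitted");
// The edge cases: what tends to be missing.
assert.equal(submitOrder("A-1", 2), "already submitted"); // double click
assert.throws(() => submitOrder("", -5));                 // malformed input
```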
2. Security: The Trust Problem
Non-developers also tend to overlook input validation, authentication, and data exposure. I regularly find ID-coded applications that store sensitive data in plain text or accept user input without sanitization, creating serious security vulnerabilities that require immediate attention during review.
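As a hedged illustration of both problems (the names are hypothetical, and scrypt is chosen only as one example of a salted hash), the sketch below contrasts storing a secret as typed with storing a salted hash, and validates user input before it goes anywhere near a query:

```typescript
import { randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

// What I often find: the secret stored exactly as the user typed it.
// const storedPassword = password;  // plain text: never do this

// A safer pattern: store a salted hash, compare with a constant-time check.
function hashPassword(password: string): string {
  const salt = randomBytes(16).toString("hex");
  const hash = scryptSync(password, salt, 64).toString("hex");
  return `${salt}:${hash}`;
}

function verifyPassword(password: string, stored: string): boolean {
  const [salt, hash] = stored.split(":");
  const candidate = scryptSync(password, salt, 64);
  return timingSafeEqual(candidate, Buffer.from(hash, "hex"));
}

// Basic input validation before the value reaches a query or an API call.
function sanitizeUsername(input: string): string {
  const trimmed = input.trim();
  if (!/^[a-zA-Z0-9_.-]{3,32}$/.test(trimmed)) {
    throw new Error("Invalid username");
  }
  return trimmed;
}
```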
3. Architecture: The Copy-Paste Problem
Here’s a pattern I see constantly: when you ask an ID coding tool to submit data to one system and then later ask it to submit similar data to another system, it doesn’t recognize that these are fundamentally the same operation. Instead of reusing code or creating a shared service, it builds entirely separate implementations.
This means when you need to fix a bug or update the logic, you’re not fixing it in one place; you’re hunting down every duplicate implementation and fixing the same issue multiple times. Even worse, it’s easy to miss one of the duplicates, creating inconsistent behavior across your application.
I recently reviewed an application where the same data transformation logic was written four different ways across different features. Not similar logic; identical logic, implemented from scratch each time. This creates technical debt that compounds quickly and makes future updates exponentially more complex. What should be a five-minute bug fix becomes a day-long archaeological expedition to find and update every variation.
When stakeholders request a feature change that affects this duplicated logic, you’re looking at 4x the development time, 4x the testing effort, and 4x the risk of introducing new bugs. That “quick prototype” suddenly becomes an expensive maintenance nightmare that burns through developer hours and delays other critical work.
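The fix is usually straightforward once a developer spots it. As a sketch (with hypothetical names and a made-up transformation), the duplicated logic gets pulled into one shared function that every integration calls:

```typescript
// Shared module (hypothetical): one place to fix bugs or change the logic.
interface CustomerRecord {
  firstName: string;
  lastName: string;
  email: string;
}

export function normalizeCustomer(raw: Record<string, string>): CustomerRecord {
  return {
    firstName: raw.first_name?.trim() ?? "",
    lastName: raw.last_name?.trim() ?? "",
    email: (raw.email ?? "").toLowerCase(),
  };
}

// Each downstream system reuses the shared transformation instead of
// carrying its own copy of it.
export function toCrmPayload(raw: Record<string, string>) {
  return { ...normalizeCustomer(raw), source: "crm-sync" };
}

export function toBillingPayload(raw: Record<string, string>) {
  return { ...normalizeCustomer(raw), source: "billing-sync" };
}
```

With this structure, a stakeholder-requested change to the transformation is made once, in `normalizeCustomer`, instead of being hunted down across four features.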
4. Context: The Forgetting Problem
This highlights a core constraint that LLM-based tools can’t escape: context windows. While we as humans draw from everything we’ve learned and experienced, AI tools hit a hard wall; they can only actively consider a fixed amount of information at any given time. When that window fills up with new conversation, older information gets pushed out completely. The LLM doesn’t just lose access to earlier decisions; it loses the ability to maintain consistency, avoid contradictions, or build on established patterns.
I have seen applications where the first few features are elegant and consistent, but by the fourth or fifth feature, the tool has essentially forgotten its own patterns and starts implementing things differently. This results in messy applications that feel like they were built by different developers who never talked to each other.
The Three-Day “Quick Deployment”
Let me share a specific example that illustrates these challenges perfectly. I was asked to review an application and, once reviewed, deploy it to our cloud provider. What should have been a straightforward few hours of work turned out to be a developer’s worst nightmare. On the surface, it was impressive: all the required functionality was there, it demonstrated the concept beautifully, and stakeholders were excited about it.
But when I looked under the hood, I found:
• Entire pages that looked identical but were developed separately, line by line
• Functions that were never called anywhere in the application
• Database queries that were duplicated across different services
• Inconsistent error handling that would fail silently in production (see the sketch after this list)
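For that last item, here is a minimal, hypothetical sketch of what “failing silently” usually looks like, next to the small change that makes the failure visible (the endpoint is illustrative):

```typescript
// The pattern I kept finding: the error is swallowed, and the caller
// has no idea the save never happened.
async function saveRecordSilently(record: object): Promise<void> {
  try {
    await fetch("/api/records", { method: "POST", body: JSON.stringify(record) });
  } catch {
    // nothing: in production this simply loses data
  }
}

// The small change that makes failures visible and actionable.
async function saveRecord(record: object): Promise<void> {
  const response = await fetch("/api/records", {
    method: "POST",
    body: JSON.stringify(record),
  });
  if (!response.ok) {
    throw new Error(`Save failed with status ${response.status}`);
  }
}
```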
That “quick deployment” took three days of cleanup work. Was it still faster than building from scratch? Absolutely. But it was unexpected, and it derailed other work we had planned.
Making ID Coding Work: Lessons from the Trenches
Despite these challenges, I’m convinced that idea-driven development is incredibly powerful when done thoughtfully. Here are four lessons I’ve learned so far about making it more effective:
1. Start With a Plan: Work Collaboratively, Not Imperatively
The most successful ID-coded projects begin with a conversation with a developer. Not to discourage the idea, but to help define the direction the AI will be given, ensuring the application is built in the most maintainable way possible. Five minutes of planning can save hours of cleanup.
This collaborative approach reflects what our Chief AI Officer, Jon Evans, calls Thought Architecture™: a foundational shift in how we think, create, and collaborate with AI itself. Instead of treating AI as a vending machine where you insert requirements and expect perfect output, Thought Architecture means being intentional about how you frame problems and structure your thinking when working with AI tools.
Once you’ve outlined your application, ask the AI for a development plan. Then, and this is crucial, ask the agent if it has any questions about that plan. Answer those questions thoroughly, then ask if there are more questions. Keep this dialogue going until you’re both aligned on what needs to be built and how. This isn’t just planning; it’s planning together. You need to make sure the AI truly understands what you want before any code gets written.
Treat the AI like a collaborative partner, not a magic wand. The time you invest in this upfront communication pays massive dividends in the quality and consistency of the final application.
2. Use Agent Rules and Instructions
Tools like Cursor and GitHub Copilot allow you to define coding standards and architectural patterns upfront. If your team has preferred ways of handling authentication, database connections, or error handling, document these as agent rules. Defining these rules for the AI agent helps keep it focused on its goals and keeps its context anchored on what quality work means. It’s also worth noting that the agent sometimes needs to be reminded of the rules, so be sure to bring them up regularly.
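As a hedged example (the exact file name, location, and syntax vary by tool, and these rules are illustrative rather than our actual standards), a project-level rules file might contain something like:

```markdown
# Project agent rules (illustrative example)

- Use the shared `apiClient` module for all HTTP calls; never call fetch directly.
- Validate all user input before it reaches a database query.
- Reuse existing services before creating new ones; flag any duplicated logic.
- Every new feature needs at least one test covering an error or edge case.
- Handle errors explicitly; never swallow exceptions silently.
```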
3. Build Incrementally
Break complex projects into smaller pieces and review each piece as you go. This is especially important because of the context limitations I mentioned earlier. If you know your application needs to handle 50 different features, focus on developing those features one at a time rather than trying to build everything at once.
This prevents context loss and makes it easier to maintain consistency across features. It also makes debugging much simpler when something goes wrong.
4. Budget for Review Time
Even polished-looking ID-coded applications need developer review before production. Budget for this time upfront rather than being surprised by it later. It’s still a massive productivity gain, but it’s not magic.
The Future of Development
ID coding represents a fundamental shift in who can build software and how quickly ideas can become reality. But like any powerful tool, it requires understanding and respect for its limitations.
The goal isn’t to eliminate the need for developers; it’s to change what developers focus on. Instead of spending time on boilerplate code and basic implementations, we can focus on architecture, performance, security, and the complex problems that still require deep technical expertise.
As for the product managers and solution architects who are now building applications, they’re not trying to replace developers. They’re trying to move faster, communicate more effectively, and turn ideas into reality without waiting for developer availability. That’s not a threat to development; it’s an evolution of how we build software.
The key is approaching idea-driven development with the same rigor we apply to traditional development: planning, testing, reviewing, and maintaining. When we do that, we get the best of both worlds... the speed and accessibility of AI-powered development with the reliability and sustainability that production applications require.
And maybe, just maybe, we can keep the cleanup work to less than three days.
Get more information on AI implementation by watching Impact's webinar, Lessons Learned from Incorporating AI into Business Processes.