What is “vibe coding”?

Vibe coding meme (source: https://www.reddit.com/r/ProgrammerHumor/comments/1jcjrzf/vibecoding)

Vibe coding is diving into AI-driven development with a hazy idea and no plan. While it feels productive at first, it often leads to a shaky foundation that makes future development messy and frustrating. Without a clear plan, the AI can seem forgetful, producing inconsistent code and logic. This leads to scope creep, bugs, and an unfinished product that misses the original goal.
Vibe coding a critical vulnerability (source: @personofswag on X)

Instead of building on a vibe, build from a plan. This is the purpose of the Multi-AI Iterative Development (MAID) framework, a systematic method for creating robust, scalable applications with AI. The framework is built on four distinct phases: conversational research, building a context library, implementing functionality, and debugging issues. Each phase leans on the AI best suited to it.

This principle of using the right tool for the job mirrors how professionals already work. The 2025 Stack Overflow Developer Survey shows that while general-purpose models like ChatGPT are a common starting point, developers strategically use other models for specific tasks: tools like Gemini for complex reasoning and planning, and models like Claude Sonnet for code implementation.
AI models developers worked with, 2025 Stack Overflow Developer Survey (source: https://survey.stackoverflow.co/2025/technology#worked-with-vs-want-to-work-with-ai-models-worked-want-prof)

This is the core of MAID: using the right tool for the right task to build better software.

Conversational Research

The first phase is a structured dialogue. This isn’t a casual chat; it’s a focused conversation that prevents “vibe coding” by turning a high-level vision into a concrete plan. Use a conversational AI like ChatGPT as a partner to move from the abstract to the concrete.
This process adapts to you. A non-technical founder might spend more time on the Why, while a senior engineer can focus on the How. The goal is always the same: clarity.

Define Your Project through Dialogue

The conversation follows a simple funnel: from the high-level Why, to the functional What, and finally to the technical How.
1. The Why (High-Level Vision)

Focus entirely on the human element. This stage defines the user’s pain point and the ultimate goal, using simple, non-technical language.

Key Questions to Explore:
  • Who is this for?
  • Is this an internal tool or a public-facing product?
  • What specific problem does this solve for them?
  • What does success look like from their perspective?
2. The What (Functional Requirements)

Translate the vision into tangible features and behaviors. You’re defining what the system must do without deciding on the technology yet.

Key Questions to Explore:
  • What are the core features needed for an MVP?
  • What kind of data will the system handle?
  • Are there critical needs like speed, security, or scale?
  • What are the aesthetic or design guidelines (e.g., “minimal, functional,” “warmer tones”)?
3. The How (Technical Exploration)

Once the “what” is clear, explore the “how.” A non-technical person can ask the AI for suggestions, while a technical person can validate their own ideas.

Key Questions to Explore:
  • Based on our needs, what technologies or stacks are a good fit?
  • What are the trade-offs between different frameworks?
  • What would a basic system architecture look like?

Scenario: Building a Scraping Tool

Let’s see this dialogue in action for a user building an internal web scraping tool.
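A condensed, illustrative excerpt of such an exchange (the specifics here are hypothetical) might look like this:

You: I want a tool that pulls article titles and publication dates from a few internal blogs every morning.
AI: Who will use the output, and in what format do they need it?
You: Just our team. One JSON file per site is fine.
AI: Do any of the sites render content with JavaScript? That affects whether we need a headless browser.
You: At least one does, so let’s plan for that.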
The output of this phase is a dialogue transcript containing clear answers to your project’s “Why,” “What,” and “How.” This transcript is the raw material for the next phase.

Build Context Library

In this phase, you’ll formalize the discoveries from your dialogue into a machine-readable specification that serves as the AI’s single source of truth. This detailed plan prevents AI amnesia and scope creep.
1. Index Key Documentation

First, gather comprehensive documentation for your chosen libraries and frameworks.
For fine-grained control, use gpt-crawler to create a docs folder containing custom knowledge files from documentation websites.
  1. Set up the crawler: git clone the repo and run npm i.
  2. Configure config.ts: Define the target url, a match pattern, and a CSS selector for the main content.
  3. Run the crawler: npm start to generate a JSON file in the docs folder.
Example: Crawling Playwright Docs
import { Config } from "./src/config";

export const defaultConfig: Config = {
    // Starting page for the crawl
    url: "https://playwright.dev/python/docs/intro",
    // Only follow links matching this pattern
    match: "https://playwright.dev/python/docs/**",
    // CSS selector for the main content to extract
    selector: `[role="main"]`,
    // Safety cap on the crawl size
    maxPagesToCrawl: 50,
    // Knowledge file written into the Context Library
    outputFileName: "docs/playwright-docs.json",
};
2. Draft the Product Requirements Document

Next, translate your dialogue transcript into a detailed Product Requirements Document (PRD). Use a long-context AI like Gemini or a local model as a partner to help structure your decisions into a formal PRD.md file.
The PRD is a living document. It serves as the definitive guide for what to build at any given time.
A strong PRD codifies the answers from your research into three core sections: the vision and goals (the Why), the functional requirements (the What), and the technical architecture (the How).
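As a rough sketch, with contents drawn from the scraper scenario (the section details are illustrative, not prescribed by the framework), a PRD.md might start like this:

# PRD: Internal Web Scraper

## 1. Vision & Goals (Why)
- The team manually copies article data from internal blogs; automate it.
- Success: fresh, validated data for every target site each morning.

## 2. Functional Requirements (What)
- Scrape the title and publication date from each configured target.
- Validate every scraped record before saving; log and skip invalid ones.
- Write one JSON output file per target site.

## 3. Technical Architecture (How)
- Python 3.11+ asynchronous CLI using Playwright for JS-rendered pages.
- Pydantic models for validation; Pytest for tests.
- Centralized LoggingService producing structured JSON logs.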
3. Designing for AI Observability

A critical architectural decision is how your application generates logs. In the MAID framework, logs are first-class context for AI. Mandate in your PRD that a centralized LoggingService must produce structured (JSON) logs. This service should support at least two AI-friendly modes (a code sketch follows the list):
  • single_file: All logs from an execution are streamed into one file for a complete overview.
  • token_batched: Logs are split into smaller files, each under a specific token limit. This is powerful for feeding a focused log batch into an AI for debugging without exceeding its context window.
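Here is a minimal sketch of such a service, assuming a rough four-characters-per-token heuristic for batching; the class and method names are illustrative, not mandated by the framework:

import json
import time
from pathlib import Path

class LoggingService:
    """Writes structured JSON log lines in AI-friendly modes."""

    def __init__(self, log_dir: str, mode: str = "single_file",
                 max_tokens_per_file: int = 4000):
        self.log_dir = Path(log_dir)
        self.log_dir.mkdir(parents=True, exist_ok=True)
        self.mode = mode
        self.max_tokens = max_tokens_per_file
        self._batch_index = 0
        self._batch_tokens = 0

    def _estimate_tokens(self, text: str) -> int:
        # Crude heuristic: roughly four characters per token.
        return len(text) // 4 + 1

    def _target_file(self, line_tokens: int) -> Path:
        if self.mode == "single_file":
            # Everything streams into one file for a complete overview.
            return self.log_dir / "run.jsonl"
        # token_batched: roll over to a new file once the limit is hit.
        if self._batch_tokens + line_tokens > self.max_tokens:
            self._batch_index += 1
            self._batch_tokens = 0
        self._batch_tokens += line_tokens
        return self.log_dir / f"batch_{self._batch_index:04d}.jsonl"

    def log(self, level: str, event: str, **fields) -> None:
        record = {"ts": time.time(), "level": level, "event": event, **fields}
        line = json.dumps(record)
        path = self._target_file(self._estimate_tokens(line))
        with path.open("a", encoding="utf-8") as f:
            f.write(line + "\n")

# Usage: each batch file stays small enough to paste into a prompt.
logs = LoggingService("logs", mode="token_batched")
logs.log("INFO", "scrape_started", target="TechBlog")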
4. Establish Project Rules

Finally, create a file (CLAUDE.md or PROJECT_RULES.md) that outlines the non-negotiable coding standards. This file prevents the AI from deviating from your plan.
This is a living document that should be updated as new architectural decisions are made.
Here’s a simple example for our web scraping project:
# Project: Internal Web Scraper

## 1. Core Technologies & Architecture
- **Tech Stack**: Python 3.11+, Playwright, Pydantic, Pytest
- **Package Manager**: UV. Managed via `pyproject.toml`.
- **Architecture**: Asynchronous, configuration-driven CLI tool.

## 2. Non-Negotiable Rules
- **Testing**: All new logic must have unit tests.
- **Logging**: All log output must be structured JSON, supporting AI-consumable modes (`single_file`, `token_batched`) as defined in the PRD.
- **Data Validation**: All data must be validated by a Pydantic model. No exceptions.
- **Git**: Commits must follow the Conventional Commits specification.
Once complete, your Context Library should contain:
  • docs/*: Documentation knowledge files.
  • PRD.md: The project’s single source of truth.
  • CLAUDE.md: The project’s coding rules.

Implement Functionality

With a detailed plan in hand, you can now move from planning to implementation. Instead of writing piecemeal prompts, you can provide the entire PRD.md and have the AI generate the application skeleton in a single pass: hand the Context Library to a specialized coding AI (like Claude Code) whose sole job is to translate your plan into clean, validated code.
A key feature of Claude Code is its ability to automatically detect and apply rules from a CLAUDE.md file in your project’s root. This means your prompts can be more direct, focusing on the what (from the PRD) while trusting the AI to handle the how (from the rules file) without being explicitly told.
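In practice, that means a prompt as short as this is enough (the target name is illustrative):

“Implement the scraper for the ‘TechBlog’ target as specified in PRD.md.”

There is no need to restate the logging, validation, or commit conventions; Claude Code applies them automatically from CLAUDE.md.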

Choose an Implementation Strategy

Depending on your project’s scale, choose one of two primary strategies: generate the entire application in one pass (the holistic approach), or build it incrementally, feature by feature.
The holistic approach is the fastest way to get started. Provide the AI with the entire Context Library (PRD.md, CLAUDE.md, and all docs/* files) and instruct it to generate the complete initial codebase. It is highly effective for new projects or applications of small-to-medium complexity; for larger systems, the incremental strategy keeps each generation pass focused and reviewable.

The Test-Driven Feedback Loop

A key strength of this framework is its test-driven feedback loop. By instructing the AI to write tests from the beginning, you create a system where the AI can build, verify, and self-correct. This transforms your role from a line-by-line coder to a strategic reviewer. The cycle is simple but effective, a systematic collaboration between you and the AI:
  1. AI Generates and Self-Corrects: Based on the PRD and your prompt, the AI generates the feature code and corresponding tests. It then enters an autonomous loop, running its own tests and fixing simple bugs (like syntax errors or missing imports) until its own quality checks pass.
  2. You Review the Implementation: Once the AI’s automated pass is complete, you step in. Review the code not just for correctness, but against the project’s true requirements: Does it solve the problem defined in the PRD? Is the user experience right? Are there edge cases the AI missed?
  3. Approve or Iterate:
    • If the feature is approved, it’s complete.
    • If it needs changes, provide specific, corrective feedback. This feedback becomes a new, more refined prompt that kicks off the cycle again, ensuring the AI’s next attempt is better than the last.
This loop automates the tedious parts of development, allowing the AI to handle initial drafting and bug-fixing while you focus on high-level architecture and final validation.
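For the scraper, the kind of test the AI generates in step 1 might look like this (the Article model and its fields are illustrative):

from datetime import datetime

import pytest
from pydantic import BaseModel, ValidationError

# Illustrative model enforcing the project rule that all
# scraped data must be validated by a Pydantic model.
class Article(BaseModel):
    title: str
    url: str
    published_at: datetime

def test_valid_article_parses():
    article = Article(
        title="Release notes",
        url="https://techblog.internal/posts/release-notes",
        published_at="2025-01-15T09:30:00",
    )
    assert article.published_at.year == 2025

def test_missing_date_is_rejected():
    with pytest.raises(ValidationError):
        Article(title="No date", url="https://techblog.internal/posts/x")

If a test like the second one fails, the AI’s autonomous loop catches it before you ever review the code.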

Scenario: Using the Holistic Approach
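For our scraping tool, the kickoff prompt might read (the wording is illustrative):

“Read PRD.md and the knowledge files in docs/. Generate the complete initial codebase for the internal web scraper described there, following every rule in CLAUDE.md. Include unit tests for all new logic, run them, and fix any failures before presenting the result.”

From here, the test-driven feedback loop above takes over: the AI drafts, self-corrects, and hands you a working skeleton to review against the PRD.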

Debug Issues

The final phase treats debugging as a systematic process, not guesswork. Instead of asking an AI, “Why is my code broken?”, you provide a curated package of information to help it diagnose and fix the root cause. Your role in this phase is to curate the right context. The AI doesn’t need to guess what the code should do; it can compare the buggy behavior against the project’s official source of truth—the PRD.md.
1. Isolate the Problem

First, reproduce the bug by running the relevant test or application workflow. Pinpoint the symptom: is it a test failure, a validation error, a browser crash, or something else?
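For example, re-run only the failing test with pytest tests/test_techblog.py -x (the path is illustrative); the -x flag stops at the first failure, keeping the traceback focused.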
2. Gather Precise Context

Once you know the symptom, gather the three key pieces of context the AI needs for an accurate diagnosis:
  • Evidence: The exact error log or test failure output. Provide the full error message and traceback.
  • Location: The specific code file(s) where the error is occurring. Focus the AI’s attention instead of providing the entire codebase.
  • Source of Truth: The expected behavior. Reference the specific section of the PRD.md or CLAUDE.md that defines how the system should have behaved.
3. Delegate the Fix to the AI

Provide the curated context to an AI with a direct command, not a vague question. The quality of the AI’s proposed solution is directly proportional to the quality of your prompt.

Your prompt must contain three elements:
  • Goal: A clear statement (e.g., “Find the root cause and provide the corrected code.”).
  • Context: The evidence and location you just assembled.
  • Source of Truth: A direct reference to your project plan (PRD.md or PROJECT_RULES.md). This forces the AI to solve the problem based on your rules, not its own assumptions.

Scenario: Solving a Pydantic Validation Error

Let’s see this process in action. Our web scraper runs, but no data is being saved for the ‘TechBlog’ target.
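A worked version of this scenario might run as follows (the Article model and the date format are illustrative). Isolating the problem with a quick script reproduces the failure:

from datetime import datetime

from pydantic import BaseModel, ValidationError

class Article(BaseModel):
    title: str
    published_at: datetime

# 'TechBlog' renders dates as human-readable strings, which Pydantic
# cannot coerce into a datetime, so every record fails validation
# and is silently skipped.
try:
    Article(title="Release notes", published_at="Jan 15, 2025")
except ValidationError as exc:
    print(exc)  # Evidence: paste this full output into the prompt.

With the evidence, location, and source of truth assembled, the debugging prompt comes together (file paths illustrative):
  • Goal: “Find the root cause and provide the corrected code.”
  • Evidence: the full ValidationError output from the failing run.
  • Location: the Article model and the TechBlog parser module.
  • Source of Truth: CLAUDE.md — “All data must be validated by a Pydantic model. No exceptions.”
The AI can now propose a targeted fix, such as parsing the site’s date format before validation, instead of guessing at the architecture.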

🎉 Done!