What is “vibe coding”?

Vibe coding is diving into AI-driven development with a hazy idea and no plan. While it feels productive at first, it often leads to a shaky foundation that makes future development messy and frustrating. Without a clear plan, the AI can seem forgetful, producing inconsistent code and logic. This leads to scope creep, bugs, and an unfinished product that misses the original goal.
Instead of building on a vibe, build from a plan. This is the purpose of the Multi-AI Iterative Development (MAID) framework, a systematic method for creating robust, scalable applications with AI. The framework is built on four distinct phases: Conversational Research, Build Context Library, Implement Functionality, and Debug Issues.
This principle of using the right tool for the job mirrors how professionals already work. The 2025 Stack Overflow Developer Survey shows that developers strategically use different AI models for specific tasks, from general-purpose ideation with ChatGPT to complex reasoning with Gemini and code implementation with Claude Sonnet. MAID formalizes this practice, providing a structured workflow to build better software with AI.

Conversational Research

The goal of this phase is to transform a vague idea into a concrete project vision by using a general-purpose AI as a structured brainstorming partner. This initial dialogue prevents “vibe coding” by establishing a clear “Why,” “What,” and “How” before any other work begins. The conversation follows a simple funnel, moving from high-level vision to specific technical exploration. This structured dialogue ensures all critical questions are answered upfront.
This process adapts to you. A non-technical founder might spend more time on the Why, while a senior engineer can focus on the How. The goal is always the same: clarity.
1. Why

This stage defines the user’s pain points and the ultimate goal, using simple, non-technical language. Key Questions: Who is this for? What problem does it solve? What does success look like?
2. What

Translate the vision into tangible features and behaviors without deciding on technology. Key Questions: What are the core features for an MVP? What kind of data will it handle? Are there critical needs like speed, security, or scale?
3. How

This stage explores the technical approach. A non-technical person can ask the AI for suggestions, while a technical person can validate their own ideas. Key Questions: What technologies are a good fit? What are the trade-offs? What would a basic system architecture look like?

Build Context Library

The objective is to formalize the discoveries from your dialogue into a machine-readable Context Library that serves as the AI’s single source of truth. This detailed plan prevents AI amnesia and scope creep by giving the model a definitive reference for the entire project. This involves creating three key artifacts: indexed documentation, a Product Requirements Document (PRD), and a set of project rules.
1. Index Key Documentation

First, gather comprehensive documentation for your chosen libraries and frameworks so the AI can reference official information.
For fine-grained control, use gpt-crawler to create custom knowledge files from documentation websites.
Example gpt-crawler configuration:
import { Config } from "./src/config";

export const defaultConfig: Config = {
    url: "https://playwright.dev/python/docs/intro",
    match: "https://playwright.dev/python/docs/**",
    selector: `[role="main"]`,
    maxPagesToCrawl: 50,
    outputFileName: "docs/playwright-docs.json",
};
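In this configuration, match scopes the crawl to the Python flavor of the Playwright documentation, while selector restricts extraction to each page’s main content region, so the generated docs/playwright-docs.json contains reference material rather than navigation chrome.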
2. Draft the Product Requirements Document (PRD)

Next, translate your dialogue transcript into a detailed PRD.md file using a long-context AI like Gemini as a structuring partner. The PRD codifies your decisions and serves as the project blueprint.
The PRD is a living document. It serves as the definitive guide for what to build at any given time.
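As a purely illustrative sketch of its skeleton for the web-scraper example used throughout this guide (the heading names are assumptions, chosen here to line up with the PRD sections referenced later), PRD.md might be organized like this:
# PRD: Internal Web Scraper

## 1. Vision & Goals (the “Why” from your research dialogue)
## 2. System Architecture (the “How”)
### 2.1 Project Structure
### 2.2 Foundational Services: ConfigService, StorageService, LoggingService
## 3. Features & Data Models (the “What”)
### 3.1 MVP Scraping Targets
### 3.2 BlogPost Schema (publication_date must be a datetime)
## 4. Non-Functional Requirements (logging modes, performance, error handling)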
3. Design for AI Observability

A critical architectural decision is how your application generates logs. In the MAID framework, logs are first-class context for AI. Mandate in your PRD that a centralized LoggingService must produce structured (JSON) logs. This service should support at least two AI-friendly modes (a minimal sketch follows the list below):
  • single_file: All logs from an execution are streamed into one file for a complete overview.
  • token_batched: Logs are split into smaller files, each under a specific token limit. This is powerful for feeding a focused log batch into an AI for debugging without exceeding its context window.
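Here is a minimal Python sketch of what such a service could look like; the class shape, the roughly four-characters-per-token estimate, and the file naming are illustrative assumptions, not part of the MAID spec.
import json
import time
from pathlib import Path

class LoggingService:
    """Emits structured JSON logs in the AI-friendly modes mandated by the PRD."""

    def __init__(self, mode: str = "single_file", log_dir: Path = Path("logs"),
                 max_tokens_per_file: int = 8000):
        self.mode = mode
        self.log_dir = log_dir
        self.log_dir.mkdir(parents=True, exist_ok=True)
        self.max_tokens_per_file = max_tokens_per_file
        self._tokens_in_batch = 0   # running estimate for token_batched mode
        self._batch_index = 0

    def log(self, level: str, message: str, **context) -> None:
        record = {"ts": time.time(), "level": level, "message": message, **context}
        line = json.dumps(record)
        estimated_tokens = max(1, len(line) // 4)  # rough heuristic: ~4 chars per token
        if self.mode == "token_batched":
            # Roll over to a new file once the current batch would exceed the limit.
            if self._tokens_in_batch + estimated_tokens > self.max_tokens_per_file:
                self._batch_index += 1
                self._tokens_in_batch = 0
            self._tokens_in_batch += estimated_tokens
            target = self.log_dir / f"run_batch_{self._batch_index:03d}.jsonl"
        else:  # single_file: one complete stream per execution
            target = self.log_dir / "run.jsonl"
        with target.open("a", encoding="utf-8") as f:
            f.write(line + "\n")
With token_batched mode, each run_batch_*.jsonl file stays under the configured token budget, so a single batch can be pasted into a debugging prompt in the final phase without exceeding the model’s context window.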
4. Establish Project Rules

Finally, create a file (CLAUDE.md or AGENTS.md) that outlines the non-negotiable coding standards. This file prevents the AI from deviating from your plan.
# Project: Internal Web Scraper

## 1. Core Technologies & Architecture
- **Tech Stack**: Python 3.11+, Playwright, Pydantic, Pytest
- **Package Manager**: UV. Managed via `pyproject.toml`.
- **Architecture**: Asynchronous, configuration-driven CLI tool.

## 2. Non-Negotiable Rules
- **Testing**: All new logic must have unit tests.
- **Logging**: All log output must be structured JSON, supporting AI-consumable modes (`single_file`, `token_batched`) as defined in the PRD.
- **Data Validation**: All data must be validated by a Pydantic model. No exceptions.
- **Git**: Commits must follow the Conventional Commits specification.
You now have a complete Context Library, the AI’s single source of truth, containing:
  • docs/*: Indexed documentation files.
  • PRD.md: The project’s official blueprint.
  • CLAUDE.md: The project’s non-negotiable coding rules.

Implement Functionality

With a comprehensive plan in place, the goal is to use a specialized coding AI to translate the Context Library into clean, high-quality, and test-covered code. Your role shifts from writing code line-by-line to orchestrating the AI and validating its output against the PRD. This phase centers on a test-driven feedback loop where the AI builds, verifies, and self-corrects.
A key feature of tools like Claude Code is their ability to automatically detect and apply rules from a CLAUDE.md file in your project’s root. This means your prompts can be more direct, focusing on the what (from the PRD) while trusting the AI to handle the how (from the rules file).
The cycle is simple but effective:
  1. AI Generates and Self-Corrects: Based on the PRD, the AI generates feature code and corresponding tests, then autonomously fixes simple bugs until its own checks pass.
  2. You Review the Implementation: You validate the AI-generated code against the PRD’s requirements, user experience goals, and potential edge cases.
  3. Approve or Iterate: If approved, the feature is complete. If not, you provide specific, corrective feedback, which becomes a more refined prompt for the next cycle.
User: “Using the attached PRD.md as your guide, generate the entire initial project structure and the complete code for all foundational services: ConfigService, StorageService, and LoggingService. Also, generate the corresponding pytest unit tests for each service.”
AI: “Understood. I have loaded your PRD.md and will apply the architectural rules and standards from CLAUDE.md. I will now generate the initial project structure, all specified foundational services, and their corresponding unit tests.”
“Here is the directory structure based on Section 2.1 of the PRD:”
.
├── config/
│   └── targets.yaml
├── src/
│   ├── services/
│   │   ├── __init__.py
│   │   ├── config_service.py
│   │   ├── logging_service.py
│   │   └── storage_service.py
│   ├── models/
│   └── scrapers/
├── tests/
│   └── test_services.py
└── run.py
“And here is the code for src/services/storage_service.py, adhering to the PRD:”
src/services/storage_service.py
import asyncio
import json
import logging
from pathlib import Path
from typing import Dict, Any
import aiofiles
from pydantic import BaseModel

logger = logging.getLogger(__name__)

class StorageService:
    """Provides async, thread-safe methods for storing validated data and error logs."""
    def __init__(self, output_dir: Path = Path("output")):
        # ... implementation ...

    async def save_validated_data(self, model_instance: BaseModel, output_file: str) -> None:
        # ... implementation ...

    async def save_error_html(self, html_content: str, failed_url: str) -> None:
        # ... implementation ...
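The method bodies above are intentionally elided in the AI’s reply. Purely as an illustrative sketch, assuming JSONL append semantics and an asyncio.Lock to serialize writes, save_validated_data could be fleshed out along these lines:
import asyncio
from pathlib import Path

import aiofiles
from pydantic import BaseModel

class StorageService:
    """Illustrative sketch only; the real methods are generated against the PRD."""

    def __init__(self, output_dir: Path = Path("output")):
        self.output_dir = output_dir
        self.output_dir.mkdir(parents=True, exist_ok=True)
        self._write_lock = asyncio.Lock()  # serializes concurrent writers

    async def save_validated_data(self, model_instance: BaseModel, output_file: str) -> None:
        """Appends one validated record as a JSON line (JSONL) to the target file."""
        path = self.output_dir / output_file
        line = model_instance.model_dump_json()  # Pydantic v2 serialization
        async with self._write_lock:
            async with aiofiles.open(path, mode="a", encoding="utf-8") as f:
                await f.write(line + "\n")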
The result of this iterative cycle is a fully implemented feature, complete with passing tests, that directly adheres to the specifications laid out in the PRD.

Debug Issues

The final phase treats debugging not as guesswork, but as a systematic process of providing curated context to an AI to diagnose and fix the root cause of an issue. Instead of asking, “Why is my code broken?”, you provide a precise package of information.
1. Isolate the Problem

First, reproduce the bug by running the relevant test or application workflow. Pinpoint the exact symptom (e.g., test failure, validation error, crash).
2. Gather Precise Context

Assemble the three key pieces of information the AI needs for an accurate diagnosis:
  • Evidence: The exact error log or test failure output with the full traceback.
  • Location: The specific code file(s) where the error is occurring.
  • Source of Truth: The section of the PRD.md that defines how the system should have behaved.
3. Delegate the Fix to the AI

Provide the curated context to an AI with a direct command. Your prompt should clearly state the goal, provide the context, and reference the source of truth.
The user isolates a bug where techblog_posts.jsonl is empty and finds a ValidationError in the logs. They then craft a precise prompt for the AI.
User Prompt: “Goal: Find the root cause of this validation error and provide the corrected code.
Evidence (from logs/app.log):
{
    "level": "ERROR",
    "message": "Pydantic validation failed for TechBlog",
    "error": "1 validation error for BlogPost\npublication_date\n  Input should be a valid datetime..."
}
Location: src/models/blog.py (model definition) and src/scrapers/blog_scraper.py (data extraction).
Source of Truth: Section 3.2 of PRD.md requires the publication_date field to be a datetime object.”
AI: “Understood. The error log confirms the publication_date field is receiving a string that cannot be parsed directly. The blog_scraper.py is likely extracting a human-readable date format (e.g., “August 14, 2025”) that Pydantic doesn’t recognize by default.
Solution: I will add a custom validator to the BlogPost model in src/models/blog.py to parse this specific date format before validation occurs. This centralizes the validation logic within the model, adhering to our schema-first principle.”
AI: “Here is the updated src/models/blog.py file with the fix:”
src/models/blog.py
from pydantic import BaseModel, HttpUrl, field_validator
from datetime import datetime

class BlogPost(BaseModel):
    title: str
    author: str
    publication_date: datetime
    url: HttpUrl
    content_length: int

    @field_validator('publication_date', mode='before')
    @classmethod
    def parse_publication_date(cls, value: str) -> datetime:
        """Parses a string like 'August 14, 2025' into a datetime object."""
        if isinstance(value, str):
            try:
                return datetime.strptime(value, '%B %d, %Y')
            except ValueError:
                raise ValueError(f"Invalid date format: {value}")
        return value
AI: “Explanation: The @field_validator intercepts the incoming string. It uses datetime.strptime to parse the specific format into a valid datetime object before Pydantic performs its own validation. This resolves the error while keeping the data model robust.”
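To sanity-check a fix like this, a quick snippet (the import path and field values here are hypothetical) can confirm that the previously failing date format now parses:
from src.models.blog import BlogPost  # path assumed from the project tree above

post = BlogPost(
    title="Scaling Async Scrapers",          # made-up sample values
    author="Jane Doe",
    publication_date="August 14, 2025",      # the human-readable format that used to fail
    url="https://example.com/blog/scaling-async-scrapers",
    content_length=4213,
)
assert post.publication_date.year == 2025    # parsed into a real datetime by the validator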
The outcome is a precise, context-aware fix that addresses the root cause of the bug, along with a deeper understanding of why the failure occurred, reinforcing the project’s overall quality.

🎉 Done!