While building Vibe Browser, I prototyped a self-organizing development team composed entirely of AI agents. Here's how it works:
Coding Agents (e.g. Claude Code, Codex, Gemini CLI) generate and test code around the clock.
Analysis Agents ingest logs, UI screenshots, and test results, then flag broken or suboptimal areas.
AI Engineering Manager (powered by a premium LLM like o3, Gemini 2.5-Pro, or Claude 4-Opus) reflects on the coding agents' output, prioritizes fixes and features, and issues new tasks, ensuring we never ship "it compiles" as "it's done."
Iteration Loop continues autonomously, with the manager refining requirements and the coding agents executing them until the feature is production-ready.
This architecture lets us spin up a full "dev team" in minutes, with the quality-control layer of a senior engineering manager baked in. It's already speeding up Vibe Browser's feature cycle, and could redefine how small teams (or even solo founders) scale every aspect of software development.
A CLI-based multi-agent coding tool that uses AI to automate software development tasks through collaborative AI agents.
- AI-Powered Team: Engineering Manager orchestrates specialized coding agents
- Task Automation: From simple scripts to full applications
- Real-Time Collaboration: Interactive CLI with progress tracking
- Smart Testing: Comprehensive test suite with real LLM integration
- Extensible: Plugin architecture for adding new agents and tools
```bash
pip install git+https://github.com/VibeTechnologies/VibeTeam.git
```

1. Clone the repository:

   ```bash
   git clone https://github.com/VibeTechnologies/VibeTeam.git
   cd VibeTeam
   ```

2. Install in development mode:

   ```bash
   pip install -e .
   # or for development with extra dependencies:
   pip install -e .[dev]
   ```

3. Set API keys (required for AI functionality):

   ```bash
   # For Claude Code functionality (primary)
   export ANTHROPIC_API_KEY="your-anthropic-key"
   # For OpenAI reflection and analysis (optional)
   export OPENAI_API_KEY="your-openai-key"
   export OPENAI_BASE_URL="https://api.openai.com/v1"  # optional, for custom endpoints
   ```

4. Run the automated task system:

   ```bash
   vibeteam-task
   ```

5. Use with reflection (enhanced quality):

   ```bash
   vibeteam-task --enable-reflection --debug
   ```

6. Run the MCP server (for ChatGPT/Claude integration):

   ```bash
   # Default: tunnel mode for public access
   vibeteam-mcp
   # Standard mode (stdio protocol)
   vibeteam-mcp --no-tunnel
   ```
After installation, you'll have access to these commands:
- `vibeteam-task` - Automated task completion from `tasks.md` with retry support and optional OpenAI reflection
- `vibeteam-cli` - Interactive multi-agent coding interface
- `vibeteam-mcp` - Model Context Protocol (MCP) server for ChatGPT/Claude integration
```bash
# Basic task automation
vibeteam-task

# Task automation with OpenAI reflection (enhanced quality)
vibeteam-task --enable-reflection --debug

# Task automation with retry support (handles Claude usage limits)
vibeteam-task --retry

# Combined: retry + reflection for maximum reliability
vibeteam-task --retry --enable-reflection

# Custom directory and tasks file
vibeteam-task --dir /path/to/project --tasks-file my-tasks.md

# MCP server
vibeteam-mcp              # Default: Cloudflare tunnel for public access
vibeteam-mcp --no-tunnel  # Standard MCP protocol (stdio)
vibeteam-mcp --port 9000  # Custom port with tunnel
```

The `vibeteam-task` command automatically reads tasks from `tasks.md` and completes them using Claude Code agents.
1. Create a `tasks.md` file with checkbox-style tasks:

   ```markdown
   [ ] Write python hello world hello.py.
   [ ] Simple html page hello world hello.html.
   [ ] Create a REST API endpoint for user registration.
   ```

2. Run the automated task completion:

   ```bash
   vibeteam-task
   ```

   Or specify a different directory:

   ```bash
   vibeteam-task --dir /path/to/your/project
   ```
The system will:

- Read uncompleted tasks from `tasks.md`
- Use the Claude Code agent to complete each task
- Create files, write tests, run tests, and fix issues
- Mark tasks as completed in `tasks.md`
- Commit changes to git
- Automatically retry on Claude usage limits and API failures
- Optionally use OpenAI for reflection and quality analysis
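As a rough illustration of the "read uncompleted tasks" step, a checkbox parser along these lines could split a `tasks.md` file into done and pending work. This is a hypothetical sketch, not the actual VibeTeam implementation:

```python
import re

def parse_tasks(text):
    """Return (done, pending) task descriptions from checkbox-style markdown.

    Accepts both bare "[ ] task" lines and GitHub-style "- [ ] task" lines.
    """
    done, pending = [], []
    for line in text.splitlines():
        m = re.match(r"\s*(?:-\s*)?\[( |x)\]\s*(.+)", line)
        if m:
            (done if m.group(1) == "x" else pending).append(m.group(2))
    return done, pending

done, pending = parse_tasks("[x] Write hello.py.\n[ ] Simple html page hello.html.")
# done    -> ["Write hello.py."]
# pending -> ["Simple html page hello.html."]
```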
- ✅ Automatic Task Detection: Reads `[ ]` unchecked tasks from `tasks.md`
- ✅ Full Development Cycle: Creates code, tests, runs tests, fixes issues
- ✅ Git Integration: Reviews changes and commits completed work
- ✅ Smart Retry System: Automatically retries on Claude usage limits and transient failures
- ✅ Progress Tracking: Updates `tasks.md` with completed tasks `[x]`
- ✅ OpenAI Reflection: Optional quality analysis and improvement suggestions
- ✅ MCP Server: Standard Model Context Protocol for ChatGPT/Claude integration
- ✅ Deployment Ready: Docker, Cloudflare Tunnel, Kubernetes support
VibeTeam includes a full MCP server implementation that exposes AI coding capabilities to ChatGPT, Claude, and other MCP clients.
- `execute_task` - Execute coding tasks with Claude Code Agent
- `review_code` - Review code for quality and improvements
- `generate_code` - Generate code from specifications
- `fix_code` - Fix bugs and issues in code
- `write_tests` - Create unit tests for code
- `complete_tasks` - Complete tasks from `tasks.md`
- `manage_project` - Use Engineering Manager for coordination
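For orientation, a raw call to one of these tools over the stdio transport would be a JSON-RPC `tools/call` request, per the MCP specification. The tool name comes from the list above; the `specification` argument here is an assumed parameter name for illustration:

```python
import json

# Build an MCP "tools/call" request targeting the generate_code tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "generate_code",
        # Argument name is illustrative; check the server's tool schema.
        "arguments": {"specification": "FizzBuzz in Python"},
    },
}
# The stdio transport sends one JSON object per line.
line = json.dumps(request)
```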
1. Public Access via Cloudflare (Default):

   ```bash
   vibeteam-mcp  # Default: automatically starts tunnel
   ```

2. Local Development:

   ```bash
   vibeteam-mcp --no-tunnel  # Standard MCP protocol (stdio)
   ```

3. Docker Deployment:

   ```bash
   docker build -t vibeteam .
   docker run -p 8080:8080 vibeteam
   ```
By default, `vibeteam-mcp` automatically:
- Starts an HTTP server on the specified port (default: 8080)
- Launches a Cloudflare tunnel for public access
- Provides a public URL that can be used with any MCP client
- Eliminates the need for manual tunnel setup scripts
Requirements: Install `cloudflared` from Cloudflare
Example:

```bash
vibeteam-mcp --port 9000
# Output: VibeTeam MCP server is publicly accessible at: https://example-123.trycloudflare.com

# To disable tunnel mode:
vibeteam-mcp --no-tunnel
```

The integrated tunnel approach eliminates the need for manual deployment scripts. For advanced deployment scenarios, see DEPLOYMENT.md.
```bash
python -m cli.main_cli start
python -m cli.main_cli execute "Create a REST API with FastAPI and SQLAlchemy"
```

Run all tests:
```bash
pytest tests/
```

Run specific test categories:

```bash
# Core functionality tests
pytest tests/test_*.py

# MCP server tests
pytest tests/mcp/

# Cloudflare tunnel tests
pytest tests/tunnel/
```

Note: Some tests require API keys to be set. Tests are automatically run via GitHub Actions on push/PR.
- Claude Code Agent: Primary coding agent using Anthropic Claude via claude-code-sdk
- Engineering Manager: Task orchestration and quality control
- MCP Server: Standard Model Context Protocol implementation for external AI integration
- Task Automation: File-based task management with `tasks.md`
- Reflection Module: Optional OpenAI-powered quality analysis and improvement suggestions
- Deployment Infrastructure: Docker, Cloudflare Tunnel, Kubernetes support
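To make the manager/agent split above concrete, here is a deliberately minimal sketch of how an orchestrator could fan tasks out to coding agents. Class and method names are hypothetical, not the VibeTeam API:

```python
class CodingAgent:
    """Stand-in for a worker agent such as the Claude Code Agent."""

    def run(self, task):
        # A real agent would generate code, run tests, and report results.
        return f"completed: {task}"


class EngineeringManager:
    """Stand-in for the orchestration layer: assigns tasks to agents."""

    def __init__(self, agents):
        self.agents = agents

    def process(self, tasks):
        # Round-robin tasks across the available coding agents.
        return [
            self.agents[i % len(self.agents)].run(task)
            for i, task in enumerate(tasks)
        ]


results = EngineeringManager([CodingAgent()]).process(["write hello.py"])
# results -> ["completed: write hello.py"]
```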
```
VibeTeam/
├── agents/                    # Agent implementations
│   ├── claude_code_agent.py   # Primary coding agent
│   ├── engineering_manager.py # Task orchestration
│   └── base_agent.py          # Base agent class
├── mcp/                       # Model Context Protocol server
│   ├── vibeteam_mcp_server.py # Main MCP implementation
│   └── stdio_server.py        # Standard MCP protocol
├── cli/                       # Command-line interface
├── tests/                     # Comprehensive test suite
├── deploy/                    # Deployment configurations
│   ├── cloudflare/            # Cloudflare Tunnel setup
│   ├── k8s/                   # Kubernetes manifests
│   └── local/                 # Local development
└── vibeteam_tasks.py          # Main task automation script
```
- Task Input: Create tasks in `tasks.md` with checkbox format `[ ] Task description`
- Execution: Run `vibeteam-task` to automatically process unchecked tasks
- AI Processing: Claude Code Agent analyzes each task and generates a solution
- Quality Control: Optional OpenAI reflection provides analysis and suggestions
- Testing: Automatically creates and runs tests for generated code
- Git Integration: Reviews changes and commits completed work
- Task Completion: Marks tasks as done `[x]` in `tasks.md`
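The final step of this workflow, flipping a `[ ]` checkbox to `[x]`, can be sketched with a one-line text rewrite. This is a hypothetical helper for illustration, not the actual VibeTeam code:

```python
def mark_done(text, task):
    """Mark the first matching unchecked task as completed in tasks.md text."""
    return text.replace(f"[ ] {task}", f"[x] {task}", 1)

updated = mark_done("[ ] Write hello.py.\n[ ] Add tests.", "Write hello.py.")
# updated -> "[x] Write hello.py.\n[ ] Add tests."
```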
- Deployment Guide: Complete deployment options and setup
- MCP Server Guide: Model Context Protocol implementation details
- Testing Guide: How to run and write tests
- GitHub Actions: Automated CI/CD workflows
Required:

```bash
ANTHROPIC_API_KEY="your-anthropic-key"        # For Claude Code Agent
```

Optional:

```bash
OPENAI_API_KEY="your-openai-key"              # For reflection analysis
OPENAI_BASE_URL="https://api.openai.com/v1"   # Custom OpenAI endpoint
VIBETEAM_WORKING_DIR="/path/to/project"       # Default working directory
```

- API Key Missing: Set the `ANTHROPIC_API_KEY` environment variable
- Task Timeout: Tasks may take several minutes to complete
- Git Issues: Ensure git is configured and the working directory is a git repo
- Test Failures: Some tests require internet access and API keys

Enable detailed logging:

```bash
vibeteam-task --debug
```

- MCP server logs: `vibeteam-mcp.log`
- Task execution logs: `mcp_server.log`
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Add tests if applicable
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
MIT License - see LICENSE file for details
- Built with claude-code-sdk-python for AI agent functionality
- Model Context Protocol (MCP) for standardized AI integration
- OpenAI API for reflection and quality analysis
- Docker and Cloudflare for deployment infrastructure
