An AI-powered code review service that automatically analyzes GitHub pull requests using FastAPI, Celery, and OpenAI/Ollama.
This service provides automated code review capabilities by:

- Fetching PR diffs and files from GitHub
- Analyzing code using AI (OpenAI GPT or local Ollama)
- Returning structured feedback with issues and suggestions
- Processing requests asynchronously with Celery workers

## Features

- 🔍 AI Code Analysis - Uses OpenAI GPT or local Ollama models
- ⚡ Async Processing - FastAPI + Celery for scalable background tasks
- 🔒 API Protection - Rate limiting and optional API key authentication
- 🐳 Docker Ready - Complete Docker Compose setup
- ☁️ Cloud Deployable - Railway, Render, and Fly.io configurations
- 📊 Structured Results - JSON responses with categorized issues
- 🚀 Production Ready - Health checks, logging, and monitoring
## Prerequisites

- Python 3.9+
- Docker & Docker Compose
- OpenAI API key OR Ollama (for local AI)
## Quick Start

1. Clone and set up:

   ```bash
   git clone https://github.com/priyanshupardhi/code-review-ai.git
   cd code-review-ai
   ```

2. Create the environment file:

   ```bash
   # Copy the example environment file
   cp env.example .env
   # Edit .env with your configuration
   nano .env
   ```

3. Run with Docker:

   ```bash
   docker-compose up --build
   ```

4. Test the API:

   ```bash
   curl -X POST "http://localhost:8000/api/analyze-pr" \
     -H "Content-Type: application/json" \
     -d '{"repo_url": "https://github.com/user/repo", "pr_number": 1}'
   ```
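Because analysis runs asynchronously, a client submits a PR, then polls the status endpoint until the task finishes. The polling loop can be sketched as a small helper; the endpoint paths are from this README, but the helper names are illustrative, not part of the project:

```python
import time


def poll_until_done(get_status, task_id, interval=2.0, timeout=120.0):
    """Poll get_status(task_id) until the task reaches a terminal state.

    get_status should return one of the documented states:
    "PENDING", "PROCESSING", "SUCCESS", or "FAILURE".
    """
    deadline = time.monotonic() + timeout
    while True:
        status = get_status(task_id)
        if status in ("SUCCESS", "FAILURE"):
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"task {task_id} still {status} after {timeout}s")
        time.sleep(interval)


# Against a running instance you would plug in something like:
#   get_status = lambda tid: requests.get(
#       f"http://localhost:8000/api/status/{tid}").json()["status"]
```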
## API Reference

Base URLs:

- Local: `http://localhost:8000`
- Production: `https://your-app.railway.app`

Access control:

- Optional API key: include an `X-API-Key` header for protection
- Rate limiting: 2 requests per minute (configurable)
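A per-client limit like "2 requests per minute" can be pictured as a sliding-window counter. The sketch below is illustrative only; the service's actual limiter implementation is not shown in this document:

```python
import time
from collections import defaultdict


class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per client key."""

    def __init__(self, limit=2, window=60.0):
        self.limit = limit
        self.window = window
        self._hits = defaultdict(list)  # key -> timestamps of recent requests

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        hits = [t for t in self._hits[key] if now - t < self.window]
        self._hits[key] = hits
        if len(hits) < self.limit:
            hits.append(now)
            return True
        return False
```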
### GET /health

Response:

```json
{
  "status": "healthy",
  "service": "code-review-agent"
}
```

### POST /api/analyze-pr

Headers:

```
Content-Type: application/json
X-API-Key: your-api-key (optional)
```

Request body:

```json
{
  "repo_url": "https://github.com/user/repo",
  "pr_number": 1,
  "github_token": "optional-github-token"
}
```

Response:

```json
{
  "task_id": "abc123-def456-ghi789"
}
```

### GET /api/status/{task_id}

Response:
```json
{
  "task_id": "abc123-def456-ghi789",
  "status": "PROCESSING"
}
```

Possible status values: `PENDING`, `PROCESSING`, `SUCCESS`, `FAILURE`.

### GET /api/results/{task_id}

Response:
```json
{
  "task_id": "abc123-def456-ghi789",
  "status": "completed",
  "results": {
    "files": [
      {
        "name": "main.py",
        "issues": [
          {
            "type": "style",
            "line": 15,
            "description": "Line too long",
            "suggestion": "Break line into multiple lines",
            "severity": "low"
          }
        ]
      }
    ],
    "summary": {
      "total_files": 1,
      "total_issues": 1,
      "critical_issues": 0
    }
  }
}
```

### Error responses

Invalid request data:

```json
{ "detail": "Invalid request data" }
```

Invalid or missing API key:

```json
{ "detail": "Invalid or missing API key" }
```

Rate limit exceeded:

```json
{ "detail": "Rate limit exceeded. Try again later." }
```

Results not ready or task failed:

```json
{ "detail": "Results not ready or task failed" }
```

## Deployment

### Railway

1. Connect your GitHub repo to Railway
2. Add the Redis service from the marketplace
3. Set environment variables:
   - `OPENAI_API_KEY`
   - `CELERY_BROKER_URL` (from the Redis service)
   - `CELERY_RESULT_BACKEND` (from the Redis service)
   - `REDIS_URL` (from the Redis service)
   - `SECRET_KEY` (generate a strong key)

### Other platforms

- Render: use the `render.yaml` configuration
- Fly.io: use the `fly.toml` configuration
- Any Docker platform: use `Dockerfile.prod`
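For the "generate a strong key" step, one common approach is the standard-library `secrets` module; this is a suggestion, not necessarily how the project generates its default:

```python
import secrets

# 32 random bytes, hex-encoded: a 64-character value suitable for SECRET_KEY.
print(secrets.token_hex(32))
```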
## Design Decisions

Framework choices:

- FastAPI: Modern, fast web framework with automatic API documentation
- Celery: Robust task queue for async processing of AI analysis
- Redis: Lightweight broker and result backend for Celery

AI strategy:

- Plugin Pattern: Easy to switch between OpenAI and Ollama
- Cost Optimization: Defaults to `gpt-3.5-turbo` for free-tier compatibility
- Local Option: Ollama support for privacy-sensitive environments

Security:

- Rate Limiting: Prevents abuse and manages costs
- API Key Protection: Optional authentication layer
- Environment Validation: Pydantic settings with validation
- Secret Management: Secure handling of API keys and tokens

Operations:

- Health Checks: `/health` endpoint for monitoring
- Structured Logging: JSON logs for observability
- Docker Optimization: Multi-stage builds and non-root users
- Cloud Native: Railway, Render, Fly.io configurations
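The plugin pattern for switching providers can be sketched as a small interface plus a registry keyed by the `LLM_PROVIDER` setting. Class and function names here are illustrative, not the project's actual code:

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Minimal provider interface: one method that reviews a diff."""

    @abstractmethod
    def review(self, diff: str) -> str: ...


class OpenAIProvider(LLMProvider):
    def review(self, diff: str) -> str:
        # Real code would call the OpenAI API here.
        return f"openai-review:{len(diff)} chars"


class OllamaProvider(LLMProvider):
    def review(self, diff: str) -> str:
        # Real code would call a local Ollama server here.
        return f"ollama-review:{len(diff)} chars"


_PROVIDERS = {"openai": OpenAIProvider, "ollama": OllamaProvider}


def get_provider(name: str) -> LLMProvider:
    """Look up a provider class by the LLM_PROVIDER setting."""
    try:
        return _PROVIDERS[name]()
    except KeyError:
        raise ValueError(f"unknown LLM_PROVIDER: {name!r}") from None
```

Adding a new backend then means registering one more class, with no changes to the analysis pipeline.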
## Architecture

```
Client Request → FastAPI → Celery Task → GitHub API → AI Analysis → Redis Storage → Client Response
```

## Error Handling

- Graceful Degradation: Service continues if AI provider fails
- Retry Logic: Built into Celery for transient failures
- User Feedback: Clear error messages and status codes
- Logging: Comprehensive error tracking for debugging
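Celery provides retries for transient failures on the worker side; the underlying idea can be sketched as a stand-alone exponential-backoff helper (illustrative only, not the project's code):

```python
import time


def retry(fn, retries=3, base_delay=1.0, retry_on=(Exception,), sleep=time.sleep):
    """Call fn(), retrying up to `retries` times with exponential backoff."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except retry_on:
            if attempt == retries:
                raise  # out of attempts: surface the last error
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

The injectable `sleep` parameter keeps the helper testable without real delays; Celery offers the equivalent behavior declaratively.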
## Configuration

| Variable | Required | Default | Description |
|---|---|---|---|
| `OPENAI_API_KEY` | Yes* | - | OpenAI API key for AI analysis |
| `CELERY_BROKER_URL` | Yes | - | Redis URL for the Celery broker |
| `CELERY_RESULT_BACKEND` | Yes | - | Redis URL for Celery results |
| `REDIS_URL` | Yes | - | Redis connection URL |
| `GITHUB_TOKEN` | No | - | GitHub token for private repos |
| `LLM_PROVIDER` | No | `openai` | `openai` or `ollama` |
| `API_KEYS` | No | - | Comma-separated API keys for protection |
| `RATELIMIT_PER_MINUTE` | No | `2` | Rate limit (requests per minute) |
| `SECRET_KEY` | No | Generated | Secret key for security |
| `ALLOWED_ORIGINS` | No | `*` | CORS allowed origins |

\*Required unless using Ollama.

See `env.example` for a complete configuration template.
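Validation of the variables above can be sketched with a small stdlib loader. The project itself uses Pydantic settings; the names and defaults below follow the table, but the function is illustrative:

```python
import os

REQUIRED = ("CELERY_BROKER_URL", "CELERY_RESULT_BACKEND", "REDIS_URL")


def load_settings(env=None):
    """Validate required variables and apply the documented defaults."""
    env = os.environ if env is None else env
    missing = [k for k in REQUIRED if not env.get(k)]
    # OPENAI_API_KEY is required unless LLM_PROVIDER is "ollama".
    provider = env.get("LLM_PROVIDER", "openai")
    if provider == "openai" and not env.get("OPENAI_API_KEY"):
        missing.append("OPENAI_API_KEY")
    if missing:
        raise RuntimeError(f"missing environment variables: {', '.join(missing)}")
    return {
        "provider": provider,
        "rate_limit_per_minute": int(env.get("RATELIMIT_PER_MINUTE", "2")),
        "allowed_origins": env.get("ALLOWED_ORIGINS", "*"),
    }
```

Failing fast at startup with a list of every missing variable is the same behavior Pydantic settings gives declaratively.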
## Roadmap

- Caching Layer: Redis-based caching for repeated PR analysis
- Webhook Integration: Auto-trigger analysis on GitHub PR events
- Language-Specific Linters: Integration with ESLint, Pylint, etc.
- Streaming Results: Real-time progress updates via WebSockets
- Enhanced Security: JWT authentication and RBAC
- Multi-Repository Support: Batch analysis across multiple repos
- Custom Rules Engine: User-defined analysis rules and patterns
- Metrics Dashboard: Usage analytics and performance monitoring
- CI/CD Integration: GitHub Actions and GitLab CI plugins
- Advanced AI Models: Support for Claude, Gemini, and local models
- Machine Learning Pipeline: Custom models trained on code patterns
- Collaborative Features: Team-based review workflows
- Enterprise Features: SSO, audit logs, and compliance reporting
- Mobile App: Native mobile interface for code reviews
- Plugin Ecosystem: Third-party integrations and extensions
- Database Migration: PostgreSQL for persistent storage
- Microservices: Split into smaller, focused services
- API Versioning: Semantic versioning for API compatibility
- Performance Optimization: Async file processing and memory management
- Testing Coverage: Comprehensive unit and integration tests
## Example Result

```json
{
  "task_id": "abc123",
  "status": "completed",
  "results": {
    "files": [
      {
        "name": "main.py",
        "issues": [
          {
            "type": "style",
            "line": 15,
            "description": "Line too long",
            "suggestion": "Break line into multiple lines",
            "severity": "low"
          }
        ]
      }
    ],
    "summary": {
      "total_files": 1,
      "total_issues": 1,
      "critical_issues": 0
    }
  }
}
```

## Status

This project is currently in active development. For questions or suggestions, please open an issue on GitHub.

## License

MIT License - see the LICENSE file for details.