An intelligent, multi-agent system for automated code review of GitHub Pull Requests. This system uses multiple specialized AI agents to analyze code changes and provide comprehensive, actionable review comments.
Multi-Agent Architecture: Four specialized agents working together:
- Logic Agent: Identifies logical errors, bugs, and correctness issues
- Readability Agent: Analyzes code readability, maintainability, and style
- Performance Agent: Detects performance bottlenecks and optimization opportunities
- Security Agent: Identifies security vulnerabilities and best practices
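The fan-out pattern above can be sketched in a few lines. `Agent` and `run_agents` here are illustrative stubs, not the project's actual classes:

```python
# Illustrative stubs of the four-agent fan-out; not the project's real classes.
class Agent:
    def __init__(self, category):
        self.category = category

    def review(self, diff_text):
        # A real agent would prompt an LLM here; the stub just tags the diff.
        return [{"category": self.category,
                 "message": f"{self.category}: reviewed {len(diff_text)} chars"}]

def run_agents(diff_text, agents):
    # Every enabled agent sees the same diff; their comments are merged.
    comments = []
    for agent in agents:
        comments.extend(agent.review(diff_text))
    return comments

agents = [Agent(c) for c in ("logic", "readability", "performance", "security")]
comments = run_agents("+    return None", agents)
```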
Flexible Input Methods:
- GitHub PR URL (automatic fetching)
- Manual diff text input
- Direct API calls
Comprehensive Analysis:
- Structured review comments with severity levels
- Code suggestions and improvements
- File-level and line-level analysis
- Summary statistics and agent reports
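The structured output described above can be modeled roughly as follows. `ReviewComment` and `summarize` are simplified stand-ins for the project's Pydantic schemas, with field names taken from the example response later in this README:

```python
from collections import Counter
from dataclasses import dataclass

# Simplified stand-in for the project's Pydantic schemas; field names
# follow the example response shown later in this README.
@dataclass
class ReviewComment:
    file_path: str
    line_number: int
    category: str   # logic | readability | performance | security
    severity: str   # critical | high | medium | low
    message: str
    suggestion: str = ""

def summarize(comments):
    # Aggregate per-severity and per-category counts for the summary block.
    return {
        "total_comments": len(comments),
        "severities": dict(Counter(c.severity for c in comments)),
        "categories": dict(Counter(c.category for c in comments)),
    }

summary = summarize([
    ReviewComment("src/main.py", 42, "security", "critical", "SQL injection risk"),
    ReviewComment("src/main.py", 10, "readability", "low", "Rename variable"),
])
```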
Modern Tech Stack:
- FastAPI for RESTful API
- LangChain for LLM orchestration
- Streamlit for user-friendly UI
- Llama3 via Groq API for intelligent analysis (fast and cost-effective)
Prerequisites:
- Python 3.8+
- Groq API key (get one from https://console.groq.com/)
- GitHub token (optional, for fetching PRs directly)
```bash
# Navigate to project directory
cd pr-reviewer

# Create virtual environment
python -m venv venv

# Activate virtual environment
# On Windows:
venv\Scripts\activate
# On Linux/Mac:
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt
```

Create a `.env` file in the project root:

```bash
cp .env.example .env
```

Edit `.env` and add your credentials:
```env
GROQ_API_KEY=your_groq_api_key_here
LLM_MODEL=llama-3.1-70b-versatile
LLM_TEMPERATURE=0.3
GITHUB_TOKEN=your_github_token_here  # Optional
```

Available Groq Llama3 models:
- `llama-3.1-70b-versatile` (recommended, best quality)
- `llama-3.1-8b-instant` (faster, good quality)
- `llama3-70b-8192`
- `llama3-8b-8192`
- `mixtral-8x7b-32768`
```bash
# From project root
python -m app.main

# Or using uvicorn directly
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```

The API will be available at http://localhost:8000
```bash
# In a new terminal (with venv activated)
streamlit run ui/streamlit_app.py
```

The UI will open in your browser at http://localhost:8501
Visit http://localhost:8000/docs for interactive API documentation (Swagger UI)
- Start the FastAPI backend (if not already running)
- Start the Streamlit UI
- Choose review mode:
  - GitHub PR URL: Enter a PR URL like https://github.com/owner/repo/pull/123
  - Manual Diff Input: Paste your diff text directly
- Select which agents to enable
- Click "Review PR" or "Review Diff"
- View results with filtering options
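The same request the UI sends can also be built programmatically. This sketch only assembles the JSON body (field names follow the `/review` examples in this README); `build_review_payload` is a hypothetical helper, and the actual POST is left as a comment since it requires the backend to be running:

```python
import json

# Hypothetical helper that assembles the /review request body
# (field names follow the curl examples in this README).
def build_review_payload(pr_url, enabled=("logic", "readability", "performance", "security")):
    all_agents = ("logic", "readability", "performance", "security")
    return {
        "pr_url": pr_url,
        "enable_agents": {name: name in enabled for name in all_agents},
    }

payload = build_review_payload(
    "https://github.com/owner/repo/pull/123",
    enabled=("logic", "security"),
)
body = json.dumps(payload)

# To actually send it (backend must be running):
#   import requests
#   requests.post("http://localhost:8000/review", json=payload, timeout=300)
```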
```bash
curl -X POST "http://localhost:8000/review" \
  -H "Content-Type: application/json" \
  -d '{
    "pr_url": "https://github.com/owner/repo/pull/123",
    "enable_agents": {
      "logic": true,
      "readability": true,
      "performance": true,
      "security": true
    }
  }'
```

```bash
curl -X POST "http://localhost:8000/review" \
  -H "Content-Type: application/json" \
  -d '{
    "diff_text": "--- a/file.py\n+++ b/file.py\n@@ -1,3 +1,4 @@\n def test():\n pass\n+ return None",
    "file_path": "file.py"
  }'
```

```bash
curl -X POST "http://localhost:8000/review/async" \
  -H "Content-Type: application/json" \
  -d '{
    "pr_url": "https://github.com/owner/repo/pull/123"
  }'
```

```
pr-reviewer/
├── app/
│   ├── __init__.py
│   ├── main.py                    # FastAPI application
│   ├── config.py                  # Configuration settings
│   ├── agents/
│   │   ├── __init__.py
│   │   ├── base_agent.py          # Base agent class
│   │   ├── logic_agent.py         # Logic review agent
│   │   ├── readability_agent.py   # Readability review agent
│   │   ├── performance_agent.py   # Performance review agent
│   │   └── security_agent.py      # Security review agent
│   ├── services/
│   │   ├── __init__.py
│   │   ├── github_service.py      # GitHub API integration
│   │   ├── diff_parser.py         # Diff parsing utilities
│   │   └── review_orchestrator.py # Multi-agent orchestration
│   └── models/
│       ├── __init__.py
│       └── schemas.py             # Pydantic models
├── ui/
│   └── streamlit_app.py           # Streamlit UI
├── requirements.txt
├── .env.example
├── .gitignore
└── README.md
```
- `GROQ_API_KEY`: Your Groq API key (required)
- `GITHUB_TOKEN`: GitHub personal access token (optional, for PR fetching)
- `LLM_MODEL`: LLM model to use (default: `llama-3.1-70b-versatile`)
- `LLM_TEMPERATURE`: Temperature for the LLM (default: `0.3`)
- `MAX_TOKENS`: Maximum tokens per agent response (default: `2000`)
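As an illustration, these settings could be read with plain `os.getenv`-style lookups and defaults. `load_settings` is a hypothetical helper; the project's actual `config.py` may differ:

```python
import os

# Hypothetical sketch of reading the settings above with defaults;
# the project's actual config.py may differ.
def load_settings(env=os.environ):
    return {
        "groq_api_key": env.get("GROQ_API_KEY", ""),   # required
        "github_token": env.get("GITHUB_TOKEN"),       # optional
        "llm_model": env.get("LLM_MODEL", "llama-3.1-70b-versatile"),
        "llm_temperature": float(env.get("LLM_TEMPERATURE", "0.3")),
        "max_tokens": int(env.get("MAX_TOKENS", "2000")),
    }

settings = load_settings({"GROQ_API_KEY": "gsk_test", "LLM_TEMPERATURE": "0.1"})
```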
You can enable/disable specific agents per request:
```json
{
  "enable_agents": {
    "logic": true,
    "readability": true,
    "performance": false,
    "security": true
  }
}
```

The API returns structured review data:
```json
{
  "pr_url": "https://github.com/owner/repo/pull/123",
  "comments": [
    {
      "line_number": 42,
      "file_path": "src/main.py",
      "category": "security",
      "severity": "critical",
      "message": "SQL injection vulnerability detected",
      "suggestion": "Use parameterized queries",
      "code_snippet": "query = f'SELECT * FROM users WHERE id = {user_id}'"
    }
  ],
  "summary": {
    "total_comments": 15,
    "critical_issues": 2,
    "high_issues": 5,
    "medium_issues": 6,
    "low_issues": 2,
    "categories": {
      "security": 2,
      "logic": 5,
      "readability": 6,
      "performance": 2
    }
  },
  "agent_reports": {
    "security": {
      "total_comments": 2,
      "files_reviewed": 1
    }
  }
}
```

Health check:

```bash
curl http://localhost:8000/health
```

Review a sample diff:

```bash
curl -X POST "http://localhost:8000/review" \
  -H "Content-Type: application/json" \
  -d '{
    "diff_text": "--- a/test.py\n+++ b/test.py\n@@ -1,2 +1,3 @@\n def hello():\n print(\"Hello\")\n+ return None"
  }'
```

- Input Processing: System accepts a PR URL or manual diff text
- Diff Parsing: Extracts file-level changes and line numbers
- Multi-Agent Analysis: Each enabled agent analyzes the code:
- Logic Agent checks for bugs and logical errors
- Readability Agent evaluates code quality
- Performance Agent identifies bottlenecks
- Security Agent scans for vulnerabilities
- Orchestration: Review orchestrator coordinates agents and aggregates results
- Response Generation: Structured comments with severity, suggestions, and code snippets
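The diff-parsing step above can be sketched as follows: walk the unified-diff hunk headers and record the new-file line number of each added line. This is a simplification for illustration, not the project's actual `diff_parser`:

```python
import re

# Matches a unified-diff hunk header, e.g. "@@ -1,3 +1,4 @@",
# capturing the starting line number in the new file.
HUNK_RE = re.compile(r"^@@ -\d+(?:,\d+)? \+(\d+)(?:,\d+)? @@")

def added_lines(diff_text):
    # Simplified illustration of diff parsing; not the project's diff_parser.
    results = []
    new_line = 0
    for line in diff_text.splitlines():
        m = HUNK_RE.match(line)
        if m:
            new_line = int(m.group(1))
            continue
        if line.startswith("+++") or line.startswith("---"):
            continue  # file headers
        if line.startswith("+"):
            results.append((new_line, line[1:]))
            new_line += 1
        elif line.startswith("-"):
            continue  # removed line: only the old-file counter advances
        else:
            new_line += 1  # context line
    return results

diff = "\n".join([
    "--- a/file.py",
    "+++ b/file.py",
    "@@ -1,3 +1,4 @@",
    " def test():",
    "     pass",
    "+    return None",
])
```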
- Create a new agent class in `app/agents/`:

```python
from app.agents.base_agent import BaseAgent
from app.models.schemas import ReviewCategory

class CustomAgent(BaseAgent):
    def __init__(self):
        super().__init__("CustomAgent", ReviewCategory.BEST_PRACTICES)

    def _get_system_prompt(self) -> str:
        return "Your system prompt here..."

    def _get_review_prompt(self, diff_content: str, file_path: str) -> str:
        return f"Your review prompt here..."
```

- Register it in `review_orchestrator.py`:

```python
from app.agents.custom_agent import CustomAgent

self.agents = {
    ...
    'custom': CustomAgent()
}
```

- The system uses Llama3 via the Groq API for fast, cost-effective code reviews.
- Groq provides extremely fast inference speeds (often 10x faster than traditional APIs).
- Get your free Groq API key from https://console.groq.com/
- For large PRs, use the async endpoint (`/review/async`) for better performance.
- A GitHub token is optional but recommended for accessing private repositories.
- Review quality depends on the model used (`llama-3.1-70b-versatile` is recommended for best results).
This is a modular system designed for easy extension. Feel free to:
- Add new specialized agents
- Improve diff parsing
- Enhance UI features
- Add support for other LLM providers
This project is provided as-is for educational and development purposes.
- Ensure `GROQ_API_KEY` is set in `.env`
- Check that your API key has sufficient credits
- Verify `GITHUB_TOKEN` is valid
- Check repository access permissions
- Ensure the virtual environment is activated
- Run `pip install -r requirements.txt` again
- Change the port in `app/main.py` or use the `--port` flag with uvicorn
Once the server is running, visit:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc