
🧠 DebugBrain - AI Code Debugging Agent with Vector Memory

DebugBrain is a next-generation AI-powered debugging assistant that doesn't just find bugs: it remembers them. By combining local JSON persistence with Hindsight Cloud vector storage in a hybrid memory engine, DebugBrain learns your unique coding patterns and delivers personalized, context-aware fixes based on your own debugging history.



✨ Features

| Feature | Description |
| --- | --- |
| 🔍 Vector Memory | Powered by Hindsight 2.0 to recall similar bugs via semantic search across all your past sessions. |
| 🛠️ Auto-Fix Engine | Generates corrected code with step-by-step logic explanations powered by Groq (Llama 3.1). |
| 🔁 Pattern Recognition | Detects recurring errors and alerts you (e.g., "This is your 4th KeyError this week"). |
| 🧵 Thread-Safe Sync | Non-blocking cloud synchronization using FastAPI concurrency, with zero added latency in your UI. |
| 📊 Quality Scoring | Provides a 1–10 code quality score with actionable, prioritized improvement tips. |
| 📅 Debug Timeline | Searchable history of every debugging session, stored locally and synced to the cloud. |

πŸ—‚οΈ Project Structure

```
debugbrain/
├── backend/
│   ├── main.py              # FastAPI application & API routing
│   ├── analyzer.py          # Groq LLM logic & prompt engineering
│   ├── memory.py            # Hybrid Memory Manager (JSON + Hindsight 2.0)
│   ├── requirements.txt     # Python dependencies
│   └── .env                 # API keys and cloud URLs (not committed)
│
├── frontend/
│   └── src/
│       ├── App.jsx           # Application state & core logic
│       ├── components/       # Monaco Editor & Results panel components
│       └── utils/api.js      # Axios config for backend communication
│
└── README.md
```

πŸ› οΈ Tech Stack

| Layer | Technology |
| --- | --- |
| LLM | Llama 3.1 via Groq |
| Vector Memory | Hindsight 2.0 |
| Local Memory | JSON flat-file persistence |
| Backend | FastAPI, Uvicorn, Pydantic v2 |
| Frontend | React 18, Vite, Monaco Editor |
| Backend Hosting | Render |
| Frontend Hosting | Vercel |

✅ Prerequisites

Make sure you have the following installed before running DebugBrain:

- Python 3 (for the FastAPI backend)
- Node.js and npm (for the React + Vite frontend)
- Git
- API keys for Groq and Hindsight (see Environment Variables below)


🚀 Quick Start (Local)

1. Clone the Repository

```bash
git clone https://github.com/Manshi4952/AI-Code-Debugging-Agent.git
cd AI-Code-Debugging-Agent
```

2. Configure Environment Variables

Create a .env file inside the backend/ directory:

```bash
cp backend/.env.example backend/.env
```

Then open backend/.env and fill in your keys:

```
GROQ_API_KEY=gsk_your_groq_key_here
HINDSIGHT_API_KEY=hsk_your_hindsight_key_here
HINDSIGHT_API_URL=https://api.hindsight.vectorize.io
```

3. Setup & Run the Backend

```bash
cd backend

# Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate        # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Start the FastAPI server
python3 main.py
```

The backend will be available at http://localhost:8000.

4. Setup & Run the Frontend

Open a new terminal:

```bash
cd frontend

# Install dependencies
npm install

# Start the development server
npm run dev
```

The frontend will be available at http://localhost:5173.


βš™οΈ Environment Variables

All backend configuration is handled through environment variables. Set these in backend/.env:

| Variable | Description | Example |
| --- | --- | --- |
| GROQ_API_KEY | Your Groq secret key for LLM access | gsk_... |
| HINDSIGHT_API_KEY | Your Hindsight Personal Access Token | hsk_... |
| HINDSIGHT_API_URL | Hindsight cloud endpoint | https://api.hindsight.vectorize.io |

⚠️ Never commit your .env file. It is listed in .gitignore by default.
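For illustration, one way the backend could consume these settings is to validate them at startup; `load_settings` below is a hypothetical sketch, not code from this repo:

```python
import os

# Hypothetical startup check (not part of the repo): read the required
# settings from the environment and fail fast if any are missing.
REQUIRED_VARS = ["GROQ_API_KEY", "HINDSIGHT_API_KEY", "HINDSIGHT_API_URL"]

def load_settings(env=os.environ) -> dict:
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: env[name] for name in REQUIRED_VARS}
```

Failing at startup with the names of the missing variables is friendlier than a cryptic authentication error from Groq or Hindsight mid-request.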


🧠 Hybrid Memory System

DebugBrain uses a dual-layer memory architecture to ensure your debugging context is always fast, persistent, and intelligent.

```
                        ┌───────────────────────────┐
  User submits code ──► │   FastAPI /analyze route  │
                        └─────────────┬─────────────┘
                                      │
               ┌──────────────────────┴──────────────────────┐
               ▼                                             ▼
  ┌─────────────────────────┐              ┌───────────────────────────────┐
  │  Local Layer (JSON)     │              │  Cloud Layer (Hindsight 2.0)  │
  │  - Frequency counting   │              │  - Semantic vector search     │
  │  - UI debug timeline    │              │  - .retain() to store memory  │
  │  data/memory/<uid>.json │              │  - .recall() to find matches  │
  └─────────────────────────┘              └───────────────────────────────┘
```

Layer 1 (Local JSON): High-speed persistence used for frequency counting (pattern detection) and rendering the debug timeline in the UI. Stored at data/memory/<user_id>.json.
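The frequency-counting half of this layer can be sketched in a few lines; `record_bug` is a hypothetical helper, and the repo's actual memory.py may differ:

```python
import json
from collections import Counter
from pathlib import Path

def record_bug(user_id: str, error_type: str, base_dir: str = "data/memory") -> int:
    """Increment the per-user counter for an error type and return the new count."""
    path = Path(base_dir) / f"{user_id}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    # Load existing counts from the user's JSON file, if any.
    counts = Counter(json.loads(path.read_text())) if path.exists() else Counter()
    counts[error_type] += 1
    path.write_text(json.dumps(counts))
    # The returned count is what drives alerts like
    # "This is your 4th KeyError this week".
    return counts[error_type]
```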

Layer 2 (Hindsight 2.0, vector DB): Enables semantic search across all past debug sessions. The AI can surface matches like: "I remember you had a similar NullPointerException in a different project two weeks ago..."
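The idea behind the recall step can be illustrated with a toy bag-of-words similarity search. This is only a stand-in for Hindsight's real embeddings and its .recall() API, not how the service is implemented:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": word counts. Hindsight uses real vector embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Standard cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recall(query: str, memories: list[str]) -> str:
    # Return the stored session text most similar to the new bug description.
    return max(memories, key=lambda m: cosine(embed(query), embed(m)))
```

With real embeddings, "similar" extends beyond shared words to shared meaning, which is what lets the agent match bugs across different projects and phrasings.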

Thread Safety: The memory engine uses fastapi.concurrency.run_in_threadpool to ensure cloud uploads are always non-blocking, keeping API responses snappy regardless of cloud latency.
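The same pattern can be sketched with the standard library's asyncio.to_thread, which behaves like FastAPI's run_in_threadpool; sync_to_cloud here is a hypothetical stand-in for the blocking Hindsight upload:

```python
import asyncio
import time

def sync_to_cloud(session: dict) -> str:
    # Stand-in for the blocking cloud upload (a network call in the real app).
    time.sleep(0.05)
    return f"synced:{session['id']}"

async def analyze(session: dict) -> dict:
    # Offload the blocking upload to a worker thread so the event loop
    # keeps serving other requests while the upload completes.
    status = await asyncio.to_thread(sync_to_cloud, session)
    return {"session": session["id"], "cloud": status}
```

Calling sync_to_cloud directly inside the coroutine would stall every concurrent request for the duration of the upload; dispatching it to the thread pool keeps the event loop free.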


🔌 API Reference

Base URL (local): http://localhost:8000

| Endpoint | Method | Description |
| --- | --- | --- |
| /analyze | POST | Submit code for analysis. Syncs to Hindsight and recalls semantically similar past fixes. |
| /history/{user_id} | GET | Fetch the visual debug timeline for a specific user. |
| /memories/{user_id} | GET | Retrieve the most frequent bug patterns from the user's memory bank. |
| /clear/{user_id} | DELETE | Wipe all local and cloud memory for a user (clean slate mode). |

Example Request: /analyze

```bash
curl -X POST http://localhost:8000/analyze \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "user_123",
    "code": "def divide(a, b):\n    return a / b\n\ndivide(10, 0)",
    "language": "python"
  }'
```

☁️ Deployment

Backend (Render)

  1. Push your code to GitHub.
  2. Create a new Web Service on Render.
  3. Set the build command to pip install -r requirements.txt.
  4. Set the start command to python3 main.py.
  5. Add all environment variables (GROQ_API_KEY, HINDSIGHT_API_KEY, HINDSIGHT_API_URL) in the Render dashboard under Environment.

Frontend (Vercel)

  1. Import the repository on Vercel.
  2. Set the root directory to frontend/.
  3. Update frontend/src/utils/api.js to point to your Render backend URL.
  4. Deploy; Vercel handles everything else automatically.

🚀 Live Demo


🤝 Team

| Name | Role |
| --- | --- |
| Manshi Kumari Shaw | Team Leader & Full-Stack Lead |
| Nandani | Contributor |
| Laxmi | Contributor |
| Manisha | Contributor |

📄 License

This project is open source. Feel free to use, modify, and distribute it. Contributions via pull requests are welcome!


Built with ❀️ by Team DebugBrain
