
🤖 RAG Chatbot – Acme Tech Solutions

A production-ready Retrieval-Augmented Generation (RAG) Chatbot built with:

  • 🐍 Backend – FastAPI + LangChain + Pinecone (vector search) + SQLite (chat history) + OpenAI
  • ⚛️ Frontend – React 19 + Vite + Tailwind CSS

📁 Project Structure

RAG_Chatbot/
├── rag_chatbot/          # Python FastAPI backend
│   ├── routes/           # API route handlers (chat, history, session, ingest)
│   ├── middleware/       # Auth middleware
│   ├── docs/             # Source documents for RAG ingestion
│   ├── main.py           # FastAPI app entry point
│   ├── config.py         # Configuration (reads .env)
│   ├── vectorstore.py    # Pinecone vector store logic + local BM25 fallback
│   ├── embeddings.py     # OpenAI embedding setup
│   ├── memory.py         # SQLite chat session management
│   ├── ingest_docs.py    # One-shot script to load docs into Pinecone
│   ├── requirements.txt  # Python dependencies
│   └── .env.example      # Backend env template
│
├── rag_frontend/         # React + Vite frontend
│   ├── src/
│   │   ├── components/   # ChatWindow, Sidebar, MessageBubble
│   │   ├── App.jsx       # Root component
│   │   ├── api.js        # Axios API calls to backend
│   │   └── index.css     # Global styles
│   ├── package.json
│   └── .env.example      # Frontend env template
│
├── .gitignore
└── README.md

✅ Prerequisites

Make sure you have the following installed:

| Tool    | Version | Download    |
|---------|---------|-------------|
| Python  | ≥ 3.10  | python.org  |
| Node.js | ≥ 18.x  | nodejs.org  |
| Git     | any     | git-scm.com |

You will also need accounts / API keys for:

  • OpenAI (embeddings and chat completions)
  • Pinecone (hosted vector index)

🚀 Getting Started

1. Clone the Repository

git clone https://github.com/Priyap1038/ChatBot.git
cd ChatBot

2. Backend Setup (rag_chatbot/)

a) Create a virtual environment

cd rag_chatbot

# Windows
python -m venv venv
venv\Scripts\activate

# macOS / Linux
python3 -m venv venv
source venv/bin/activate

b) Install dependencies

pip install -r requirements.txt

c) Configure environment variables

# Copy the example file
copy .env.example .env        # Windows
cp .env.example .env          # macOS / Linux

Now open .env and fill in your values:

OPENAI_API_KEY=sk-...                   # Your OpenAI API key
PINECONE_API_KEY=pcsk_...              # Your Pinecone API key
PINECONE_INDEX_NAME=priya-rag-index    # Your Pinecone index name
CORS_ORIGINS=*                         # Use * for local dev
RATE_LIMIT=30/minute
LOG_LEVEL=INFO
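As a rough sketch of how config.py might consume these values (the actual file may differ, e.g. it could use pydantic-settings; plain os.environ access is an assumption here):

```python
import os

def load_settings() -> dict:
    """Read backend settings from the environment, with dev-friendly defaults."""
    return {
        "openai_api_key": os.environ["OPENAI_API_KEY"],      # required, no default
        "pinecone_api_key": os.environ["PINECONE_API_KEY"],  # required, no default
        "pinecone_index_name": os.getenv("PINECONE_INDEX_NAME", "priya-rag-index"),
        "cors_origins": os.getenv("CORS_ORIGINS", "*").split(","),
        "rate_limit": os.getenv("RATE_LIMIT", "30/minute"),
        "log_level": os.getenv("LOG_LEVEL", "INFO"),
    }
```

Requiring the two API keys to be present (rather than defaulting them) makes a missing .env fail loudly at startup instead of at the first request.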

d) Create Pinecone Index

⚠️ CRITICAL: Create your index in Pinecone with Dimensions = 1536 and Metric = cosine. This matches the default OpenAI text-embedding-3-small model output.
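If you prefer code over the Pinecone console, the index can also be created with the Pinecone Python client. A minimal sketch (this helper is hypothetical, and the serverless cloud/region values are assumptions — use whatever your Pinecone project runs on):

```python
EMBED_DIM = 1536  # output size of OpenAI's text-embedding-3-small

def index_config(name: str) -> dict:
    """Index settings that must match the embedding model's output."""
    return {"name": name, "dimension": EMBED_DIM, "metric": "cosine"}

# With the client installed (pip install pinecone), creation looks roughly like:
#   from pinecone import Pinecone, ServerlessSpec
#   pc = Pinecone(api_key="pcsk_...")
#   pc.create_index(**index_config("priya-rag-index"),
#                   spec=ServerlessSpec(cloud="aws", region="us-east-1"))
```

Whichever way you create it, the dimension/metric pair above is the part that must not drift from the embedding model.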

e) Ingest documents into Pinecone

This step uploads the documents in docs/ to the Pinecone vector store and builds the local BM25 search corpus (bm25_corpus.json). Run it once before starting the server, and again whenever you add new files to docs/. Because the BM25 corpus is persisted to disk, the local search state survives restarts.

python ingest_docs.py
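The BM25 fallback mentioned above can be illustrated with a from-scratch scorer (a sketch only — the project may well use a library such as rank_bm25 instead):

```python
import math
from collections import Counter

def bm25_scores(query: str, docs: list[str], k1: float = 1.5, b: float = 0.75) -> list[float]:
    """Score each document against the query with the classic BM25 formula."""
    tokenized = [d.lower().split() for d in docs]
    avg_len = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    df = Counter()                      # document frequency per term
    for toks in tokenized:
        df.update(set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)              # term frequency within this document
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)
            denom = tf[term] + k1 * (1 - b + b * len(toks) / avg_len)
            score += idf * tf[term] * (k1 + 1) / denom
        scores.append(score)
    return scores
```

Persisting a precomputed corpus like this to bm25_corpus.json is what lets keyword search keep working when Pinecone is unreachable.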

f) Start the backend server

uvicorn main:app --reload --port 8000

The API will be live at: http://localhost:8000


3. Frontend Setup (rag_frontend/)

Open a new terminal and run:

a) Install Node dependencies

cd rag_frontend
npm install

b) Configure environment variables

# Windows
copy .env.example .env

# macOS / Linux
cp .env.example .env

For local development, the default .env values work out of the box (Vite proxies /api/* → localhost:8000):

VITE_API_URL=       # Leave empty for local dev
VITE_API_KEY=       # Leave empty unless backend API_KEY is set

c) Start the frontend dev server

npm run dev

The app will be live at: http://localhost:5173


🖥️ Running Both Together (Quick Start)

Open two terminals side-by-side:

| Terminal 1 – Backend        | Terminal 2 – Frontend |
|-----------------------------|-----------------------|
| `cd rag_chatbot`            | `cd rag_frontend`     |
| `venv\Scripts\activate`     | `npm install`         |
| `uvicorn main:app --reload` | `npm run dev`         |

Then open http://localhost:5173 in your browser. 🎉


📄 Adding Your Own Documents

  1. Place your .md, .txt, or .pdf files inside rag_chatbot/docs/
  2. Re-run the ingestion script:
    cd rag_chatbot
    python ingest_docs.py
  3. Restart the backend server.

🔌 API Endpoints

| Method | Endpoint                  | Description                    |
|--------|---------------------------|--------------------------------|
| GET    | /api/health               | Health check                   |
| GET    | /api/sessions             | List all chat sessions         |
| POST   | /api/chat                 | Send a message                 |
| GET    | /api/history/{session_id} | Get chat history for a session |
| POST   | /api/session              | Create a new session           |
| POST   | /api/ingest               | Ingest a document via the API  |

Full interactive docs: http://localhost:8000/docs
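Once the backend is running, /api/chat can be exercised from any HTTP client. A small stdlib-only Python sketch (the request field names here are assumptions — verify the real schema at /docs):

```python
import json
import urllib.request

def build_chat_request(session_id: str, message: str,
                       base_url: str = "http://localhost:8000") -> urllib.request.Request:
    """Build a POST /api/chat request. Field names are assumed; check /docs."""
    payload = json.dumps({"session_id": session_id, "message": message}).encode()
    return urllib.request.Request(
        f"{base_url}/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With the server running:
#   with urllib.request.urlopen(build_chat_request("abc123", "What is Acme?")) as resp:
#       print(json.load(resp))
```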


πŸ—οΈ Production Build (Frontend)

cd rag_frontend
npm run build

Output will be in rag_frontend/dist/. Serve it with any static host (Vercel, Netlify, etc.).

For the backend, update .env:

CORS_ORIGINS=https://yourfrontend.com
API_KEY=your-strong-secret-key

🛠️ Troubleshooting

| Problem                           | Fix                                                                  |
|-----------------------------------|----------------------------------------------------------------------|
| ModuleNotFoundError               | Activate your venv and run `pip install -r requirements.txt`         |
| Pinecone dimension mismatch error | Delete the index and recreate it with dimension = 1536               |
| OpenAI 401 / quota exceeded       | Check your OpenAI billing page to confirm the key has credits        |
| Empty responses / no context      | Run `python ingest_docs.py` to populate Pinecone and the local corpus |
| CORS errors in the browser        | Set `CORS_ORIGINS=*` in the backend .env during development          |
| Frontend can't reach backend      | Confirm the backend runs on port 8000 and the frontend on 5173       |

📜 License

This project is for educational / internal use. Feel free to fork and adapt!
