
Forked prototype with working Ollama/TinyLlama, PostgreSQL, Next.js, FastAPI, RAG, etc.

AI Opportunity Center Chatbot

Team Members

Bradley Charles
Rae Maffei
Livan Hagi Osman
Mark Yosinao

Project Overview

We are designing a prototype for a secure, in-house chatbot system focused on the compliance, security, data-privacy, and intellectual-property concerns that typically arise with AI chatbots. Our proposed solution includes research, a detailed project report, and a final deliverable: a simple, working prototype. Multilingual capability is a stretch goal.

Requirements:

  • Compliance with HIPAA, FERPA, etc.
  • Protection for Intellectual Property
  • Run on internal, in-house infrastructure
  • Provide articulate and helpful responses based on approved knowledge
  • Use authentication tools to secure interactions
  • Maintain audit logs as per compliance regulations
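The audit-log requirement can be sketched as an append-only JSON-lines log. The field names and file path below are illustrative assumptions, not the repo's actual schema:

```python
import json
import time

AUDIT_LOG = "audit.log"  # illustrative path; a real deployment would use a protected location


def audit_record(user: str, action: str, detail: str) -> dict:
    """Build one structured audit entry with a UTC timestamp."""
    return {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,
        "action": action,
        "detail": detail,
    }


def write_audit(record: dict, path: str = AUDIT_LOG) -> None:
    # Append-only: one JSON object per line, never rewritten in place.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

An append-only, line-oriented format keeps each interaction independently verifiable, which is what most compliance audits expect.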

Tech Stack

  1. Frontend
    • TypeScript
    • Next.js
  2. Backend
    • Python
    • FastAPI
  3. LLM
    • Ollama (TinyLlama-1.1B-Chat)
  4. Authentication
    • Basic token support (WIP)
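As a rough sketch of how the backend can talk to a local Ollama instance: Ollama exposes an HTTP API on port 11434, and `/api/generate` accepts a model tag and prompt. The `tinyllama` tag and the function names here are assumptions for illustration, not the repo's actual code:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port


def build_payload(prompt: str) -> dict:
    # "tinyllama" is the Ollama model tag assumed for TinyLlama-1.1B-Chat;
    # stream=False returns one complete JSON response instead of chunks.
    return {"model": "tinyllama", "prompt": prompt, "stream": False}


def ask(prompt: str) -> str:
    """Send a prompt to the local Ollama server and return its text reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the model runs behind Ollama on localhost, prompts never leave the host, which is the point of the in-house requirement above.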

Setup & Installation

Prerequisites

  • Python 3.12+
  • Node.js 24+ (with npm)
  • Git

1) Clone the repo

git clone https://github.com/OC-Chatbot/Secure-Internal-Chatbot-Design.git
cd Secure-Internal-Chatbot-Design

2) Backend setup (FastAPI)

python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -r backend/requirements.txt

3) Frontend setup (Next.js)

npm install

4) Run both services

Recommended (single command):

python start_services.py

This starts uvicorn backend.main:app on port 8000 and next dev on port 3000. It also sets NEXT_PUBLIC_API_URL to http://localhost:8000/api if unset.
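The launcher's behavior can be approximated like this; it is a simplified sketch of the pattern, not the actual `start_services.py`:

```python
import os
import subprocess
import sys


def resolve_api_url(env: dict) -> str:
    # Mirror the script's behavior: only fall back to the default
    # if the caller hasn't already set NEXT_PUBLIC_API_URL.
    return env.get("NEXT_PUBLIC_API_URL", "http://localhost:8000/api")


def main() -> None:
    env = dict(os.environ)
    env["NEXT_PUBLIC_API_URL"] = resolve_api_url(env)
    backend = subprocess.Popen(
        [sys.executable, "-m", "uvicorn", "backend.main:app", "--port", "8000"]
    )
    frontend = subprocess.Popen(
        ["npm", "run", "dev", "--", "--port", "3000"], env=env
    )
    for proc in (backend, frontend):
        proc.wait()
```

Spawning both processes from one script keeps the port numbers and the API URL in a single place instead of two terminal sessions.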

Manual alternative:

# Terminal 1
source .venv/bin/activate
uvicorn backend.main:app --reload --host 0.0.0.0 --port 8000

# Terminal 2
npm run dev -- --port 3000

5) Access the app

Open http://localhost:3000 in a browser. The frontend calls the backend API at http://localhost:8000/api.

Notes

  • The backend loads the TinyLlama model from Hugging Face; first run requires network access or a pre-cached model in your HF cache.
  • Configure NEXT_PUBLIC_API_URL if your backend runs on a different host/port. You can export it before npm run dev or set it in .env.local.
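For example, a `.env.local` in the frontend directory might contain (the host shown is illustrative):

```shell
NEXT_PUBLIC_API_URL=http://192.168.1.50:8000/api
```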

Auto Start on VPS Reboot

The application runs automatically on VPS reboot via systemd services.

Backend Service

  • File: /etc/systemd/system/oc-backend.service
  • Start: sudo systemctl start oc-backend.service
  • Status: sudo systemctl status oc-backend.service

Frontend Service

  • File: /etc/systemd/system/oc-frontend.service
  • Start: sudo systemctl start oc-frontend.service
  • Status: sudo systemctl status oc-frontend.service
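For reference, a backend unit of roughly this shape fits the commands above; the paths and the uvicorn invocation are assumptions to adapt to your checkout:

```ini
[Unit]
Description=OC Chatbot backend (FastAPI)
After=network.target

[Service]
# Illustrative paths; match them to your clone location and venv.
WorkingDirectory=/root/Secure-Internal-Chatbot-Design
ExecStart=/root/Secure-Internal-Chatbot-Design/.venv/bin/uvicorn backend.main:app --host 0.0.0.0 --port 8000
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The frontend unit follows the same pattern with `npm run dev` (or a production `npm start`) as the ExecStart command.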

To Install on a New Server

cd /root/Secure-Internal-Chatbot-Design

# Create backend service
sudo nano /etc/systemd/system/oc-backend.service
# [Paste backend service config]

# Create frontend service
sudo nano /etc/systemd/system/oc-frontend.service
# [Paste frontend service config]

# Enable and start
sudo systemctl daemon-reload
sudo systemctl enable oc-backend.service oc-frontend.service
sudo systemctl start oc-backend.service oc-frontend.service
