AI-powered system for detecting deepfake images with Grad-CAM visual explanations.
- FastAPI backend with EfficientNet-B0 model and Grad-CAM support
- React + TypeScript frontend with modern landing & detection pages
- REST API (`/health`, `/detect`, `/reports`) with resource-based routing
- PDF report generation: download comprehensive reports with the analyzed image, detection results, Grad-CAM visualization, and an interpretation guide
- Results stored as files and served via `/results/<filename>`
- Dockerized backend & frontend, each with a dedicated `docker-compose.yml`
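The upload → `session_id` → report flow can be sketched as a small Python client. The endpoint paths and the `session_id` field come from this README; everything else (file names, helper names, the exact JSON schema) is illustrative:

```python
# client_sketch.py -- hypothetical client, not part of the repo.
import json
import mimetypes
import urllib.request
import uuid

BASE = "http://localhost:8000"

def report_url(base: str, session_id: str) -> str:
    """Build the PDF report URL for a detection session."""
    return f"{base}/reports/report/{session_id}"

def build_multipart(field: str, filename: str, payload: bytes):
    """Encode one file as a multipart/form-data body (stdlib only)."""
    boundary = uuid.uuid4().hex
    ctype = mimetypes.guess_type(filename)[0] or "application/octet-stream"
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: {ctype}\r\n\r\n"
    ).encode() + payload + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

def detect(image_path: str, base: str = BASE) -> dict:
    """POST an image to /detect and return the parsed JSON response."""
    with open(image_path, "rb") as f:
        body, ctype = build_multipart("file", image_path, f.read())
    req = urllib.request.Request(
        f"{base}/detect", data=body, headers={"Content-Type": ctype}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    result = detect("path/to/image.jpg")
    # The /detect response is documented to include a session_id.
    with urllib.request.urlopen(report_url(BASE, result["session_id"])) as resp:
        open("report.pdf", "wb").write(resp.read())
```

Using only the standard library keeps the sketch dependency-free; with `requests` installed, the multipart helper collapses to `requests.post(url, files={"file": f})`.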
```text
ai_image_detection/
├── backend/
│   ├── app/                  # FastAPI application package
│   ├── Dockerfile            # Backend container build
│   ├── requirements.txt      # Python dependencies
│   └── run.py                # Entry point (uvicorn wrapper)
├── frontend/
│   ├── src/                  # React application source
│   ├── Dockerfile            # Frontend container build
│   └── nginx.conf            # Production web server config
├── docker-compose.yml        # Backend + frontend stack
├── .gitignore
└── results/                  # Generated Grad-CAM images (gitignored)
```
- Python 3.11+
- Node.js 18+
- Docker 24+ / Docker Compose v2
- (Optional) Trained weights `best_efficientnet_model.pth` in `backend/`
- Run with Docker Compose
  - Backend: `cd backend` → `docker compose up --build`
  - Frontend (new terminal): `cd frontend` → `docker compose up --build`
  - Access the API at http://localhost:8000 and the UI at http://localhost:8080
  - Stop each service with `docker compose down`
- Run locally without Docker
  - Backend:
    - Copy `backend/.env.example` to `.env` and adjust values as needed
    - Start the backend (FastAPI) using the virtualenv instructions below
  - Frontend:
    - Copy `frontend/.env.example` to `.env` if you need a custom `VITE_API_URL`
    - Start the frontend with the Vite dev server; it proxies `/api` to http://localhost:8000
```bash
cd backend
python -m venv .venv
source .venv/bin/activate        # Windows: .\.venv\Scripts\activate
pip install -r requirements.txt
cp .env.example .env             # Configure MODEL_CHECKPOINT_PATH if needed
python run.py
```

- Backend API: http://localhost:8000
- Docs (Swagger): http://localhost:8000/docs
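Scripts or CI jobs that start the server this way often need to wait for it to come up before sending requests. A minimal poller against the `/health` endpoint listed in this README (assuming it answers HTTP 200 once ready; the helper itself is hypothetical):

```python
# wait_for_api.py -- hypothetical helper, not part of the repo.
import time
import urllib.request

def wait_for_health(url: str = "http://localhost:8000/health",
                    timeout: float = 30.0, interval: float = 1.0) -> bool:
    """Poll the health endpoint; True on a 200, False once timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except OSError:  # connection refused / server not yet listening
            pass
        time.sleep(interval)
    return False
```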
```bash
cd frontend
npm install
cp .env.example .env   # Optional: override VITE_API_URL
npm run dev
```

Frontend: http://localhost:3000 (proxies API requests to http://localhost:8000)
Build and run the backend image:

```bash
cd backend
docker build -t deepfake-backend .
docker run --rm -p 8000:8000 deepfake-backend
```

Build and run the frontend image:

```bash
cd frontend
docker build -t deepfake-frontend .
docker run --rm -p 8080:8080 \
  -e NGINX_BACKEND_URL=http://localhost:8000/ \
  deepfake-frontend
```

The frontend will be served at http://localhost:8080. Configure the API URL at build time if you need a different backend endpoint:

```bash
docker build -t deepfake-frontend \
  --build-arg VITE_API_URL=http://localhost:8000 \
  .
```

Run the backend with Docker Compose (http://localhost:8000):

```bash
cd backend
docker compose up --build
```

Stop the service:

```bash
docker compose down
```

Run the frontend with Docker Compose (http://localhost:8080):

```bash
cd frontend
docker compose up --build
```

Stop the service:

```bash
docker compose down
```

Health check:

```bash
curl -X GET "http://localhost:8000/health"
```

Analyze an image:

```bash
curl -X POST "http://localhost:8000/detect" \
  -F "file=@path/to/image.jpg"
```

The response includes a `session_id` for PDF report generation.

Download the PDF report:

```bash
curl -X GET "http://localhost:8000/reports/report/{session_id}" \
  --output report.pdf
```

Detection statistics:

```bash
curl -X GET "http://localhost:8000/detect/stats"
```

The backend uses `.env` (see `backend/.env.example`). Key settings include:

- `MODEL_CHECKPOINT_PATH`
- `RESULTS_DIR`
- `CORS_ORIGINS`
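How the backend actually consumes these keys is internal to `backend/app`; purely as an illustration, the same pattern with the standard library (the defaults shown here are assumptions, not the app's real defaults):

```python
# settings_sketch.py -- illustrative only; the real loader may differ.
import os
from dataclasses import dataclass

@dataclass
class Settings:
    model_checkpoint_path: str
    results_dir: str
    cors_origins: list[str]

def load_settings(env=os.environ) -> Settings:
    """Read the documented keys, splitting CORS_ORIGINS on commas."""
    return Settings(
        model_checkpoint_path=env.get(
            "MODEL_CHECKPOINT_PATH", "best_efficientnet_model.pth"
        ),
        results_dir=env.get("RESULTS_DIR", "results"),
        cors_origins=[
            o.strip() for o in env.get("CORS_ORIGINS", "").split(",") if o.strip()
        ],
    )
```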
Result files are cleaned automatically every day at midnight server time.
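The deletion step of such a cleanup could be as simple as an age-based sweep; a hypothetical sketch (the real scheduler and retention policy live in the backend):

```python
# cleanup_sketch.py -- illustrative; not the backend's actual cleanup code.
import time
from pathlib import Path

def prune_results(results_dir: str, max_age_seconds: float = 86_400,
                  now=None) -> list[str]:
    """Delete result files older than max_age_seconds; return deleted names."""
    now = time.time() if now is None else now
    deleted = []
    for path in Path(results_dir).iterdir():
        if path.is_file() and now - path.stat().st_mtime > max_age_seconds:
            path.unlink()
            deleted.append(path.name)
    return deleted
```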
Frontend build-time variable: `VITE_API_URL` (defaults to `/api` in the Docker image; `/api` proxies to the backend)
- Ensure `best_efficientnet_model.pth` is excluded from git (already handled via `.gitignore`).
- Run linting/tests as needed.
- Commit all relevant source files and documentation.
Example initial commit:
```bash
git init
git add .
git commit -m "feat: add deepfake detection platform"
```

Push to GitHub:

```bash
git remote add origin https://github.com/<username>/ai_image_detection.git
git branch -M main
git push -u origin main
```

Handy commands:

- `npm run lint` (frontend linting)
- `npm run build` (frontend production build)
- `pip install -r requirements.txt` (backend dependencies)
- `python run.py` (start backend API)
- `docker compose logs -f` (follow container logs)
Add your preferred license (e.g., MIT) before publishing publicly.