Two-part Python project:
- Scanner CLI: runs standalone SAST scans on a folder and prints JSON findings
- Management server: launches scans, ingests results into SQLite, and provides a minimal web UI
- Create venv and install
python3 -m venv .venv && source .venv/bin/activate
pip install -e .
- Configure environment
cp env.example .env
export $(grep -v '^#' .env | xargs) # or use a shell that auto-loads .env
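For reference, a minimal sketch of what .env typically holds, based on the variables mentioned elsewhere in this README; env.example in the repo is the authoritative template, and the values below are placeholders:
# Illustrative values only -- copy env.example and adjust.
OPENAI_API_KEY=sk-...                 # required for scans (see notes at the end of this README)
DATABASE_URL=sqlite:///./codewopr.db  # example SQLite URL; check env.example for the project default
PORT=3000                             # management server port (can also be set via --port)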
- Run scanner (standalone)
codewopr-scanner scan --path /path/to/repo --model gpt-4o-mini
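Because findings are printed to stdout as JSON, ordinary shell tooling works for capturing them. A small sketch (jq is assumed to be installed, and any scanner logging is assumed to go to stderr):
# Save the report for later triage
codewopr-scanner scan --path /path/to/repo --model gpt-4o-mini > findings.json
# Or pretty-print it on the fly
codewopr-scanner scan --path /path/to/repo --model gpt-4o-mini | jq .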
Post scan results to the running console (auto-creates/updates the Project):
# The CLI still prints JSON to stdout; posting is optional.
codewopr-scanner scan \
--path /path/to/repo \
--model gpt-4o-mini \
--concurrency 8 \
--include-related \
--post-to http://localhost:3000 \
--project-name my-repo
Scan only the files changed on a branch (diff vs merge-base with HEAD):
codewopr-scanner scan \
--path /path/to/repo \
--model gpt-4o-mini \
--branch Feature7 \
--concurrency 8 \
--post-to http://localhost:3000 \
--project-name my-repo
Scan only the files changed between two refs (explicit base/head):
codewopr-scanner scan \
--path /path/to/repo \
--model gpt-4o-mini \
--git-base origin/main --git-head Feature7 \
--post-to http://localhost:3000 \
--project-name my-repo
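Both refs must be resolvable in the local clone; if origin/main may be stale, a quick sketch (assuming a standard origin remote) is to fetch it first:
git -C /path/to/repo fetch origin main
codewopr-scanner scan --path /path/to/repo --model gpt-4o-mini \
--git-base origin/main --git-head Feature7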
Scan only the files in a GitHub PR (uses GitHub API; set GITHUB_TOKEN or GH_TOKEN if needed):
codewopr-scanner scan \
--path /path/to/repo \
--model gpt-4o-mini \
--github-pr juice-shop/juice-shop#2783 \
--post-to http://localhost:3000 \
--project-name juice-shop
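As a rough sketch of running the PR mode from CI, the snippet below assumes a GitHub Actions job triggered by pull_request (where GITHUB_REF looks like refs/pull/<number>/merge), that OPENAI_API_KEY and GITHUB_TOKEN are exported from the workflow's secrets, and that https://codewopr.example.com stands in for wherever your console runs:
# Build owner/repo#number from the standard Actions environment variables
PR_NUMBER=$(echo "$GITHUB_REF" | cut -d/ -f3)
codewopr-scanner scan \
--path "$GITHUB_WORKSPACE" \
--model gpt-4o-mini \
--github-pr "${GITHUB_REPOSITORY}#${PR_NUMBER}" \
--post-to https://codewopr.example.com \
--project-name "$GITHUB_REPOSITORY"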
Scan only your working tree changes vs HEAD:
codewopr-scanner scan --path /path/to/repo --model gpt-4o-mini --only-changed
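For a local safety net, here is a minimal pre-commit hook sketch built on --only-changed; it assumes the venv's binaries are on PATH and OPENAI_API_KEY is set, and it only surfaces findings rather than blocking the commit, since exit-code semantics aren't documented here:
#!/usr/bin/env sh
# .git/hooks/pre-commit -- report findings for files changed vs HEAD before committing
codewopr-scanner scan --path "$(git rev-parse --show-toplevel)" \
--model gpt-4o-mini --only-changed || true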
You can invoke the CLI directly using the venv binaries or by adjusting PATH:
# 1) Direct console script
~/codeWOPR/.venv/bin/codewopr-scanner scan --path /path --model gpt-4o-mini
# 2) Module invocation with the venv's Python
~/codeWOPR/.venv/bin/python -m scanner.cli scan --path /path --model gpt-4o-mini
# 3) Temporary PATH prepend
PATH=~/codeWOPR/.venv/bin:$PATH \
codewopr-scanner scan --path /path --model gpt-4o-mini
# 4) Inline env vars (e.g., OpenAI key) with direct binary
OPENAI_API_KEY=YOUR_KEY \
~/codeWOPR/.venv/bin/codewopr-scanner scan --path /path --model gpt-4o-mini
Alternate ways to run the scanner (equivalent):
# 1) Console script (shown above)
codewopr-scanner scan --path "~/WebGoat" --model gpt-4o-mini --verbose
# 2) Module invocation (no entrypoint needed)
python -m scanner.cli scan --path "~/WebGoat" --model gpt-4o-mini --verbose
# 3) Direct file execution (ensure PYTHONPATH points to repo root)
PYTHONPATH=~/codeWOPR \
python ~/codeWOPR/scanner/cli.py scan --path "~/WebGoat" --model gpt-4o-mini --verbose
- Run management server
codewopr-manager --reload
Open http://localhost:3000 to use the UI (default port can be overridden with --port or PORT).
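For example, either form moves the console to port 8080:
codewopr-manager --port 8080
PORT=8080 codewopr-manager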
Build the container image (includes both the management server and scanner CLI):
docker build -t codewopr:latest .
Run it locally (mount a host directory to persist the SQLite database):
mkdir -p ./data
docker run --rm -p 3000:3000 \
-e OPENAI_API_KEY=sk-... \
-e DATABASE_URL=sqlite:////data/codewopr.db \
-v "$(pwd)/data":/data \
--name codewopr \
codewopr:latest
The container defaults to HOST=0.0.0.0 and listens on PORT=3000.
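A quick sanity check that the console is answering before you point scans at it (plain curl against the published port; nothing project-specific is assumed):
curl -fsS http://localhost:3000 >/dev/null && echo "codeWOPR console is up"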
You can still run the scanner CLI inside the image if needed:
docker run --rm -it \
-e OPENAI_API_KEY=sk-... \
-v /path/to/project:/scan \
codewopr:latest \
codewopr-scanner scan --path /scan --model gpt-4o-mini
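If the containerized scanner should also post results to a console running on the host, localhost inside the container will not reach it. One sketch is Docker's host-gateway alias (needs a reasonably recent Docker on Linux; Docker Desktop resolves host.docker.internal without the extra flag):
docker run --rm -it \
--add-host=host.docker.internal:host-gateway \
-e OPENAI_API_KEY=sk-... \
-v /path/to/project:/scan \
codewopr:latest \
codewopr-scanner scan --path /scan --model gpt-4o-mini \
--post-to http://host.docker.internal:3000 --project-name my-repo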
- Build and push the image to a registry your cluster can access:
docker build -t ghcr.io/your-org/codewopr:latest .
docker push ghcr.io/your-org/codewopr:latest
- Create a secret that holds the OpenAI API key:
kubectl create secret generic openai-api \
--from-literal=api-key=sk-your-key
- Apply the sample manifests (PVC + Deployment + Service):
kubectl apply -f deploy/k8s/deployment.yaml
The deployment:
- Mounts a persistent volume at /data and points DATABASE_URL to sqlite:////data/codewopr.db.
- Exposes the web UI on port 80 via a ClusterIP service (change to LoadBalancer/Ingress as needed; see the port-forward sketch below).
- Pulls its OpenAI API key from the openai-api secret.
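Until you switch to a LoadBalancer or Ingress, port-forwarding is the quickest way to reach the UI; the resource name below is an assumption, so check deploy/k8s/deployment.yaml for the actual Service name:
# Forward local port 3000 to the ClusterIP service's port 80 (Service name assumed to be codewopr)
kubectl port-forward svc/codewopr 3000:80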
Customize the image name, resource requests, and replicas for your environment. For production, consider using an external database (e.g., Postgres) by setting DATABASE_URL and removing the volume mount.
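For example, a Postgres DATABASE_URL might look like the line below; the exact scheme (and whether a driver suffix such as +psycopg is required) depends on the project's database layer, so treat it as a sketch:
export DATABASE_URL=postgresql://codewopr:CHANGE_ME@db.internal:5432/codewopr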
- Requires Python 3.11+
- Uses SQLite by default; see DATABASE_URL in .env
- OpenAI key required: set OPENAI_API_KEY
- Some models (e.g., gpt-5) enforce a fixed temperature. The scanner automatically omits the temperature param for these to avoid errors.



