Automated compliance scanning for GitHub repos. Scans code for data collection, security issues, and governance gaps - then generates policies, risk assessments, and framework compliance reports. Results feed a central dashboard.
On every push or PR, the scanner produces:
| Output | Location | Description |
|---|---|---|
| `manifest.yml` | `.grc/` | Structured compliance data - single source of truth |
| `privacy-policy.md` | `docs/policies/` | GDPR + CCPA policy, populated from detected data collection |
| `terms-of-service.md` | `docs/policies/` | ToS with auto-detected service descriptions |
| `vulnerability-disclosure.md` | `docs/policies/` | Responsible disclosure policy |
| `incident-response-plan.md` | `docs/policies/` | NIST SP 800-61 based IRP |
| `security.txt` | `.well-known/` | RFC 9116 security contact file |
| `risk-assessment.md` | `.grc/` | Likelihood x impact matrix with framework mappings |
| `nist-csf-report.md` | `.grc/` | 18 NIST CSF 2.0 subcategories across all six functions (Govern, Identify, Protect, Detect, Respond, Recover), cross-mapped to SOC 2 TSC 2017 (rev. 2022) and ISO/IEC 27001:2022 |
| `security-headers-report.md` | `.grc/` | Header status + starter-snippet fixes (CSP typically needs manual review) |
| `access-controls-report.md` | `.grc/` | Branch protection and auth findings |
Reports (`.grc/`) are gitignored and regenerated each scan. Policies (`docs/policies/`, `.well-known/`) are committed to your PR branch so they ship with your code.
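The risk assessment's likelihood x impact matrix boils down to a score-and-band calculation. A minimal sketch (the generator's real bands aren't documented here; these numbers are illustrative):

```typescript
// Illustrative likelihood x impact scoring. The real generator's
// bands may differ; this only shows the shape of the calculation.
type Level = 1 | 2 | 3 | 4 | 5;

function riskScore(likelihood: Level, impact: Level): number {
  return likelihood * impact; // 1..25
}

function riskBand(score: number): "low" | "medium" | "high" | "critical" {
  if (score >= 20) return "critical";
  if (score >= 12) return "high";
  if (score >= 6) return "medium";
  return "low";
}

console.log(riskBand(riskScore(4, 5))); // critical
```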
There are two paths. Most forkers want auto-deploy via GitHub Actions — it's the supported production flow. Local-only is for development and iteration.
1. Fork this repo (or clone into a repo you control).
2. Create a KV namespace once, locally, to get an ID:

   ```bash
   npm install
   npx wrangler login
   npx wrangler kv namespace create GRC_KV
   # Copy the id from the output (a 32-char hex string)
   ```

3. Create a Cloudflare API token at dash.cloudflare.com/profile/api-tokens with the "Edit Cloudflare Workers" template.
4. Add the secrets and vars to your forked repo (Settings → Secrets and variables → Actions):
   - Secrets:
     - `CLOUDFLARE_API_TOKEN` — the token from step 3
     - `CLOUDFLARE_KV_ID` — the KV id from step 2
   - Variables (optional):
     - `ORG_NAME` — displayed in the dashboard header
     - `GRC_AUDIENCE` — OIDC audience the dashboard expects on incoming JWTs (defaults to `grc-dashboard`). Set it if you want consumer workflows pointed at your fork to pass a matching `audience:` input so tokens minted for your dashboard can't be replayed against another.
5. Push to `main`. `.github/workflows/deploy.yml` runs automatically: it validates both secrets are present, injects the KV id into `wrangler.toml`, passes `ORG_NAME` via `--var` at deploy time, and runs `npx wrangler deploy`.
After the first successful deploy, the dashboard is live at `https://grc-dashboard.<your-cf-subdomain>.workers.dev`. Point your consuming repos at it via the `dashboard_url` input on the action (step 2 below).
Miniflare provides an in-memory KV namespace, so you don't need a Cloudflare account or real secrets:
```bash
npm install
npx wrangler dev --local
# open http://localhost:8787
```

The committed `wrangler.toml` carries the literal `YOUR_KV_NAMESPACE_ID` placeholder — miniflare ignores it. Don't commit a real id into the file; the auto-deploy workflow injects it at build time.
Skipping OIDC locally. The dashboard verifies every `POST /api/report` against GitHub's OIDC provider. For local iteration, create a `.dev.vars` file (gitignored) at the repo root:

```ini
GRC_AUTH_BYPASS=1
```
Never set this in production — the bypass is a development-only ergonomics flag.
Authentication in production. The dashboard verifies incoming manifest POSTs against GitHub's OIDC provider — no shared secret to configure. Consumer workflows mint a short-lived JWT that the dashboard validates against GitHub's public JWKS, and the token's `repository` claim must match the manifest's `repo` field. Forks that want to scope tokens to their deployment can set `GRC_AUDIENCE` as a repo variable; it's passed to `wrangler deploy --var` alongside `ORG_NAME`. Defaults to `grc-dashboard`.
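The claim checks described above amount to two equality tests. A minimal sketch (illustrative; the worker's real code must first verify the token signature against GitHub's JWKS, which is omitted here):

```typescript
// Illustrative claim validation only - real verification must first
// check the JWT signature against GitHub's OIDC JWKS.
interface OidcClaims {
  aud?: string;        // audience the token was minted for
  repository?: string; // "owner/name" of the workflow's repo
}

function claimsAccepted(
  claims: OidcClaims,
  manifestRepo: string,
  expectedAudience = "grc-dashboard",
): boolean {
  return claims.aud === expectedAudience && claims.repository === manifestRepo;
}
```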
Create `.github/workflows/grc-scan.yml`:

```yaml
name: GRC Compliance Scan

on:
  push:
    branches: [main]
  pull_request:

permissions:
  contents: write       # required for auto-committing generated policies to PR branch
  pull-requests: write
  id-token: write       # required to mint the OIDC JWT the dashboard uses for auth

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: YOUR_ORG/GRC-Observability-Dashboard@main
        with:
          site_url: https://yoursite.com
          dashboard_url: https://your-dashboard.workers.dev
        env:
          GITHUB_TOKEN: ${{ github.token }}
```

If you prefer the action not commit anything, replace `contents: write` with `contents: read`. The scan still runs and the dashboard still updates — generated policies just won't auto-commit.
`id-token: write` is the only required change from the pre-auth workflow shape. The scanner uses GitHub's OIDC provider to mint a short-lived JWT so the dashboard can verify the request came from this specific repository. There are no shared secrets to manage on the consumer side.
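For reference, the token request the action performs under the hood can be sketched like this. The function name is illustrative; the env vars are the real ones GitHub injects into the runner when `id-token: write` is granted:

```typescript
// Builds the URL a workflow step fetches (with the
// ACTIONS_ID_TOKEN_REQUEST_TOKEN bearer token) to mint the OIDC JWT.
// Both env vars exist only when the job has `id-token: write`.
function idTokenRequestUrl(audience: string): string {
  const base = process.env.ACTIONS_ID_TOKEN_REQUEST_URL;
  if (!base) {
    throw new Error("OIDC unavailable: grant `id-token: write` in permissions");
  }
  const url = new URL(base);
  url.searchParams.set("audience", audience);
  return url.toString();
}
```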
If you are pointing the action at a fork of this dashboard that sets a custom `GRC_AUDIENCE`, also pass the matching `audience` input so the JWT is minted against that audience:
```yaml
- uses: YOUR_ORG/GRC-Observability-Dashboard@main
  with:
    dashboard_url: https://your-fork-dashboard.workers.dev
    audience: your-fork-audience   # must match GRC_AUDIENCE on the dashboard
    site_url: https://yoursite.com
```

When pointing at the upstream dashboard, leave `audience` unset — it defaults to `grc-dashboard`.
Create `.grc/config.yml` in each repo:

```yaml
site_name: Your Site
site_url: https://yoursite.com
owner_name: Your Name
contact_email: you@example.com
log_retention_days: 90
jurisdiction:
  - gdpr
  - ccpa

# Optional: where to write generated policies in your repo (default: docs/policies)
# output_dir: docs/policies

# Optional: where your site serves each policy (enables dashboard's Check Production URL verification)
# policy_urls:
#   privacy_policy: /privacy-policy
#   terms_of_service: /legal/terms
#   vulnerability_disclosure: /vulnerability-disclosure
#   security_txt: /.well-known/security.txt
```

In `.gitignore`:

```gitignore
# Regenerated each scan, never commit
.grc/*
!.grc/config.yml
```
Policies live at `docs/policies/` and `.well-known/security.txt` - those DO get committed (via the action).
The dashboard shows compliance posture across all your repos:
- Org-wide stats (score vs. mapped controls, NIST CSF coverage, vulnerabilities, secrets)
- Per-repo detail with data collection, headers, TLS, deps, access controls, artifacts
- NIST CSF tab with per-function scores and SOC 2 / ISO 27001 cross-references
- AI tab with detected AI systems (provider, SDK, category), risk tier, and data flows
- Branch dropdown to compare compliance across branches
- Search/filter by repo name
- Trend tracking over time (last 500 scans per repo)
- "Check Production" button - hits your live URL on demand, verifies security headers, HTTPS enforcement, and any URLs configured in `policy_urls`
| Endpoint | Method | Description |
|---|---|---|
| `/api/report` | POST | Receive a manifest (YAML or JSON) |
| `/api/repos` | GET | All repo summaries |
| `/api/repos/:owner/:name` | GET | Full manifest for a repo |
| `/api/history/:owner/:name` | GET | Historical scan data |
| `/api/branches/:owner/:name` | GET | List of branches scanned for a repo |
| `/api/check-production/:owner/:name` | POST | Re-check live URL (headers, HTTPS, policy URLs) |
| `/health` | GET | Service health probe |
| `/badge?repo=:owner/:name` | GET | SVG badge for a repo |
| `/badge/:owner/:name` | GET | Path-style SVG badge for a repo |
Any scanned repo can expose a public status badge from the deployed worker.
Markdown image:

```markdown
![GRC](https://grc-dashboard.jdeftekhari.workers.dev/badge/shipstuff/GRC-Observability-Dashboard)
```

Linked badge:

```markdown
[![GRC](https://grc-dashboard.jdeftekhari.workers.dev/badge/shipstuff/GRC-Observability-Dashboard)](https://grc-dashboard.jdeftekhari.workers.dev/repo/shipstuff/GRC-Observability-Dashboard)
```

Branch-specific badge:
Badge states:
- `pass NN%` — no critical findings and overall posture is healthy
- `warn NN%` — medium-risk posture, missing controls, or high vulnerabilities
- `fail NN%` — critical vulnerabilities, detected secrets, or very low compliance
- `not scanned` — the dashboard has no manifest for that repo/branch yet
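A rough sketch of how findings could map to those states (the deployed worker's exact thresholds aren't documented here; the numbers below are illustrative):

```typescript
// Illustrative badge-state mapping; real thresholds may differ.
interface Findings {
  criticalVulns: number;
  secretsDetected: number;
  score: number; // 0-100 compliance score
}

function badgeState(f: Findings | null): "pass" | "warn" | "fail" | "not scanned" {
  if (f === null) return "not scanned"; // no manifest for this repo/branch yet
  if (f.criticalVulns > 0 || f.secretsDetected > 0 || f.score < 40) return "fail";
  if (f.score < 75) return "warn";
  return "pass";
}
```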
The GitHub App badge UI is a separate static logo upload. See `docs/badges.md` for the distinction and setup steps.
Run the scanner locally without the dashboard:
```bash
npm run scan -- /path/to/repo --url=https://yoursite.com
```

Reports are written to `/path/to/repo/.grc/`. Policies are written to `/path/to/repo/docs/policies/` and `/path/to/repo/.well-known/security.txt`.
- Forms - HTML forms, input fields, PII classification
- Endpoints - POST/PUT/PATCH route handlers, req.body fields
- Dependencies - `package.json` against 20+ known services (Resend, Stripe, Sentry, Auth0, etc.), `npm audit` for CVEs
- Cookies - server and client cookie usage
- Tracking - Google Analytics, Mixpanel, PostHog, Hotjar, Facebook Pixel, etc.
- Secrets - API keys, tokens, private keys in source
- Access Controls - GitHub Rulesets API for branch protection details (required reviewers, signed commits, rule types), auth middleware on sensitive routes
- Artifacts - existence of policies, security.txt, IRP at configured `output_dir`
- Security Headers - CSP, HSTS, X-Frame-Options, X-Content-Type-Options, Referrer-Policy, Permissions-Policy (live URL check)
- TLS - HTTPS enforcement, certificate expiry (live URL check)
- AI Systems - detects AI SDKs (OpenAI, Anthropic, Cohere, Gemini, HuggingFace, Mistral, Groq, LangChain, LlamaIndex, Vercel AI SDK), training libs (TensorFlow, PyTorch), vector DBs (Pinecone, Weaviate, ChromaDB, Qdrant), and outbound API calls. Supports Node (`package.json`), Python (`requirements.txt`, `pyproject.toml`), and monorepos.
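At its core, the Node side of AI SDK detection is a match of `package.json` dependency names against a known list. A simplified sketch (the shipped rule set is larger and also walks Python manifests, training libs, vector DBs, and outbound API URLs):

```typescript
// Simplified dependency-name matching; illustrative subset of the
// SDKs the scanner knows about.
const KNOWN_AI_SDKS: Record<string, string> = {
  "openai": "OpenAI",
  "@anthropic-ai/sdk": "Anthropic",
  "cohere-ai": "Cohere",
  "langchain": "LangChain",
  "ai": "Vercel AI SDK",
};

function detectAiSdks(pkg: { dependencies?: Record<string, string> }): string[] {
  return Object.keys(pkg.dependencies ?? {})
    .filter((dep) => dep in KNOWN_AI_SDKS)
    .map((dep) => KNOWN_AI_SDKS[dep]);
}
```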
The scanner's Node/JavaScript path is the most mature — forms, endpoints, dependencies, secrets, tracking, and AI SDK detection all work fully on `.ts` / `.tsx` / `.js` / `.jsx` / `.mjs` / `.cjs` trees.
Python support is partial: requirements.txt and pyproject.toml are scanned for AI packages and third-party services, but form/endpoint/secret detection only has basic regex coverage. Flask, Django, and FastAPI idioms aren't specifically recognised yet.
Go, Ruby, Java, Rust, PHP — not meaningfully supported. Files are walked for secret regexes and outbound AI API URL patterns; nothing else. A repo in any of these languages will scan without erroring but the findings list will be sparse compared to a Node repo.
If you're running the scanner against a non-Node repo, expect partial signal and treat missing findings as absence of evidence, not evidence of absence.
Add to `.grc/config.yml`:

```yaml
ai:
  enabled: true
  provider: anthropic # or openai
```

Set `ANTHROPIC_API_KEY` or `OPENAI_API_KEY` as an environment variable or GitHub secret. The scanner works fully without AI.
AI is used for: PII classification of form fields, plain-English risk narratives, PR comment summaries, and gap analysis recommendations.
```
GRC-Observability-Dashboard/
  action.yml          # Composite GitHub Action
  wrangler.toml       # Cloudflare Worker config
  dashboard/
    worker.ts         # Hono API + HTMX UI (Cloudflare Worker)
    views/render.ts   # Dashboard templates
  scanner/
    index.ts          # Scanner entry point
    rules/            # Detection rules
    generators/       # Report generators
    frameworks/       # NIST CSF 2.0 + EU AI Act + cross-mappings to SOC 2 TSC / ISO 27001:2022 / ISO/IEC 42001:2023 / NIST AI RMF
    templates/        # Handlebars policy templates
    ai/               # Optional AI enhancement layer
  examples/
    grc-scan.yml      # Example workflow for consuming repos
  docs/               # Architecture and reference docs
```

Your repo:

```
.github/workflows/grc-scan.yml   # ~15 lines
.grc/config.yml                  # site info + optional policy URLs
docs/policies/                   # auto-generated, committed by the action
.well-known/security.txt         # auto-generated, committed by the action
```
| Field | Required | Description |
|---|---|---|
| `site_name` | yes | Display name used in generated policies |
| `site_url` | yes | Your site's canonical URL |
| `owner_name` | yes | Site owner name |
| `contact_email` | yes | Contact for privacy/legal inquiries |
| `security_contact` | no | Contact for security reports (defaults to `contact_email`) |
| `log_retention_days` | no | Server log retention period (default: 90) |
| `jurisdiction` | no | `gdpr`, `ccpa`, etc. (default: `[gdpr, ccpa]`) |
| `output_dir` | no | Where to write policy files (default: `docs/policies`) |
| `policy_urls` | no | URLs at which your site serves each policy (enables Check Production URL verification) |
| `ai.enabled` | no | Opt in to AI enhancements (default: `false`) |
| `ai.provider` | no | `anthropic` or `openai` (default: `anthropic`) |
| Setting | Description |
|---|---|
| `name` | Worker name (used in URL: `name.your-account.workers.dev`) |
| `ORG_NAME` | Displayed in dashboard header (optional, set in `[vars]`) |
| KV namespace id | Your KV storage ID (from `npx wrangler kv namespace create GRC_KV`) |
| Input | Description | Required |
|---|---|---|
| `site_url` | Live URL for the Check Production button | No |
| `dashboard_url` | Dashboard URL to POST manifests to | No |
Dev loop, adding scan rules, adding policy templates, adding frameworks — all in CONTRIBUTING.md.
- GitHub App (zero-config install, no workflow file needed per repo)
- SBOM generation (CycloneDX)
- SAST via Semgrep integration
- Auditor evidence export (PDF/ZIP per framework)