chore(docs): standardize public messaging and claims boundary #85
chrismaz11 wants to merge 11 commits into master
Conversation
Co-authored-by: chrismaz11 <24700273+chrismaz11@users.noreply.github.com>
```ts
  apiKey: string,
  secret = process.env.API_KEY_FINGERPRINT_SECRET || DEV_API_KEY_FINGERPRINT_SECRET
): string {
  return createHmac('sha256', secret).update(apiKey).digest('hex').slice(0, 16);
```
Check failure (Code scanning / CodeQL): Use of password hash with insufficient computational effort (High)
Copilot Autofix (AI, about 14 hours ago)
In general, to fix "password hash with insufficient computational effort," replace fast hash constructions (like md5, sha1, or sha256/HMAC alone) with a password hashing / key-derivation function designed to be slow and optionally memory-hard, such as bcrypt, scrypt, PBKDF2, or Argon2. These functions drastically increase the cost of offline guessing attacks if the derived values are exposed.

For this specific code, the best fix with minimal behavioral change is:

- Keep the idea of a deterministic API-key fingerprint, but derive it using a KDF that supports configurable iterations, such as `crypto.pbkdf2Sync` with HMAC-SHA256.
- Use the existing secret (`API_KEY_FINGERPRINT_SECRET` or the development fallback) as the PBKDF2 salt input, and the API key as the password input.
- Configure a modest but non-trivial iteration count that is acceptable for this use (for example 100,000) and a small output length (e.g., 16 bytes, then hex-encode and slice to 16 characters to preserve the current external representation size).
- Keep the return type and shape the same (a 16-character hex string) so that callers (`requireApiKeyScope` and `getApiRateLimitKey`) continue to work without further changes.

Concretely, in apps/api/src/security.ts:

- Add `pbkdf2Sync` to the existing import from `node:crypto`.
- Update `fingerprintApiKey` to use `pbkdf2Sync` instead of `createHmac`. The function will still return a 16-char hex string, but now derived using PBKDF2 with sufficient iterations.

No other files or call sites need functional changes.
```diff
@@ -1,4 +1,4 @@
-import { createHmac, generateKeyPairSync } from 'node:crypto';
+import { createHmac, generateKeyPairSync, pbkdf2Sync } from 'node:crypto';
 import { getAddress, verifyMessage } from 'ethers';
 import { FastifyReply, FastifyRequest } from 'fastify';
@@ -273,7 +273,10 @@
   apiKey: string,
   secret = process.env.API_KEY_FINGERPRINT_SECRET || DEV_API_KEY_FINGERPRINT_SECRET
 ): string {
-  return createHmac('sha256', secret).update(apiKey).digest('hex').slice(0, 16);
+  // Derive a short, deterministic fingerprint using an iterated KDF rather than a fast hash.
+  // Keep the external representation (16 hex chars) stable for existing callers.
+  const derived = pbkdf2Sync(apiKey, secret, 100_000, 16, 'sha256');
+  return derived.toString('hex').slice(0, 16);
 }

 export function requireApiKeyScope(config: SecurityConfig, requiredScope: AuthScope) {
```
```ts
function stripHtml(text: string): string {
  return text
    .replace(/<script[\s\S]*?<\/script>/gi, ' ')
```
Check failure (Code scanning / CodeQL): Bad HTML filtering regexp (High)
Copilot Autofix (AI, 3 days ago)
In general, the problem is that the script-removal regex only matches </script> with no trailing whitespace or attributes. To fix this while keeping the behavior the same otherwise, expand the closing-tag part of the regex to allow optional whitespace and attributes before the closing >. We should also keep it case-insensitive and non-greedy, as in the original.
The best minimal change is to adjust only the first .replace in stripHtml:

- From: `/<script[\s\S]*?<\/script>/gi`
- To: `/<script\b[\s\S]*?<\/script\b[^>]*>/gi`

Changes:

- Add `\b` after `script` in the opening tag to avoid matching longer words.
- Change the closing tag to `</script\b[^>]*>` to permit variants such as `</script >` and `</script foo="bar">`, including odd spacing, which browsers accept.

We keep all other replacements intact. No new imports or external libraries are required, and we stay within apps/api/src/services/registryAdapters.ts, modifying only the lines shown.
```diff
@@ -809,7 +809,7 @@
 function stripHtml(text: string): string {
   return text
-    .replace(/<script[\s\S]*?<\/script>/gi, ' ')
+    .replace(/<script\b[\s\S]*?<\/script\b[^>]*>/gi, ' ')
     .replace(/<style[\s\S]*?<\/style>/gi, ' ')
     .replace(/<\/?(br|p|tr|td|th|li|div|section|article|h[1-6])[^>]*>/gi, ' ')
     .replace(/<[^>]+>/g, ' ')
```
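A quick sketch comparing the two patterns on closing-tag variants with whitespace and attributes (the input string is illustrative):

```javascript
// Original pattern: only matches a bare `</script>` closing tag.
const oldPattern = /<script[\s\S]*?<\/script>/gi;
// Tightened pattern: also matches `</script >`, `</script foo="bar">`, etc.
const newPattern = /<script\b[\s\S]*?<\/script\b[^>]*>/gi;

const html = 'a<script>alert(1)</script >b<script type="x">x</script foo="bar">c';
const oldResult = html.replace(oldPattern, ' '); // both script blocks survive
const newResult = html.replace(newPattern, ' '); // both script blocks stripped
```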
Pull request overview
This PR broadly rebrands “Deed Shield” surfaces to “TrustSignal” across documentation, packages, demos, and ops artifacts, while also introducing several substantive runtime/security/registry and circuit-proof changes (despite the PR description claiming docs-only scope).
Changes:
- Rebrand identifiers and public-facing messaging across docs, packages, demo UI, and monitoring artifacts.
- Update API/server behavior: revocation pre-handler, API-key fingerprinting, rate-limit configuration, env loading changes, and expanded registry adapter/snapshot logic with new tests and schema/db updates.
- Add/adjust repo guardrails and CI/security workflows (Trivy, dependency review, zizmor) plus CODEOWNERS and gitignore updates.
Reviewed changes
Copilot reviewed 62 out of 66 changed files in this pull request and generated 9 comments.
Summary per file:
| File | Description |
|---|---|
| trustsignal_tests.sh | Adds a bash-based API test runner for ingest/verify flows. |
| src/routes/verify.ts | Adds per-route rate limiting config for /v1/verify-bundle. |
| src/api/verify.js | Rebrands legacy demo HTML title/header. |
| sdk/index.ts | Replaces regex trailing-slash removal with a helper function. |
| packages/core/src/receiptSigner.test.ts | Updates tests to use trustsignal verifierId. |
| packages/core/src/receipt.ts | Changes default verifierId to trustsignal. |
| packages/core/package.json | Renames package scope to @trustsignal/core. |
| packages/contracts/package.json | Renames package scope to @trustsignal/contracts. |
| packages/README.md | Updates workspace package naming guidance. |
| package.json | Renames repo/package name to trustsignal. |
| package-lock.json | Updates workspace names and links for renamed packages. |
| `ml/model/__pycache__/common.cpython-314.pyc` | Adds a Python bytecode artifact (should be excluded). |
| docs/security-summary.md | Rewrites public security summary opening + audience section. |
| docs/public-private-boundary.md | Updates public/private boundary framing and guardrails text. |
| docs/partner-eval/integration-model.md | Updates wording from DeedShield to TrustSignal surface. |
| docs/ops/monitoring/grafana-dashboard-deedshield-api.json | Rebrands dashboard title. |
| docs/ops/monitoring/alert-rules.yml | Rebrands alert names (expressions remain deedshield-prefixed). |
| docs/ops/monitoring/README.md | Rebrands monitoring README wording. |
| docs/legal/terms-of-service.md | Updates product/module naming in legal terms. |
| docs/legal/privacy-policy.md | Updates naming and scope wording. |
| docs/legal/pilot-agreement.md | Updates pilot program naming. |
| docs/legal/cookie-policy.md | Updates naming for cookie policy. |
| docs/final/14_VANTA_INTEGRATION_USE_CASE.md | Rebrands “module” fields and vertical naming. |
| docs/final/11_NSF_GRANT_WHITEPAPER.md | Removes DeedShield wedge naming from abstract. |
| docs/final/10_INCIDENT_ESCALATION_AND_SLO_BASELINE.md | Rebrands scope statement. |
| docs/final/01_EXECUTIVE_SUMMARY.md | Rebrands title and product-position language. |
| docs/customer/pilot-handbook.md | Rebrands pilot handbook welcome text. |
| docs/compliance/kpmg-evidence-index.md | Adds machine-generated evidence index doc. |
| docs/compliance/kpmg-enterprise-readiness-validation-report.md | Adds machine-generated readiness validation report. |
| docs/compliance/kpmg-enterprise-audit-runbook.md | Adds audit runbook for diligence execution. |
| docs/IT_INSTALLATION_MANUAL.md | Rebrands installation manual and PRIA mapping language. |
| docs/IMPLEMENTATION_PLAN_PASSIVE_INSPECTOR.md | Rebrands watcher plan references + package scope. |
| docs/CANONICAL_MESSAGING.md | Adjusts canonical messaging guidance wording. |
| circuits/non_mem_gadget/src/revocation.rs | Switches revocation prove path to shared proof helper. |
| circuits/non_mem_gadget/src/lib.rs | Replaces MockProver with create/verify proof flow helper. |
| bench/run-bench.ts | Updates benchmark receipt builder verifierId. |
| apps/web/src/contexts/OperatorContext.tsx | Migrates localStorage key from deed-shield to trustsignal. |
| apps/web/src/app/verify-artifact/page.tsx | Replaces page with redirect alias to /verify. |
| apps/web/package.json | Renames package to @trustsignal/web. |
| apps/watcher/src/index.js | Rebrands watcher log strings and notification titles. |
| apps/api/src/services/registryAdapters.ts | Expands registry sources, adds snapshots, new provider types, job metadata, and test hooks. |
| apps/api/src/services/compliance.ts | Updates/duplicates system prompt wording. |
| apps/api/src/server.ts | Rebrands verifierId/service name, refactors revocation auth handling, adjusts per-route rate limit config. |
| apps/api/src/security.ts | Switches API-key fingerprinting to HMAC + adds revocation auth context typing. |
| apps/api/src/security-hardening.test.ts | Adds stronger revocation negative/edge-case coverage. |
| apps/api/src/registryLoader.test.ts | Updates imports to @trustsignal/core. |
| apps/api/src/registry-adapters.test.ts | Expands expected registry source IDs list. |
| apps/api/src/registry-adapters-new-sources.test.ts | Adds new test coverage for new registry sources and snapshot behavior. |
| apps/api/src/env.ts | Switches env loading to dotenv + expands DB URL resolution logic. |
| apps/api/src/db.ts | Adds DB columns for accessType/jobType/snapshot metadata. |
| apps/api/prisma/schema.prisma | Adds Prisma fields for RegistrySource/accessType and RegistryOracleJob metadata. |
| apps/api/package.json | Renames package and updates core dependency scope. |
| apps/api/SETUP.md | Rebrands setup doc and docker container naming. |
| apps/api/.env.example | Adds env vars for new registry sources + snapshot dir. |
| USER_MANUAL.md | Rebrands user manual language. |
| SECURITY_CHECKLIST.md | Rebrands checklist title and references. |
| README.md | Rewrites intro and updates links/auth guidance (now claims bearer-token flow). |
| .gitignore | Adds Python cache/pyc ignore patterns. |
| .github/workflows/zizmor.yml | Adds zizmor advisory workflow audit job. |
| .github/workflows/trivy.yml | Adds Trivy filesystem scanning with SARIF upload. |
| .github/workflows/main.yml | Pins checkout action and adds minimal permissions block. |
| .github/workflows/dependency-review.yml | Adds dependency diff review workflow. |
| .github/workflows/copilotsetupsteps.yml | Pins actions and adds workflow permissions. |
| .github/workflows/ci.yml | Pins actions and adds workflow permissions. |
| .github/CODEOWNERS | Adds CODEOWNERS entries for repo areas. |
Comments suppressed due to low confidence (1)
docs/ops/monitoring/alert-rules.yml:77
- Alert names were rebranded to `TrustSignal*`, but the rule expressions and labels still reference `deedshield_*` metrics and `service: deedshield-api`. If the intent is a full rename, these should be updated together (or keep the legacy alert names) to avoid confusion and broken alerting when metric/service naming diverges.
```bash
API_URL="https://api.trustsignal.dev"
API_KEY="d4a2bd92be56c54905a99f2b5709e14064e9eaeb99c44aa74898125aedf5028a"
```
The test script hard-codes an API key value in-repo. Even if this is a non-production key, committing it risks accidental reuse and will trigger secret scanners. Prefer reading API_KEY from the environment (or prompting) and fail with a clear message when it’s missing.
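One way to do that, sketched in bash (the helper name and message are illustrative):

```shell
#!/usr/bin/env bash
# Resolve the key from the environment instead of committing a literal value,
# and fail with a clear message when it is missing.
API_URL="${API_URL:-https://api.trustsignal.dev}"

api_key_or_die() {
  if [ -z "${API_KEY:-}" ]; then
    echo "error: API_KEY is not set; export it before running the tests" >&2
    return 1
  fi
  printf '%s' "$API_KEY"
}
```

The script can then call `api_key_or_die` once at startup and abort before any requests are made.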
```bash
curl -X POST "http://localhost:3001/api/v1/verify" \
  -H "Content-Type: application/json" \
  -H "x-api-key: $TRUSTSIGNAL_API_KEY" \
  -H "Authorization: Bearer $TRUSTSIGNAL_ACCESS_TOKEN" \
  --data @examples/verification-request.json
```
The README’s curl example uses Authorization: Bearer $TRUSTSIGNAL_ACCESS_TOKEN, but the /api/v1/verify route in apps/api/src/server.ts is guarded by requireApiKeyScope (expects x-api-key). Either update the docs back to x-api-key for this API surface, or implement bearer-token auth in the server so the example matches reality.
```ts
return {
  status: buildMatches(subject, snapshot.candidates).length > 0 ? 'MATCH' : 'NO_MATCH',
  matches: buildMatches(subject, snapshot.candidates),
```
For snapshot-backed sources, buildMatches(subject, snapshot.candidates) is computed twice (once to determine status and again for the returned matches). This repeats scoring/sorting work unnecessarily; compute it once and reuse the result for both status and matches.
```diff
-return {
-  status: buildMatches(subject, snapshot.candidates).length > 0 ? 'MATCH' : 'NO_MATCH',
-  matches: buildMatches(subject, snapshot.candidates),
+const matches = buildMatches(subject, snapshot.candidates);
+return {
+  status: matches.length > 0 ? 'MATCH' : 'NO_MATCH',
+  matches,
```
```ts
function resolveSnapshotRoot(snapshotDir?: string): string {
  return snapshotDir || process.env.REGISTRY_SNAPSHOT_DIR || path.resolve(__dirname, '../..', '.registry-snapshots');
}

function snapshotPath(sourceId: RegistrySourceId, snapshotDir?: string): string {
  return path.join(resolveSnapshotRoot(snapshotDir), `${sourceId}.json`);
}
```
The default snapshot directory is derived from __dirname. This will resolve to different locations when running via tsx (src) vs node dist (dist), which can lead to snapshots being written under dist/ in production. Prefer a stable default like process.cwd() (still overridable via REGISTRY_SNAPSHOT_DIR) so snapshot storage is consistent across run modes.
```text
TRUSTSIGNAL LLM SYSTEM PROMPT: Cook County Clerk Recording Requirements
Your Role
You are an AI assistant integrated into TrustSignal, a deed verification and title company automation platform. Your primary responsibility is to validate real estate documents against Cook County Clerk's Office recording requirements and identify policy mismatches before submission.
You are an AI assistant integrated into TrustSignal, an evidence verification and title workflow automation platform. Your primary responsibility is to validate real estate documents against Cook County Clerk's Office recording requirements and identify policy mismatches before submission.
```
The Cook County system prompt now contains two consecutive sentences starting with “You are an AI assistant integrated into TrustSignal...” with slightly different product descriptions. This duplication is likely accidental and can dilute prompt clarity; keep a single canonical version.
```suggestion
You are an AI assistant integrated into TrustSignal, an evidence verification and title workflow automation platform. Your primary responsibility is to validate real estate documents against Cook County Clerk's Office recording requirements and identify policy mismatches before submission.
```
```md
## Coverage metrics

Source: `/Users/christopher/Projects/TSREPO/trustsignal/coverage/coverage-summary.json`
```
This report includes absolute local paths (e.g., /Users/.../coverage/coverage-summary.json). These paths won’t exist for other readers and may leak workstation details; prefer repo-relative paths or describe outputs in a location-agnostic way.
```diff
@@ -1,5 +1,5 @@
 {
-  "name": "deed-shield",
+  "name": "trustsignal",
```
PR description says this is docs-only with no runtime changes, but this PR includes substantive code-path changes (API auth/rate limiting/registry adapters, Rust proof generation, package renames). Please update the PR title/description/scope or split into smaller PRs so reviewers can assess runtime impact appropriately.
```md
## Guardrails
- Route handlers must call the narrow engine interface and must not import engine internals directly.
- Public gateway code uses import restrictions and checks (e.g., `npm run check:api-boundary`) to prevent accidental leakage of private engine code.
```
This doc references npm run check:api-boundary, but there doesn’t appear to be any script/command with that name in the repo. Either add the script (and describe where it lives) or adjust the guardrails section to reference an actual enforced check so readers don’t assume a protection exists when it doesn’t.
```md
- `/Users/christopher/Projects/TSREPO/trustsignal/coverage/coverage-summary.json`

## In-repo supporting documents

- [Enterprise audit runbook](/Users/christopher/Projects/TSREPO/trustsignal/docs/compliance/kpmg-enterprise-audit-runbook.md)
- [Validation report](/Users/christopher/Projects/TSREPO/trustsignal/docs/compliance/kpmg-enterprise-readiness-validation-report.md)
- [Security checklist](/Users/christopher/Projects/TSREPO/trustsignal/SECURITY_CHECKLIST.md)
```
This file contains absolute local machine paths (e.g., /Users/...) which are not portable and leak developer workstation details into the repo. Replace with repo-relative links/paths (or remove the path references) so the document is usable for external reviewers.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: e4bffa380f
```diff
   reasons: JSON.parse(record.reasons) as string[],
   riskScore: record.riskScore,
-  verifierId: 'deed-shield',
+  verifierId: 'trustsignal',
```
Preserve legacy verifierId when hydrating stored receipts
Changing the reconstructed verifierId to trustsignal for every row breaks verification of receipts created before the rebrand (which were signed with deed-shield). verifyStoredReceipt recomputes the unsigned payload hash/signature from this reconstructed object, so historical records will now fail integrity/signature checks even though the stored receipt data is otherwise unchanged.
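A minimal sketch of the fix the comment implies, assuming the stored `verifierId` column exists on the record (a hypothetical schema detail, not confirmed by the diff):

```javascript
// Hydrate the verifierId actually stored with the row; only rows written
// before the column existed fall back to the legacy id they were signed with.
function hydrateVerifierId(record) {
  return record.verifierId ?? 'deed-shield';
}
```

New receipts can then be written with `'trustsignal'` while historical `'deed-shield'` signatures keep verifying.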
```ts
const parsed = new URL(rawPoolerUrl);
if (!parsed.password) {
  parsed.password = encodeURIComponent(supabasePassword);
```
Assign raw Supabase password instead of encoded string
In the Supabase pooler fallback path, URL.password is being set to encodeURIComponent(supabasePassword), but URL.password already handles percent-encoding. This double-encodes credentials containing reserved characters (for example % becoming %25 again), which produces an invalid DATABASE_URL and can cause authentication failures when connecting to Postgres.
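The behavior is easy to demonstrate; a sketch with an illustrative pooler URL and passwords:

```javascript
// The WHATWG URL `password` setter percent-encodes reserved characters itself,
// so pre-encoding with encodeURIComponent double-encodes literal '%' signs.
const raw = 'p%40ss'; // a password containing a literal '%'

const direct = new URL('postgres://user:old@db.example.com:5432/app');
direct.password = raw; // stored as-is: 'p%40ss'

const doubled = new URL('postgres://user:old@db.example.com:5432/app');
doubled.password = encodeURIComponent(raw); // '%' re-encoded: 'p%2540ss'

const encoded = new URL('postgres://user:old@db.example.com:5432/app');
encoded.password = 'p@ss'; // '@' is percent-encoded by the setter itself
```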
```ts
  </div>
);
export default function VerifyArtifactAliasPage() {
  redirect('/verify');
```
Keep /verify-artifact mapped to artifact verification UI
This redirect removes the dedicated artifact-verification flow from /verify-artifact and sends users to /verify, which is a different bundle-verification path. Since ArtifactVerifyClient is no longer mounted by any page, existing links/bookmarks for local artifact fingerprint verification now lose that functionality.
Useful? React with 👍 / 👎.
…ients, P1 fixes

- Add browser OAuth user registration and login endpoints (/api/v1/auth/register, /api/v1/auth/login, /api/v1/auth/logout)
- Add machine client registration with Ed25519 JWK support (POST /api/v1/clients)
- Add GET /api/v1/clients endpoint filtered by userEmail for client listing
- Add replayStore for webhook delivery deduplication; fix release() to actually delete the record on failure
- Fix parseCookieHeader to wrap decodeURIComponent in try/catch, preventing URIError crashes on malformed cookies
- Fix appendQueryParams to split on '?' instead of using new URL(), avoiding throws on relative paths
- Add Prisma migrations for artifact receipts, machine clients, and browser OAuth session storage
- Extract auth helpers into auth.ts module

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
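The parseCookieHeader hardening mentioned above amounts to a guarded decode; a sketch (the helper name is illustrative):

```javascript
// Wrap decodeURIComponent so a malformed percent-escape in a cookie value
// falls back to the raw string instead of throwing URIError.
function safeDecodeCookieValue(value) {
  try {
    return decodeURIComponent(value);
  } catch {
    return value; // malformed escape sequence; keep the raw cookie value
  }
}
```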
```ts
}

export function hashOpaqueToken(value: string): string {
  return createHash('sha256').update(value).digest('hex');
```
Check failure (Code scanning / CodeQL): Use of password hash with insufficient computational effort (High)
Copilot Autofix (AI, about 10 hours ago)
Copilot could not generate an autofix suggestion for this alert. Try pushing a new commit or if the problem persists contact support.
```ts
}, async (request, reply) => {
  const parsed = clientRegistrationSchema.safeParse(request.body);
  if (!parsed.success) {
    return reply.code(400).send({ error: 'invalid_client_registration', details: parsed.error.flatten() });
  }

  if (parsed.data.clientType === 'browser') {
    const session = await loadAuthenticatedBrowserSession(request);
    if (!session) {
      return reply.code(401).send({ error: 'login_required' });
    }
    if (session.user.disabledAt) {
      reply.header('set-cookie', clearOauthSessionCookie());
      return reply.code(403).send({ error: 'user_disabled' });
    }

    const browserClient = await prisma.client.create({
      data: {
        name: parsed.data.name || 'Browser OAuth client',
        userEmail: parsed.data.userEmail || session.user.email || null,
        clientType: 'browser',
        scopes: (parsed.data.scopes || ['read']).join(' '),
        ownerUserId: session.user.id,
        createdBy: session.user.id,
        redirectUris: {
          create: (parsed.data.redirectUris || []).map((redirectUri: string) => ({
            redirectUri
          }))
        }
      },
      include: {
        redirectUris: true
      }
    });

    return reply.code(201).send({
      client_id: browserClient.id,
      client_type: browserClient.clientType,
      scope: browserClient.scopes,
      owner_user_id: session.user.id,
      redirect_uris: browserClient.redirectUris.map((entry: { redirectUri: string }) => entry.redirectUri),
      authorization_endpoint: buildExternalUrl(request, '/api/v1/oauth/authorize') || '/api/v1/oauth/authorize',
      token_endpoint: buildExternalUrl(request, '/api/v1/token') || '/api/v1/token',
      token_endpoint_auth_method: 'none',
      grant_types: ['authorization_code'],
      response_types: ['code'],
      pkce_required: true
    });
  }

  const client = await prisma.client.create({
    data: {
      name: parsed.data.name,
      userEmail: parsed.data.userEmail,
      clientType: parsed.data.clientType,
      scopes: (parsed.data.scopes || ['verify', 'read']).join(' '),
      jwks: parsed.data.jwks ? JSON.parse(JSON.stringify(parsed.data.jwks)) : undefined,
      jwksUrl: parsed.data.jwksUrl,
      createdBy: request.ip
    }
  });

  return reply.code(201).send({
    client_id: client.id,
    client_type: client.clientType,
    plan: client.plan,
    usage_limit: client.usageLimit,
    scope: client.scopes,
    token_endpoint: buildExternalUrl(request, '/api/v1/token') || '/api/v1/token',
    token_endpoint_auth_method: 'private_key_jwt',
    grant_types: ['client_credentials'],
    legacy_api_key_support: true
  });
});
```
Check failure (Code scanning / CodeQL): Missing rate limiting (High)
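One way to address this, assuming the `@fastify/rate-limit` plugin is registered globally (as the login route's autofix later in this review suggests), is a per-route options object; the limits here are illustrative:

```javascript
// Route-level options passed as the second argument to app.post(path, options, handler).
// @fastify/rate-limit reads config.rateLimit when registered with global: false or per-route overrides.
const clientRegistrationRouteOptions = {
  config: {
    rateLimit: {
      max: 10,                              // registrations per IP per window
      timeWindow: '1 minute',
      keyGenerator: (request) => request.ip
    }
  }
};
```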
```ts
    return reply.code(401).send({ error: 'login_required' });
  }
  if (session.user.disabledAt) {
    reply.header('set-cookie', clearOauthSessionCookie());
```
Check failure (Code scanning / CodeQL): Clear text storage of sensitive information (High)
Copilot Autofix (AI, about 10 hours ago)
Copilot could not generate an autofix suggestion for this alert. Try pushing a new commit or if the problem persists contact support.
```ts
}, async (request, reply) => {
  const parsed = oauthLoginSchema.safeParse(request.body);
  if (!parsed.success) {
    oauthLoginTotal.inc({ outcome: 'failure' });
    return reply.code(400).send({ error: 'invalid_oauth_login', details: parsed.error.flatten() });
  }

  const email = parsed.data.email.trim().toLowerCase();
  const user = await userAccountStore.findUnique({
    where: { email }
  });
  if (!user || !(await verifyPasswordHash(parsed.data.password, user.passwordHash, user.passwordSalt))) {
    oauthLoginTotal.inc({ outcome: 'failure' });
    return reply.code(401).send({ error: 'invalid_credentials' });
  }
  if (user.disabledAt) {
    oauthLoginTotal.inc({ outcome: 'failure' });
    reply.header('set-cookie', clearOauthSessionCookie());
    return reply.code(403).send({ error: 'user_disabled' });
  }

  const session = await issueBrowserSession(request, user.id);
  await userAccountStore.update({
    where: { id: user.id },
    data: { lastLoginAt: new Date() }
  });
  reply.header('set-cookie', buildOauthSetCookie(session.cookieValue));
  oauthLoginTotal.inc({ outcome: 'success' });

  const returnTo = resolveReturnTo(request, parsed.data.return_to);
  if (parsed.data.return_to) {
    return reply.redirect(returnTo, 303);
  }

  return reply.send({
    status: 'authenticated',
    user_id: user.id,
    email: user.email,
    display_name: user.displayName
  });
});
```
Check failure (Code scanning / CodeQL): Missing rate limiting (High)
Copilot Autofix (AI, about 14 hours ago)
In general, to fix missing rate limiting on an authorization-related route in a Fastify app, you configure the @fastify/rate-limit plugin globally (already present in this project) and add per-route config.rateLimit settings so that expensive or security-sensitive handlers cannot be abused via high request volumes. Here, we should add a rateLimit configuration to the /api/v1/auth/logout route, similar in spirit to the existing login rate limit on /api/v1/auth/login, but with appropriately relaxed limits since logout is cheaper.
Concretely, in apps/api/src/server.ts, locate the app.post('/api/v1/auth/logout', async (request, reply) => { ... }); definition around lines 1693–1709. Change this route definition to use the Fastify options object form with a config property that specifies a rateLimit block, analogous to the login handler above. For example, we can allow a higher volume of logouts (e.g., max: 60 per minute per IP) to avoid impacting normal users while still mitigating abuse. The plugin import already exists (import rateLimit from '@fastify/rate-limit';), and no new helper methods are needed; we only add the route-level configuration object between the path string and the async handler. No other files or lines need to be modified.
```diff
@@ -1690,7 +1690,15 @@
   });
 });

-app.post('/api/v1/auth/logout', async (request, reply) => {
+app.post('/api/v1/auth/logout', {
+  config: {
+    rateLimit: {
+      max: 60,
+      timeWindow: '1 minute',
+      keyGenerator: (request) => request.ip
+    }
+  }
+}, async (request, reply) => {
   const session = await loadAuthenticatedBrowserSession(request);
   if (session) {
     await browserSessionStore.updateMany({
```
```ts
  }
  if (user.disabledAt) {
    oauthLoginTotal.inc({ outcome: 'failure' });
    reply.header('set-cookie', clearOauthSessionCookie());
```
Check failure (Code scanning / CodeQL): Clear text storage of sensitive information (High)
Copilot Autofix (AI, about 10 hours ago)
Copilot could not generate an autofix suggestion for this alert. Try pushing a new commit or if the problem persists contact support.
```ts
  if (session.user.disabledAt) {
    oauthAuthorizationCodeTotal.inc({ outcome: 'failure' });
    reply.header('set-cookie', clearOauthSessionCookie());
```
Check failure (Code scanning / CodeQL): Clear text storage of sensitive information (High)
Copilot Autofix (AI, about 10 hours ago)
Copilot could not generate an autofix suggestion for this alert. Try pushing a new commit or if the problem persists contact support.
```ts
app.post('/api/v1/oauth/authorize/consent', async (request, reply) => {
  const parsed = oauthAuthorizeDecisionSchema.safeParse(request.body);
  if (!parsed.success) {
    oauthAuthorizationCodeTotal.inc({ outcome: 'failure' });
    return reply.code(400).send({ error: 'invalid_oauth_authorize_decision', details: parsed.error.flatten() });
  }

  const session = await loadAuthenticatedBrowserSession(request);
  if (!session) {
    oauthAuthorizationCodeTotal.inc({ outcome: 'failure' });
    const authorizationRequest = await oauthAuthorizationRequestStore.findUnique({
      where: { id: parsed.data.request_id }
    });
    if (!authorizationRequest || authorizationRequest.consumedAt || authorizationRequest.expiresAt <= new Date()) {
      return reply.code(401).send({ error: 'login_required' });
    }

    return reply.redirect(
      `/api/v1/auth/login?return_to=${encodeURIComponent(
        buildAuthorizationReturnTo({
          clientId: authorizationRequest.clientId,
          redirectUri: authorizationRequest.redirectUri,
          scope: authorizationRequest.scope,
          state: authorizationRequest.state,
          codeChallenge: authorizationRequest.codeChallenge,
          codeChallengeMethod: authorizationRequest.codeChallengeMethod
        })
      )}`,
      303
    );
  }

  if (session.user.disabledAt) {
    oauthAuthorizationCodeTotal.inc({ outcome: 'failure' });
    reply.header('set-cookie', clearOauthSessionCookie());
    return reply.code(403).send({ error: 'user_disabled' });
  }

  const authorizationRequest = await oauthAuthorizationRequestStore.findUnique({
    where: { id: parsed.data.request_id },
    include: {
      client: {
        include: {
          redirectUris: true
        }
      },
      user: true
    }
  });
  if (
    !authorizationRequest ||
    authorizationRequest.consumedAt ||
    authorizationRequest.expiresAt <= new Date() ||
    authorizationRequest.client.revokedAt ||
    authorizationRequest.client.clientType !== 'browser' ||
    authorizationRequest.userId !== session.user.id
  ) {
    oauthAuthorizationCodeTotal.inc({ outcome: 'failure' });
    return reply.code(400).send({ error: 'invalid_authorization_request' });
  }

  const consumed = await oauthAuthorizationRequestStore.updateMany({
    where: {
      id: authorizationRequest.id,
      consumedAt: null
    },
    data: {
      consumedAt: new Date()
    }
  });
  if (consumed.count === 0) {
    oauthAuthorizationCodeTotal.inc({ outcome: 'failure' });
    return reply.code(409).send({ error: 'authorization_request_consumed' });
  }

  if (parsed.data.decision === 'deny') {
    oauthAuthorizationCodeTotal.inc({ outcome: 'failure' });
    return redirectWithOauthResult(reply, authorizationRequest.redirectUri, {
      error: 'access_denied',
      state: authorizationRequest.state || undefined
    });
  }

  const existingConsentGrant = await oauthConsentGrantStore.findUnique({
    where: {
      clientId_userId: {
        clientId: authorizationRequest.clientId,
        userId: session.user.id
      }
    }
  });
  const mergedConsentScopes = Array.from(
    new Set(`${existingConsentGrant?.grantedScopes || ''} ${authorizationRequest.scope}`.trim().split(/\s+/).filter(Boolean))
  ).join(' ');

  await oauthConsentGrantStore.upsert({
    where: {
      clientId_userId: {
        clientId: authorizationRequest.clientId,
        userId: session.user.id
      }
    },
    update: {
      grantedScopes: mergedConsentScopes,
      revokedAt: null
    },
    create: {
      clientId: authorizationRequest.clientId,
      userId: session.user.id,
      grantedScopes: authorizationRequest.scope
    }
  });

  const code = await issueAuthorizationCode({
    clientId: authorizationRequest.clientId,
    userId: session.user.id,
    redirectUri: authorizationRequest.redirectUri,
    scope: authorizationRequest.scope,
    state: authorizationRequest.state || undefined,
    codeChallenge: authorizationRequest.codeChallenge
  });
  oauthAuthorizationCodeTotal.inc({ outcome: 'success' });
```
| return redirectWithOauthResult(reply, authorizationRequest.redirectUri, { | ||
| code, | ||
| state: authorizationRequest.state || undefined | ||
| }); | ||
| }); |
**Check failure** — Code scanning / CodeQL: Missing rate limiting (High)

**Copilot Autofix** (about 14 hours ago):
In general, the problem is fixed by applying appropriate rate limiting to HTTP handlers that perform authorization or other expensive operations, so that a single client cannot overwhelm the service with repeated requests. This project already uses Fastify’s @fastify/rate-limit plugin and defines a perApiKeyRateLimit configuration that is applied to other sensitive routes; the best fix is to apply the same pattern to the OAuth consent POST route.
Concretely, in apps/api/src/server.ts, locate the app.post('/api/v1/oauth/authorize/consent', ...) definition around line 1819. Currently it is declared with only the URL and the handler function. We should change it to use the object‑form route options, adding config: { rateLimit: perApiKeyRateLimit } (or another appropriate limiter that is already defined in this file) so that Fastify’s rate‑limit plugin will enforce limits on this endpoint. The project already imports and uses rateLimit and perApiKeyRateLimit elsewhere (e.g., for /api/v1/clients/:clientId/keys), so no new imports or helper definitions are required—just adding the config property to this route’s options.
This keeps existing functionality and behavior intact: the handler logic remains unchanged; we only add metadata for Fastify’s route options. All listed alert variants (1–1e) are addressed because they refer to calls within this same handler, and rate limiting at the route level protects all of them.
```diff
@@ -1816,7 +1816,9 @@
   );
 });

-app.post('/api/v1/oauth/authorize/consent', async (request, reply) => {
+app.post('/api/v1/oauth/authorize/consent', {
+  config: { rateLimit: perApiKeyRateLimit }
+}, async (request, reply) => {
   const parsed = oauthAuthorizeDecisionSchema.safeParse(request.body);
   if (!parsed.success) {
     oauthAuthorizationCodeTotal.inc({ outcome: 'failure' });
```
```ts
  if (session.user.disabledAt) {
    oauthAuthorizationCodeTotal.inc({ outcome: 'failure' });
    reply.header('set-cookie', clearOauthSessionCookie());
```

**Check failure** — Code scanning / CodeQL: Clear text storage of sensitive information (High)

**Copilot Autofix** (about 10 hours ago): Copilot could not generate an autofix suggestion for this alert. Try pushing a new commit or, if the problem persists, contact support.
```ts
}, async (request, reply) => {
  const parsed = tokenRequestSchema.safeParse(request.body);
  if (!parsed.success) {
    return reply.code(400).send({ error: 'invalid_token_request', details: parsed.error.flatten() });
  }

  if (parsed.data.grant_type === 'authorization_code') {
    const authCodeRequest = parsed.data;
    const browserClient = await findOAuthClientWithRedirects(authCodeRequest.client_id);
    if (!browserClient || browserClient.revokedAt || browserClient.clientType !== 'browser') {
      tokenIssuanceTotal.inc({ outcome: 'failure' });
      tokenIssuanceFailuresTotal.inc({ reason: 'invalid_client' });
      return reply.code(403).send({ error: 'invalid_client' });
    }
    const redirectAllowed = browserClient.redirectUris.some((entry: { redirectUri: string }) => entry.redirectUri === authCodeRequest.redirect_uri);
    if (!redirectAllowed) {
      tokenIssuanceTotal.inc({ outcome: 'failure' });
      tokenIssuanceFailuresTotal.inc({ reason: 'invalid_redirect_uri' });
      return reply.code(400).send({ error: 'invalid_redirect_uri' });
    }

    const codeRecord = await oauthAuthorizationCodeStore.findUnique({
      where: {
        codeHash: hashOpaqueToken(parsed.data.code)
      }
    });
    if (!codeRecord || codeRecord.clientId !== browserClient.id) {
      tokenIssuanceTotal.inc({ outcome: 'failure' });
      tokenIssuanceFailuresTotal.inc({ reason: 'invalid_grant' });
      return reply.code(400).send({ error: 'invalid_grant' });
    }
    if (codeRecord.redirectUri !== authCodeRequest.redirect_uri) {
      tokenIssuanceTotal.inc({ outcome: 'failure' });
      tokenIssuanceFailuresTotal.inc({ reason: 'invalid_redirect_uri' });
      return reply.code(400).send({ error: 'invalid_redirect_uri' });
    }
    if (codeRecord.usedAt) {
      tokenIssuanceTotal.inc({ outcome: 'failure' });
      tokenIssuanceFailuresTotal.inc({ reason: 'authorization_code_reused' });
      return reply.code(409).send({ error: 'authorization_code_reused' });
    }
    if (codeRecord.expiresAt <= new Date()) {
      tokenIssuanceTotal.inc({ outcome: 'failure' });
      tokenIssuanceFailuresTotal.inc({ reason: 'authorization_code_expired' });
      return reply.code(400).send({ error: 'authorization_code_expired' });
    }
    if (codeRecord.codeChallengeMethod !== 'S256') {
      tokenIssuanceTotal.inc({ outcome: 'failure' });
      tokenIssuanceFailuresTotal.inc({ reason: 'invalid_code_challenge_method' });
      return reply.code(400).send({ error: 'invalid_code_challenge_method' });
    }
    if (!verifyPkceCodeVerifier(authCodeRequest.code_verifier, codeRecord.codeChallenge)) {
      tokenIssuanceTotal.inc({ outcome: 'failure' });
      tokenIssuanceFailuresTotal.inc({ reason: 'invalid_code_verifier' });
      return reply.code(400).send({ error: 'invalid_code_verifier' });
    }

    const user = await userAccountStore.findUnique({
      where: { id: codeRecord.userId },
      select: {
        id: true,
        disabledAt: true
      }
    });
    if (!user || user.disabledAt) {
      tokenIssuanceTotal.inc({ outcome: 'failure' });
      tokenIssuanceFailuresTotal.inc({ reason: 'user_disabled' });
      return reply.code(403).send({ error: 'user_disabled' });
    }

    const consumeResult = await oauthAuthorizationCodeStore.updateMany({
      where: {
        id: codeRecord.id,
        usedAt: null
      },
      data: {
        usedAt: new Date()
      }
    });
    if (consumeResult.count === 0) {
      tokenIssuanceTotal.inc({ outcome: 'failure' });
      tokenIssuanceFailuresTotal.inc({ reason: 'authorization_code_reused' });
      return reply.code(409).send({ error: 'authorization_code_reused' });
    }

    let issued;
    try {
      issued = await issueAccessToken({
        client: {
          id: browserClient.id,
          clientType: browserClient.clientType,
          scopes: browserClient.scopes,
          plan: 'BROWSER',
          usageLimit: 0,
          usageCount: 0,
          revokedAt: browserClient.revokedAt,
          jwks: null,
          jwksUrl: null
        },
        requestedScope: codeRecord.scope,
        accessTokenConfig: securityConfig.accessTokens,
        subject: user.id,
        additionalClaims: {
          azp: browserClient.id,
          client_id: browserClient.id,
          actor_type: 'user',
          user_id: user.id,
          grant_type: 'authorization_code'
        }
      });
    } catch (error) {
      if (error instanceof Error && error.message === 'invalid_scope') {
        tokenIssuanceTotal.inc({ outcome: 'failure' });
        tokenIssuanceFailuresTotal.inc({ reason: 'invalid_scope' });
        return reply.code(400).send({ error: 'invalid_scope' });
      }
      throw error;
    }

    await prisma.client.update({
      where: { id: browserClient.id },
      data: {
        lastUsedAt: new Date()
      }
    });
    tokenIssuanceTotal.inc({ outcome: 'success' });
    return reply.send({
      access_token: issued.accessToken,
      token_type: 'Bearer',
      expires_in: issued.expiresIn,
      scope: issued.scope,
      client_id: browserClient.id,
      user_id: user.id
    });
  }

  let assertionPayload: Record<string, unknown> | null = null;
  try {
    const [, payloadSegment] = parsed.data.client_assertion.split('.');
    if (!payloadSegment) {
      return reply.code(400).send({ error: 'invalid_client_assertion' });
    }
    assertionPayload = JSON.parse(Buffer.from(payloadSegment, 'base64url').toString('utf8')) as Record<string, unknown>;
  } catch {
    return reply.code(400).send({ error: 'invalid_client_assertion' });
  }

  const clientId = typeof assertionPayload?.iss === 'string' ? assertionPayload.iss.trim() : '';
  if (!clientId) {
    tokenIssuanceTotal.inc({ outcome: 'failure' });
    tokenIssuanceFailuresTotal.inc({ reason: 'client_assertion_missing_iss' });
    return reply.code(400).send({ error: 'client_assertion_missing_iss' });
  }

  const client = await prisma.client.findUnique({ where: { id: clientId } });
  if (!client || client.revokedAt) {
    tokenIssuanceTotal.inc({ outcome: 'failure' });
    tokenIssuanceFailuresTotal.inc({ reason: 'invalid_client' });
    return reply.code(403).send({ error: 'invalid_client' });
  }

  if (client.usageCount >= client.usageLimit) {
    tokenIssuanceTotal.inc({ outcome: 'failure' });
    tokenIssuanceFailuresTotal.inc({ reason: 'usage_limit_exceeded' });
    return reply.code(429).send({
      error: 'usage_limit_exceeded',
      usageLimit: client.usageLimit,
      usageCount: client.usageCount
    });
  }

  let verifiedAssertion;
  try {
    verifiedAssertion = await verifyClientAssertion({
      client,
      clientAssertion: parsed.data.client_assertion,
      tokenAudience: securityConfig.accessTokens.tokenEndpointAudience,
      requestUrl: buildExternalUrl(request, '/api/v1/token')
    });
  } catch {
    tokenIssuanceTotal.inc({ outcome: 'failure' });
    tokenIssuanceFailuresTotal.inc({ reason: 'invalid_client_assertion' });
    return reply.code(403).send({ error: 'invalid_client_assertion' });
  }

  const assertionJti = typeof verifiedAssertion.payload.jti === 'string' ? verifiedAssertion.payload.jti.trim() : '';
  if (!assertionJti) {
    tokenIssuanceTotal.inc({ outcome: 'failure' });
    tokenIssuanceFailuresTotal.inc({ reason: 'client_assertion_missing_jti' });
    return reply.code(400).send({ error: 'client_assertion_missing_jti' });
  }

  const assertionExpiresAt =
    typeof verifiedAssertion.payload.exp === 'number'
      ? new Date(verifiedAssertion.payload.exp * 1000)
      : new Date(Date.now() + 5 * 60 * 1000);

  let assertionReserved = false;
  try {
    assertionReserved = await assertionReplayStore.reserve({
      clientId: client.id,
      jti: assertionJti,
      expiresAt: assertionExpiresAt
    });
  } catch (error) {
    request.log.error(
      {
        clientId: client.id,
        error: error instanceof Error ? error.message : 'unknown_error'
      },
      'assertion replay store unavailable'
    );
    tokenIssuanceTotal.inc({ outcome: 'failure' });
    tokenIssuanceFailuresTotal.inc({ reason: 'assertion_replay_store_unavailable' });
    return reply.code(503).send({ error: 'assertion_replay_store_unavailable' });
  }

  if (!assertionReserved) {
    tokenIssuanceTotal.inc({ outcome: 'failure' });
    tokenIssuanceFailuresTotal.inc({ reason: 'client_assertion_replayed' });
    tokenReplayRejectionsTotal.inc();
    return reply.code(409).send({ error: 'client_assertion_replayed' });
  }

  let issued;
  try {
    issued = await issueAccessToken({
      client,
      requestedScope: parsed.data.scope,
      accessTokenConfig: securityConfig.accessTokens
    });
  } catch (error) {
    if (error instanceof Error && error.message === 'invalid_scope') {
      tokenIssuanceTotal.inc({ outcome: 'failure' });
      tokenIssuanceFailuresTotal.inc({ reason: 'invalid_scope' });
      return reply.code(400).send({ error: 'invalid_scope' });
    }
    throw error;
  }

  tokenIssuanceTotal.inc({ outcome: 'success' });

  return reply.send({
    access_token: issued.accessToken,
    token_type: 'Bearer',
    expires_in: issued.expiresIn,
    scope: issued.scope,
    client_id: client.id
  });
});
```
**Check failure** — Code scanning / CodeQL: Missing rate limiting (High)

**Copilot Autofix** (about 14 hours ago):
In general, to fix missing rate limiting on a security‑sensitive route in a Fastify app, you ensure that: (1) the @fastify/rate-limit plugin is registered on the Fastify instance; and (2) the route is configured with an appropriate config.rateLimit or included within a rate‑limited scope. For consistency and least surprise, it’s best to reuse an existing, well‑vetted rate‑limit policy (such as perApiKeyRateLimit) that is already used for other OAuth‑related endpoints in this file.
For this specific code, the /api/v1/token route already has a custom, per‑IP rate limit configuration. Other OAuth endpoints nearby (/api/v1/introspect, /api/v1/clients/:clientId/revoke) use config: { rateLimit: perApiKeyRateLimit }, which is likely the intended policy for authenticated API clients. To fix the CodeQL finding and align with existing security patterns without changing semantics elsewhere, we should replace the ad‑hoc rateLimit configuration on /api/v1/token with perApiKeyRateLimit. This keeps behavior coherent and guarantees that the handler is clearly rate‑limited in the same way as related authorization endpoints.
Concretely:
- In
apps/api/src/server.ts, locate theapp.post('/api/v1/token', { config: { rateLimit: { ... } } }, ...)definition around line 2133. - Replace the inner
rateLimitobject (max/timeWindow/keyGenerator) withperApiKeyRateLimit, matching how/api/v1/introspectand/api/v1/clients/:clientId/revokeare configured. - No new imports or helpers are needed because
perApiKeyRateLimitis already in scope and used in this file.
```diff
@@ -2132,11 +2132,7 @@

 app.post('/api/v1/token', {
   config: {
-    rateLimit: {
-      max: 30,
-      timeWindow: '1 minute',
-      keyGenerator: (request) => request.ip
-    }
+    rateLimit: perApiKeyRateLimit
   }
 }, async (request, reply) => {
   const parsed = tokenRequestSchema.safeParse(request.body);
```
```ts
}, async (request, reply) => {
  const parsed = tokenIntrospectionSchema.safeParse(request.body);
  if (!parsed.success) {
    return reply.code(400).send({ error: 'invalid_token_introspection_request', details: parsed.error.flatten() });
  }

  try {
    const payload = await verifyAccessTokenPayload(parsed.data.token, securityConfig.accessTokens);
    const subject = typeof payload.sub === 'string' ? payload.sub : '';
    const clientId =
      typeof payload.azp === 'string'
        ? payload.azp
        : typeof payload.client_id === 'string'
          ? payload.client_id
          : payload.actor_type === 'user'
            ? ''
            : subject;
    const userId = payload.actor_type === 'user' ? subject : typeof payload.user_id === 'string' ? payload.user_id : undefined;
    if (!subject || !clientId) {
      return reply.send({ active: false });
    }

    if (
      request.authContext?.principalType === 'access_token' &&
      request.authContext.clientId &&
      request.authContext.clientId !== clientId
    ) {
      return reply.code(403).send({ error: 'client_mismatch' });
    }

    const client = await prisma.client.findUnique({
      where: { id: clientId },
      select: {
        revokedAt: true
      }
    });
    let active = Boolean(client && !client.revokedAt);
    if (active && userId) {
      const user = await userAccountStore.findUnique({
        where: { id: userId },
        select: {
          disabledAt: true
        }
      });
      active = Boolean(user && !user.disabledAt);
    }

    return reply.send({
      active,
      client_id: clientId,
      user_id: userId,
      scope: typeof payload.scope === 'string' ? payload.scope : '',
      grant_type: typeof payload.grant_type === 'string' ? payload.grant_type : undefined,
      actor_type: typeof payload.actor_type === 'string' ? payload.actor_type : undefined,
      token_type: 'Bearer',
      iss: typeof payload.iss === 'string' ? payload.iss : securityConfig.accessTokens.issuer,
      aud: payload.aud,
      exp: typeof payload.exp === 'number' ? payload.exp : undefined,
      iat: typeof payload.iat === 'number' ? payload.iat : undefined,
      sub: subject
    });
  } catch {
    return reply.send({ active: false });
  }
});
```
**Check failure** — Code scanning / CodeQL: Missing rate limiting (High)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 0ef74a34bc
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```ts
app.get('/api/v1/clients', {
  config: {
    rateLimit: {
      max: 30,
      timeWindow: '1 minute',
      keyGenerator: (request) => request.ip
    }
  }
```
Require auth on client lookup endpoint
This route is publicly reachable and returns client IDs/scopes for any userEmail query, so an unauthenticated caller can enumerate another tenant’s registered clients just by knowing or guessing an email address. Given that other client-management routes enforce API-key ownership checks, leaving GET /api/v1/clients unauthenticated creates an avoidable data-exposure path and weakens least-privilege controls.
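A minimal sketch of the suggested ownership check, assuming a hypothetical `canListClientsFor` helper and an auth context carrying a `userEmail` field (neither name is confirmed by the diff): the route would only return clients whose `userEmail` matches the authenticated caller, rejecting unauthenticated requests outright.

```typescript
// Hypothetical guard for GET /api/v1/clients: a caller may only list
// clients registered under the email bound to their own principal.
interface AuthContext {
  principalType: 'api_key' | 'access_token';
  userEmail?: string;
}

function canListClientsFor(auth: AuthContext | undefined, requestedEmail: string): boolean {
  if (!auth || !auth.userEmail) {
    return false; // unauthenticated callers are rejected outright
  }
  // Compare case-insensitively so alice@Example.com cannot dodge the check.
  return auth.userEmail.toLowerCase() === requestedEmail.toLowerCase();
}
```

The route handler would call this before querying, returning 401/403 instead of the client list when it yields `false`.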
```bash
# Base URL: update API_URL to match your environment
# =============================================================

API_URL="https://api.trustsignal.dev"
```
Remove committed API key from test script
A concrete API key value is hardcoded in-repo, which risks credential leakage and accidental unauthorized use if the token is valid in any environment. Even if intended for testing, committing a real-looking secret violates secret-handling controls and should be replaced with an environment variable or placeholder so no sensitive token is stored in source history.
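One way to act on this, sketched with a hypothetical `requireEnv` helper: the script reads the key from the environment and fails fast with a clear message when it is missing, so no token ever lands in source history.

```typescript
// Hypothetical replacement for a hardcoded API key in a test script:
// read the value from the environment and fail fast when it is absent.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value || value.trim() === '') {
    throw new Error(`${name} is not set; export it before running the test script`);
  }
  return value;
}
```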
```ts
  requestedScope: codeRecord.scope,
  accessTokenConfig: securityConfig.accessTokens,
  subject: user.id,
```
Provide access-token config before issuing tokens
The token endpoint now passes securityConfig.accessTokens into issueAccessToken, but buildSecurityConfig (in apps/api/src/security.ts) does not construct an accessTokens field, so this value is undefined. As soon as a valid OAuth flow reaches token issuance, issueAccessToken dereferences input.accessTokenConfig.current and fails, turning successful authorization-code/client-credentials exchanges into server errors.
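A sketch of what the missing `accessTokens` section of `buildSecurityConfig` might look like. The field names (`issuer`, `tokenEndpointAudience`, `ttlSeconds`) are assumptions inferred from how the token endpoint reads `securityConfig.accessTokens`, and the environment-variable names are hypothetical:

```typescript
// Assumed shape for the accessTokens config the token endpoint dereferences;
// env var names below are placeholders, not confirmed by the diff.
interface AccessTokenConfig {
  issuer: string;
  tokenEndpointAudience: string;
  ttlSeconds: number;
}

function buildAccessTokenConfig(env: Record<string, string | undefined>): AccessTokenConfig {
  return {
    issuer: env.ACCESS_TOKEN_ISSUER ?? 'https://api.example.invalid',
    tokenEndpointAudience: env.TOKEN_ENDPOINT_AUDIENCE ?? 'https://api.example.invalid/api/v1/token',
    ttlSeconds: Number(env.ACCESS_TOKEN_TTL_SECONDS ?? 3600)
  };
}
```

Wiring the result into `buildSecurityConfig` as `accessTokens` would remove the `undefined` dereference the comment describes.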
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 5734649049
```ts
  read: [requireApiKeyScope(securityConfig, 'read'), requireActiveClient],
  verify: [requireApiKeyScope(securityConfig, 'verify'), requireActiveClient],
  anchor: [requireApiKeyScope(securityConfig, 'anchor'), requireActiveClient],
  revoke: [requireApiKeyScope(securityConfig, 'revoke'), requireActiveClient]
```
Accept bearer tokens in protected-route auth chain
The new OAuth flow issues Bearer access tokens, but all protected endpoints are still guarded by authPreHandlers that call requireApiKeyScope, which only authenticates x-api-key and never inspects Authorization headers (see apps/api/src/security.ts). In practice, clients can successfully mint tokens from /api/v1/token but cannot use them on routes like /api/v1/verify or /api/v1/receipts, so the new token-based auth path is nonfunctional until bearer-token validation is wired into these prehandlers.
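A minimal sketch of the credential selection such a prehandler would need, assuming a hypothetical `extractCredential` helper (the real fix would then validate the bearer token with the same verifier the introspection endpoint uses):

```typescript
// Hypothetical selector: accept either an x-api-key header or an
// Authorization: Bearer token, so both auth paths reach the prehandlers.
type Credential =
  | { kind: 'api_key'; value: string }
  | { kind: 'bearer'; value: string }
  | null;

function extractCredential(headers: Record<string, string | undefined>): Credential {
  const apiKey = headers['x-api-key'];
  if (apiKey) {
    return { kind: 'api_key', value: apiKey };
  }
  const auth = headers['authorization'];
  if (auth && auth.toLowerCase().startsWith('bearer ')) {
    return { kind: 'bearer', value: auth.slice(7).trim() };
  }
  return null; // neither credential present: reject with 401
}
```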
```ts
  }

  if (client.jwksUrl) {
    return createRemoteJWKSet(new URL(client.jwksUrl));
```
Constrain remote JWKS URLs to prevent SSRF
Client assertions are verified with createRemoteJWKSet(new URL(client.jwksUrl)), which causes server-side HTTP fetches to a URL controlled by client registration data. Since this commit also allows self-service client registration with jwksUrl, an attacker can register internal/private endpoints as JWKS and force outbound requests during token exchange, creating an SSRF path. Restrict jwksUrl to trusted HTTPS domains (or require uploaded JWKS) before using remote key sets.
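A sketch of the kind of guard this implies, assuming a hypothetical host allowlist: require `https` and an approved hostname before the URL is ever handed to `createRemoteJWKSet`.

```typescript
// SSRF guard for client-supplied JWKS URLs: only https URLs whose host is
// on an explicit allowlist may be fetched. The allowlist is an assumption;
// an alternative is to require uploaded JWKS documents instead.
function isAllowedJwksUrl(raw: string, allowedHosts: string[]): boolean {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return false; // unparseable input is rejected
  }
  if (url.protocol !== 'https:') {
    return false; // plain http (and file:, gopher:, etc.) are rejected
  }
  return allowedHosts.includes(url.hostname);
}
```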
```ts
  if (requestedScopes.size === 0) {
    return Array.from(allowedScopes).join(' ');
```
Reject unknown requested scopes instead of over-granting
resolveGrantedScopes currently treats any parsed-empty requested scope set as equivalent to “no scope requested” and grants all allowed scopes. Because parseScopeSet drops unsupported scope tokens, requests like scope=foo can unintentionally receive full default scopes instead of invalid_scope, which over-grants permissions on malformed/typoed OAuth requests and breaks least-privilege behavior.
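The stricter behavior this comment asks for can be sketched as follows; `resolveGrantedScopesStrict` is a hypothetical variant, not the project's actual function:

```typescript
// Strict scope resolution: an empty request still grants the defaults,
// but any unknown token (e.g. a typo like "foo") raises invalid_scope
// instead of being silently dropped and over-granted.
function resolveGrantedScopesStrict(requested: string, allowed: Set<string>): string {
  const tokens = requested.trim().split(/\s+/).filter(Boolean);
  if (tokens.length === 0) {
    return Array.from(allowed).join(' '); // genuinely no scope requested
  }
  for (const token of tokens) {
    if (!allowed.has(token)) {
      throw new Error('invalid_scope'); // reject rather than ignore
    }
  }
  return tokens.join(' ');
}
```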
Pull request overview

Copilot reviewed 67 out of 71 changed files in this pull request and generated 14 comments.
Source: `/Users/christopher/Projects/TSREPO/trustsignal/coverage/coverage-summary.json`

| Metric | Result |
| --- | --- |
| Statements | `99.34%` |
| Lines | `99.34%` |
| Functions | `100%` |
| Branches | `93.33%` |

## Security scan results

### Secret scanning

- `bash scripts/history-secret-scan.sh`: PASS
- `gitleaks git . --redact --no-banner`: PASS
- Local gitleaks output: `163 commits scanned`, `no leaks found`

### Dependency vulnerability scanning

- `npm audit --omit=dev --audit-level=high`: FAIL
- Current finding:
  - `next` reported `1` moderate severity vulnerability set, including request smuggling, unbounded cache growth, buffering DoS, and CSRF-related advisories per npm advisory feed.

### Dependency inventory

- `npm ls --omit=dev --all --json` produced a machine-readable inventory at `/tmp/kpmg-readiness-20260319T191111Z/10-dependency-inventory.json`
- npm also reported `invalid: ws@8.17.1` on stderr. This should be triaged before diligence packaging.

## Notable hardening added during this run

Minimal, audit-focused changes made to improve evidence quality:

1. Added [health-endpoints.test.ts](/Users/christopher/Projects/TSREPO/trustsignal/apps/api/src/health-endpoints.test.ts) to prove `/api/v1/health` and `/api/v1/status` do not expose raw database initialization errors in public responses or logs.
2. Hardened [server.ts](/Users/christopher/Projects/TSREPO/trustsignal/apps/api/src/server.ts) so database initialization failures expose `database_initialization_failed` rather than raw connection strings.
3. Restored production fail-fast behavior in [registryLoader.ts](/Users/christopher/Projects/TSREPO/trustsignal/apps/api/src/registryLoader.ts) for missing `TRUST_REGISTRY_PUBLIC_KEY`.
4. Isolated registry adapter test state in [registry-adapters.test.ts](/Users/christopher/Projects/TSREPO/trustsignal/apps/api/src/registry-adapters.test.ts) by clearing registry cache and job tables before the suite.
5. Added a minimal no-op workspace build script in [package.json](/Users/christopher/Projects/TSREPO/trustsignal/apps/watcher/package.json) so root workspace build evidence is reproducible.
This report embeds absolute local paths (e.g., /Users/christopher/Projects/...) in multiple places. These paths aren’t portable and expose local environment details; please convert them to repo-relative paths (or remove them) so the document remains usable in other environments.
```ts
  return prisma.registryOracleJob.create({
    data: {
      sourceId: seed.id,
      subjectHash: createHash('sha256').update(`snapshot:${seed.id}`).digest('hex'),
      zkCircuit: seed.zkCircuit,
      inputCommitment: createHash('sha256').update(`${seed.id}:${Date.now()}`).digest('hex'),
      jobType: 'INGEST',
      status: 'QUEUED',
      resultStatus: null
    } as never
  });
```
createIngestJob also uses as never for the Prisma create payload, which disables compile-time validation for required/optional fields. Please replace this with a properly typed Prisma input (or fix client generation) so schema changes don’t silently break job creation.
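A sketch of the typed alternative: an explicit input type stands in for the Prisma-generated one here so the shape is compiler-checked (in the real fix you would use `Prisma.RegistryOracleJobUncheckedCreateInput` from the generated client rather than a hand-written interface).

```typescript
// Hand-written stand-in for the generated Prisma input type; field names
// mirror the snippet above. Using any explicit type instead of `as never`
// restores compile-time validation of required/optional fields.
interface RegistryOracleJobCreateInput {
  sourceId: string;
  subjectHash: string;
  zkCircuit: string;
  inputCommitment: string;
  jobType: 'INGEST';
  status: 'QUEUED';
  resultStatus: string | null;
}

function buildIngestJobData(
  seed: { id: string; zkCircuit: string },
  hash: (s: string) => string // injected so the builder is easy to test
): RegistryOracleJobCreateInput {
  return {
    sourceId: seed.id,
    subjectHash: hash(`snapshot:${seed.id}`),
    zkCircuit: seed.zkCircuit,
    inputCommitment: hash(`${seed.id}:${Date.now()}`),
    jobType: 'INGEST',
    status: 'QUEUED',
    resultStatus: null
  };
}
```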
```ts
  async reserve(input: ReplayReservationInput): Promise<boolean> {
    const key = `trustsignal:assertion-jti:${input.clientId}:${hashAssertionJti(input.clientId, input.jti)}`;
    const ttlSeconds = Math.max(1, Math.ceil((input.expiresAt.getTime() - Date.now()) / 1000));
    const connection = parseRedisUrl(this.redisUrl);
    const socket = await connectRedis(connection);

    try {
      if (connection.password) {
        if (connection.username) {
          await sendRedisCommand(socket, ['AUTH', connection.username, connection.password]);
        } else {
          await sendRedisCommand(socket, ['AUTH', connection.password]);
        }
      }

      if (typeof connection.database === 'number' && Number.isFinite(connection.database)) {
        await sendRedisCommand(socket, ['SELECT', String(connection.database)]);
      }

      const reply = await sendRedisCommand(socket, ['SET', key, '1', 'EX', String(ttlSeconds), 'NX']);
      return reply === 'OK';
    } finally {
      socket.end();
      socket.destroy();
    }
```
RedisAssertionReplayStore.reserve() opens a new TCP/TLS connection to Redis for every reservation, authenticates/selects DB, issues a single command, and then destroys the socket. At moderate QPS this will be slow and can exhaust Redis/client resources. Please use a real Redis client with connection pooling (or keep a persistent connection) and add timeouts/retry/backoff behavior appropriate for auth-path dependencies.
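The reservation contract itself (`SET key value NX EX ttl`) is independent of the connection strategy, which is what makes a pooled client a drop-in change. A minimal sketch of that contract, with an in-memory map standing in for a persistent Redis connection (the class name and shape are illustrative, not from the diff):

```typescript
// In-memory model of SET..NX..EX reservation semantics: a key may be
// reserved once per TTL window; re-reserving before expiry fails.
// A pooled Redis client would implement the same contract over one
// long-lived connection instead of a TCP handshake per call.
class InMemoryReplayStore {
  private entries = new Map<string, number>(); // key -> expiry, epoch ms

  reserve(key: string, ttlSeconds: number, now: number = Date.now()): boolean {
    const expiry = this.entries.get(key);
    if (expiry !== undefined && expiry > now) {
      return false; // NX: key already reserved and not yet expired
    }
    this.entries.set(key, now + ttlSeconds * 1000); // EX: time-boxed hold
    return true;
  }
}
```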
```bash
API_URL="https://api.trustsignal.dev"
API_KEY="d4a2bd92be56c54905a99f2b5709e14064e9eaeb99c44aa74898125aedf5028a"
```
This script commits what looks like a real API key (API_KEY=...). Even for test suites, secrets must not be checked into the repo. Please change this to read from an environment variable (or a local .env file) and update the script to fail with a clear message when the key is missing; also rotate/revoke the exposed key if it was valid.
- `POST /api/v1/clients`
- `POST /api/v1/auth/register`
- `GET /api/v1/auth/login`
- `POST /api/v1/auth/login`
- `POST /api/v1/auth/logout`
- `GET /api/v1/oauth/authorize`
- `POST /api/v1/oauth/authorize/consent`
- `GET /api/v1/clients/:clientId/keys`
- `POST /api/v1/clients/:clientId/keys`
- `DELETE /api/v1/clients/:clientId/keys/:kid`
- `POST /api/v1/clients/:clientId/revoke`
- `POST /api/v1/token`
- `POST /api/v1/introspect`
- `POST /api/v1/verify`
- `GET /api/v1/receipt/:receiptId`
- `GET /api/v1/receipt/:receiptId/pdf`
This PR is labeled/docs-described as having “No code-path or runtime behavior changes”, but it includes substantial runtime changes (new auth/token/OAuth flows, DB schema/migrations, rate limiting, registry adapter behavior, Rust proof generation changes, etc.). Please update the PR description/scope or split this into separate PRs so reviewers can assess the behavioral changes appropriately.
```ts
`CREATE TABLE IF NOT EXISTS "Client" (
  "id" TEXT PRIMARY KEY,
  "name" TEXT,
  "userEmail" TEXT,
  "clientType" TEXT NOT NULL DEFAULT 'machine',
  "scopes" TEXT NOT NULL DEFAULT 'verify read',
  "jwks" JSONB,
  "jwksUrl" TEXT,
  "ownerUserId" TEXT REFERENCES "UserAccount"("id") ON DELETE SET NULL,
  "subscriptionId" TEXT,
  "plan" TEXT NOT NULL DEFAULT 'FREE',
  "usageLimit" INTEGER NOT NULL DEFAULT 100,
  "usageCount" INTEGER NOT NULL DEFAULT 0,
  "createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
  "createdBy" TEXT NOT NULL DEFAULT 'self-service',
  "revokedAt" TIMESTAMP(3),
  "lastUsedAt" TIMESTAMP(3)
)`,
`CREATE TABLE IF NOT EXISTS "ClientAssertionNonce" (
  "id" TEXT PRIMARY KEY,
  "clientId" TEXT NOT NULL REFERENCES "Client"("id") ON DELETE CASCADE,
  "jtiHash" TEXT NOT NULL,
  "expiresAt" TIMESTAMP(3) NOT NULL,
  "createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP
)`,
`CREATE TABLE IF NOT EXISTS "ClientRedirectUri" (
  "id" TEXT PRIMARY KEY,
  "clientId" TEXT NOT NULL REFERENCES "Client"("id") ON DELETE CASCADE,
  "redirectUri" TEXT NOT NULL,
  "createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP
)`,
`CREATE TABLE IF NOT EXISTS "UserAccount" (
  "id" TEXT PRIMARY KEY,
  "email" TEXT NOT NULL,
  "displayName" TEXT,
  "passwordHash" TEXT NOT NULL DEFAULT '',
  "passwordSalt" TEXT NOT NULL DEFAULT '',
  "disabledAt" TIMESTAMP(3),
  "createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
  "updatedAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
  "lastLoginAt" TIMESTAMP(3)
)`,
`CREATE TABLE IF NOT EXISTS "BrowserSession" (
  "id" TEXT PRIMARY KEY,
  "userId" TEXT NOT NULL REFERENCES "UserAccount"("id") ON DELETE CASCADE,
  "sessionTokenHash" TEXT NOT NULL,
  "createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
  "lastUsedAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
  "expiresAt" TIMESTAMP(3) NOT NULL,
  "revokedAt" TIMESTAMP(3)
)`,
`CREATE TABLE IF NOT EXISTS "OAuthConsentGrant" (
  "id" TEXT PRIMARY KEY,
  "clientId" TEXT NOT NULL REFERENCES "Client"("id") ON DELETE CASCADE,
  "userId" TEXT NOT NULL REFERENCES "UserAccount"("id") ON DELETE CASCADE,
  "grantedScopes" TEXT NOT NULL,
  "createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
  "updatedAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
  "revokedAt" TIMESTAMP(3)
)`,
`CREATE TABLE IF NOT EXISTS "OAuthAuthorizationRequest" (
  "id" TEXT PRIMARY KEY,
  "clientId" TEXT NOT NULL REFERENCES "Client"("id") ON DELETE CASCADE,
  "userId" TEXT REFERENCES "UserAccount"("id") ON DELETE CASCADE,
  "redirectUri" TEXT NOT NULL,
  "scope" TEXT NOT NULL,
  "state" TEXT,
  "codeChallenge" TEXT NOT NULL,
  "codeChallengeMethod" TEXT NOT NULL DEFAULT 'S256',
  "expiresAt" TIMESTAMP(3) NOT NULL,
  "createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
  "consumedAt" TIMESTAMP(3)
)`,
`CREATE TABLE IF NOT EXISTS "OAuthAuthorizationCode" (
  "id" TEXT PRIMARY KEY,
  "clientId" TEXT NOT NULL REFERENCES "Client"("id") ON DELETE CASCADE,
  "userId" TEXT NOT NULL REFERENCES "UserAccount"("id") ON DELETE CASCADE,
  "codeHash" TEXT NOT NULL,
  "redirectUri" TEXT NOT NULL,
  "scope" TEXT NOT NULL,
  "codeChallenge" TEXT NOT NULL,
  "codeChallengeMethod" TEXT NOT NULL DEFAULT 'S256',
  "createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
  "expiresAt" TIMESTAMP(3) NOT NULL,
  "usedAt" TIMESTAMP(3)
)`,
```
This file now defines significant schema (Client/OAuth/etc.) via runtime CREATE TABLE/ALTER TABLE statements, while the same schema is also introduced via Prisma migrations in this PR. Maintaining both as sources of truth is likely to drift over time and can create subtle environment differences (depending on whether migrations run). Consider consolidating on one mechanism (preferably migrations) or generating the runtime bootstrap from the migration history to guarantee they stay identical.
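One way to keep a single source of truth is to derive the runtime bootstrap from the migration history instead of hand-maintaining parallel DDL. A sketch, assuming Prisma's standard `prisma/migrations/<timestamp>_<name>/migration.sql` layout (the function name is hypothetical):

```typescript
import { readdirSync, readFileSync } from 'node:fs';
import { join } from 'node:path';

// Collect migration SQL in the order Prisma applies it, so a runtime bootstrap
// executes exactly the statements the migration history defines and cannot drift.
function collectMigrationSql(migrationsDir: string): string[] {
  return readdirSync(migrationsDir, { withFileTypes: true })
    .filter((entry) => entry.isDirectory())
    .map((entry) => entry.name)
    // Prisma prefixes each directory with a timestamp, so lexical sort = apply order.
    .sort()
    .map((name) => readFileSync(join(migrationsDir, name, 'migration.sql'), 'utf8'));
}
```

The bootstrap path could then execute each returned script in order, rather than maintaining its own CREATE TABLE statements.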
- `/Users/christopher/Projects/TSREPO/trustsignal/coverage/coverage-summary.json`

## In-repo supporting documents

- [Enterprise audit runbook](/Users/christopher/Projects/TSREPO/trustsignal/docs/compliance/kpmg-enterprise-audit-runbook.md)
- [Validation report](/Users/christopher/Projects/TSREPO/trustsignal/docs/compliance/kpmg-enterprise-readiness-validation-report.md)
- [Security checklist](/Users/christopher/Projects/TSREPO/trustsignal/SECURITY_CHECKLIST.md)
- [AI guardrails](/Users/christopher/Projects/TSREPO/trustsignal/docs/ai-guardrails.md)
- [SOC 2 readiness report](/Users/christopher/Projects/TSREPO/trustsignal/docs/compliance/soc2/readiness-report.md)
- [SOC 2 readiness checklist](/Users/christopher/Projects/TSREPO/trustsignal/docs/compliance/soc2/readiness-checklist.md)
- [Security posture summary](/Users/christopher/Projects/TSREPO/trustsignal/docs/compliance/security-posture.md)
- [Incident response policy](/Users/christopher/Projects/TSREPO/trustsignal/docs/compliance/policies/incident-response-policy.md)
This doc includes absolute local filesystem paths (e.g., /Users/christopher/...) which won’t resolve for other readers and leaks local machine details into the repo. Please replace these with repo-relative paths/links (or remove them) and keep any machine-specific artifact locations out of committed documentation.
```ts
await this.prisma.clientAssertionNonce.deleteMany({
  where: {
    expiresAt: {
      lt: new Date()
    }
  }
});
```
PrismaAssertionReplayStore.reserve() deletes all expired nonces on every call. Even with an index, this adds an extra write query to a latency-sensitive token path and can create avoidable DB churn. Consider moving cleanup to a periodic job (or doing opportunistic cleanup with a low probability / bounded batch size).
Suggested change:

```diff
-await this.prisma.clientAssertionNonce.deleteMany({
-  where: {
-    expiresAt: {
-      lt: new Date()
-    }
-  }
-});
+// Opportunistic cleanup: avoid running a write-heavy delete on every call
+// to keep the latency-sensitive token path fast. Expired nonces are still
+// cleaned up over time without adding churn to every request.
+if (Math.random() < 0.01) {
+  await this.prisma.clientAssertionNonce.deleteMany({
+    where: {
+      expiresAt: {
+        lt: new Date()
+      }
+    }
+  });
+}
```
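Alternatively, cleanup can move off the request path entirely into a periodic job with a bounded batch size. A sketch (function names, interval, and batch size are illustrative; `deleteExpiredBatch` would wrap something like a LIMITed raw DELETE via Prisma):

```typescript
// Drain expired nonces in bounded batches so no single delete statement
// touches an unbounded number of rows. `deleteExpiredBatch` deletes up to
// `batchSize` expired rows and returns how many it actually removed.
async function drainExpired(
  deleteExpiredBatch: (batchSize: number) => Promise<number>,
  batchSize: number
): Promise<number> {
  let total = 0;
  let removed: number;
  do {
    removed = await deleteExpiredBatch(batchSize);
    total += removed;
    // A full batch means there may be more backlog; keep going until a short batch.
  } while (removed === batchSize);
  return total;
}

// Run the drain on a timer instead of inside the token path; returns a stop handle.
function startNonceCleanup(
  deleteExpiredBatch: (batchSize: number) => Promise<number>,
  { intervalMs = 60_000, batchSize = 1_000 } = {}
): () => void {
  const timer = setInterval(() => {
    void drainExpired(deleteExpiredBatch, batchSize);
  }, intervalMs);
  return () => clearInterval(timer);
}
```

This keeps every token request at zero extra write queries while still bounding how long expired rows linger.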
```diff
-        uses: actions/setup-node@v6
+        uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
         with:
           node-version: 22
```
The root package.json declares engines.node = 20.x, but this CI job sets node-version: 22. This can introduce hard-to-debug differences between CI and supported runtime (and may produce engine warnings or break builds). Please align this to Node 20 unless there’s a specific reason to test Node 22 here (in which case, consider testing both explicitly).
Suggested change:

```diff
-          node-version: 22
+          node-version: '20'
```
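If testing Node 22 is intentional, a build matrix makes that explicit while still covering the supported runtime. A sketch of the relevant job fragment (the step context is assumed from the diff above):

```yaml
strategy:
  matrix:
    node-version: ['20', '22']   # '20' matches engines.node; '22' is forward-compat coverage
steps:
  - uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
    with:
      node-version: ${{ matrix.node-version }}
```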
```sql
  "passwordHash" TEXT NOT NULL DEFAULT '',
  "passwordSalt" TEXT NOT NULL DEFAULT '',
```
UserAccount.passwordHash/passwordSalt are created with NOT NULL DEFAULT '' here, but the Prisma migration adds these as required fields without defaults. Defaulting to empty strings can permit creating user records without a real password hash/salt and makes the runtime-bootstrapped schema diverge from the migrated schema. Please align the DDL with the Prisma schema/migrations and avoid insecure defaults for credential fields.
Suggested change:

```diff
-  "passwordHash" TEXT NOT NULL DEFAULT '',
-  "passwordSalt" TEXT NOT NULL DEFAULT '',
+  "passwordHash" TEXT NOT NULL,
+  "passwordSalt" TEXT NOT NULL,
```
Standardizes canonical public messaging language and claims-boundary framing across the README and boundary/security docs.

Scope:
- README product-intro canonicalization
- public security summary opening-section normalization
- public/private boundary responsibility wording update

No code-path or runtime behavior changes.