From bf962dd44fddbb5e2f2accf8981b308c27542a61 Mon Sep 17 00:00:00 2001 From: Sam Carson Date: Wed, 8 Apr 2026 05:33:27 -0500 Subject: [PATCH 1/7] Merge dev to main: GCM AAD binding, mock type safety, dependency updates (#84) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * chore(deps): bump github.com/go-jose/go-jose/v4 from 4.1.3 to 4.1.4 (#71) Bumps [github.com/go-jose/go-jose/v4](https://github.com/go-jose/go-jose) from 4.1.3 to 4.1.4. - [Release notes](https://github.com/go-jose/go-jose/releases) - [Commits](https://github.com/go-jose/go-jose/compare/v4.1.3...v4.1.4) --- updated-dependencies: - dependency-name: github.com/go-jose/go-jose/v4 dependency-version: 4.1.4 dependency-type: indirect ... Signed-off-by: dependabot[bot] Co-authored-by: Sam Carson Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump defu from 6.1.4 to 6.1.6 in /web (#72) Bumps [defu](https://github.com/unjs/defu) from 6.1.4 to 6.1.6. - [Release notes](https://github.com/unjs/defu/releases) - [Changelog](https://github.com/unjs/defu/blob/main/CHANGELOG.md) - [Commits](https://github.com/unjs/defu/compare/v6.1.4...v6.1.6) --- updated-dependencies: - dependency-name: defu dependency-version: 6.1.6 dependency-type: indirect ... Signed-off-by: dependabot[bot] Co-authored-by: Sam Carson Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump picomatch in /web (#66) Bumps [picomatch](https://github.com/micromatch/picomatch). These dependencies needed to be updated together. 
Updates `picomatch` from 4.0.3 to 4.0.4 - [Release notes](https://github.com/micromatch/picomatch/releases) - [Changelog](https://github.com/micromatch/picomatch/blob/master/CHANGELOG.md) - [Commits](https://github.com/micromatch/picomatch/compare/4.0.3...4.0.4) Updates `picomatch` from 2.3.1 to 2.3.2 - [Release notes](https://github.com/micromatch/picomatch/releases) - [Changelog](https://github.com/micromatch/picomatch/blob/master/CHANGELOG.md) - [Commits](https://github.com/micromatch/picomatch/compare/2.3.1...2.3.2) --- updated-dependencies: - dependency-name: picomatch dependency-version: 4.0.4 dependency-type: indirect - dependency-name: picomatch dependency-version: 2.3.2 dependency-type: indirect ... Signed-off-by: dependabot[bot] Co-authored-by: Sam Carson Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump github.com/danielgtaylor/huma/v2 from 2.37.2 to 2.37.3 (#69) Bumps [github.com/danielgtaylor/huma/v2](https://github.com/danielgtaylor/huma) from 2.37.2 to 2.37.3. - [Release notes](https://github.com/danielgtaylor/huma/releases) - [Commits](https://github.com/danielgtaylor/huma/compare/v2.37.2...v2.37.3) --- updated-dependencies: - dependency-name: github.com/danielgtaylor/huma/v2 dependency-version: 2.37.3 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump google.golang.org/genai from 1.50.0 to 1.52.0 (#70) Bumps [google.golang.org/genai](https://github.com/googleapis/go-genai) from 1.50.0 to 1.52.0. 
- [Release notes](https://github.com/googleapis/go-genai/releases) - [Changelog](https://github.com/googleapis/go-genai/blob/v1.52.0/CHANGELOG.md) - [Commits](https://github.com/googleapis/go-genai/compare/v1.50.0...v1.52.0) --- updated-dependencies: - dependency-name: google.golang.org/genai dependency-version: 1.52.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump github.com/lib/pq from 1.11.2 to 1.12.0 (#60) Bumps [github.com/lib/pq](https://github.com/lib/pq) from 1.11.2 to 1.12.0. - [Release notes](https://github.com/lib/pq/releases) - [Changelog](https://github.com/lib/pq/blob/master/CHANGELOG.md) - [Commits](https://github.com/lib/pq/compare/v1.11.2...v1.12.0) --- updated-dependencies: - dependency-name: github.com/lib/pq dependency-version: 1.12.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump actions/setup-go from 6.3.0 to 6.4.0 (#68) Bumps [actions/setup-go](https://github.com/actions/setup-go) from 6.3.0 to 6.4.0. - [Release notes](https://github.com/actions/setup-go/releases) - [Commits](https://github.com/actions/setup-go/compare/4b73464bb391d4059bd26b0524d20df3927bd417...4a3601121dd01d1626a1e23e37211e3254c1c06c) --- updated-dependencies: - dependency-name: actions/setup-go dependency-version: 6.4.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump @tailwindcss/vite from 4.2.1 to 4.2.2 in /web (#65) Bumps [@tailwindcss/vite](https://github.com/tailwindlabs/tailwindcss/tree/HEAD/packages/@tailwindcss-vite) from 4.2.1 to 4.2.2. 
- [Release notes](https://github.com/tailwindlabs/tailwindcss/releases) - [Changelog](https://github.com/tailwindlabs/tailwindcss/blob/main/CHANGELOG.md) - [Commits](https://github.com/tailwindlabs/tailwindcss/commits/v4.2.2/packages/@tailwindcss-vite) --- updated-dependencies: - dependency-name: "@tailwindcss/vite" dependency-version: 4.2.2 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump yaml from 2.8.2 to 2.8.3 in /web (#67) Bumps [yaml](https://github.com/eemeli/yaml) from 2.8.2 to 2.8.3. - [Release notes](https://github.com/eemeli/yaml/releases) - [Commits](https://github.com/eemeli/yaml/compare/v2.8.2...v2.8.3) --- updated-dependencies: - dependency-name: yaml dependency-version: 2.8.3 dependency-type: indirect ... Signed-off-by: dependabot[bot] Co-authored-by: Sam Carson Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump vue-router from 5.0.3 to 5.0.4 in /web (#63) Bumps [vue-router](https://github.com/vuejs/router) from 5.0.3 to 5.0.4. - [Release notes](https://github.com/vuejs/router/releases) - [Commits](https://github.com/vuejs/router/compare/v5.0.3...v5.0.4) --- updated-dependencies: - dependency-name: vue-router dependency-version: 5.0.4 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump eslint from 10.0.3 to 10.1.0 in /web (#64) Bumps [eslint](https://github.com/eslint/eslint) from 10.0.3 to 10.1.0. 
- [Release notes](https://github.com/eslint/eslint/releases) - [Commits](https://github.com/eslint/eslint/compare/v10.0.3...v10.1.0) --- updated-dependencies: - dependency-name: eslint dependency-version: 10.1.0 dependency-type: direct:development update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump @vitest/eslint-plugin in /web (#62) Bumps [@vitest/eslint-plugin](https://github.com/vitest-dev/eslint-plugin-vitest) from 1.6.12 to 1.6.13. - [Release notes](https://github.com/vitest-dev/eslint-plugin-vitest/releases) - [Commits](https://github.com/vitest-dev/eslint-plugin-vitest/compare/v1.6.12...v1.6.13) --- updated-dependencies: - dependency-name: "@vitest/eslint-plugin" dependency-version: 1.6.13 dependency-type: direct:development update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump github.com/jackc/pgx/v5 from 5.8.0 to 5.9.1 (#59) Bumps [github.com/jackc/pgx/v5](https://github.com/jackc/pgx) from 5.8.0 to 5.9.1. - [Changelog](https://github.com/jackc/pgx/blob/master/CHANGELOG.md) - [Commits](https://github.com/jackc/pgx/compare/v5.8.0...v5.9.1) --- updated-dependencies: - dependency-name: github.com/jackc/pgx/v5 dependency-version: 5.9.1 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump vite from 7.3.1 to 8.0.1 in /web (#61) Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 7.3.1 to 8.0.1. 
- [Release notes](https://github.com/vitejs/vite/releases) - [Changelog](https://github.com/vitejs/vite/blob/main/packages/vite/CHANGELOG.md) - [Commits](https://github.com/vitejs/vite/commits/create-vite@8.0.1/packages/vite) --- updated-dependencies: - dependency-name: vite dependency-version: 8.0.1 dependency-type: direct:development update-type: version-update:semver-major ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): remove vite-plugin-vue-devtools (no Vite 8 support) The plugin's transitive dep vite-plugin-inspect doesn't support Vite 8 yet (vuejs/devtools#1071). It was installed but never registered in vite.config.ts. The Vue DevTools browser extension provides equivalent functionality. Re-add when upstream updates the peer dep range. Co-Authored-By: Claude Opus 4.6 (1M context) * feat(crypto): bind GCM ciphertext to entity context via AAD (#83) Add Additional Authenticated Data (AAD) to AES-256-GCM encrypt/decrypt, preventing ciphertext relocation between database rows. SSO client secrets are bound to org_id, MFA TOTP secrets to user_id, and the doctor encryption sentinel to a fixed label. Also adds dev/specs/sso-secret-storage.md documenting the full encryption architecture for external sharing. Co-authored-by: Claude Opus 4.6 (1M context) * chore(deps): bump reka-ui from 2.9.2 to 2.9.3 in /web (#80) Bumps [reka-ui](https://github.com/unovue/reka-ui) from 2.9.2 to 2.9.3. - [Release notes](https://github.com/unovue/reka-ui/releases) - [Commits](https://github.com/unovue/reka-ui/compare/v2.9.2...v2.9.3) --- updated-dependencies: - dependency-name: reka-ui dependency-version: 2.9.3 dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump vue from 3.5.30 to 3.5.32 in /web (#79) Bumps [vue](https://github.com/vuejs/core) from 3.5.30 to 3.5.32. - [Release notes](https://github.com/vuejs/core/releases) - [Changelog](https://github.com/vuejs/core/blob/main/CHANGELOG.md) - [Commits](https://github.com/vuejs/core/compare/v3.5.30...v3.5.32) --- updated-dependencies: - dependency-name: vue dependency-version: 3.5.32 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump github.com/lib/pq from 1.12.0 to 1.12.3 (#76) Bumps [github.com/lib/pq](https://github.com/lib/pq) from 1.12.0 to 1.12.3. - [Release notes](https://github.com/lib/pq/releases) - [Changelog](https://github.com/lib/pq/blob/master/CHANGELOG.md) - [Commits](https://github.com/lib/pq/compare/v1.12.0...v1.12.3) --- updated-dependencies: - dependency-name: github.com/lib/pq dependency-version: 1.12.3 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump google.golang.org/genai from 1.52.0 to 1.52.1 (#75) Bumps [google.golang.org/genai](https://github.com/googleapis/go-genai) from 1.52.0 to 1.52.1. - [Release notes](https://github.com/googleapis/go-genai/releases) - [Changelog](https://github.com/googleapis/go-genai/blob/main/CHANGELOG.md) - [Commits](https://github.com/googleapis/go-genai/compare/v1.52.0...v1.52.1) --- updated-dependencies: - dependency-name: google.golang.org/genai dependency-version: 1.52.1 dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps-dev): bump oxlint and eslint-plugin-oxlint to ~1.58.0 eslint-plugin-oxlint 1.58.0 adds a peerDependency on oxlint ~1.58.0, so both must be bumped together. Lint and tests verified. Closes #78. Co-Authored-By: Claude Opus 4.6 (1M context) * chore(deps-dev): bump @types/node from 24.12.0 to 25.5.2 in /web (#77) Bumps [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) from 24.12.0 to 25.5.2. - [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases) - [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node) --- updated-dependencies: - dependency-name: "@types/node" dependency-version: 25.5.2 dependency-type: direct:development update-type: version-update:semver-major ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): upgrade TypeScript 5.9 to 6.0 - Bump typescript from ~5.9.3 to ~6.0.2 - Bump @vue/tsconfig from ^0.9.0 to ^0.9.1 (adds TS6 peer support) - Remove deprecated baseUrl from tsconfig.json and tsconfig.app.json (TS6 resolves paths relative to the tsconfig file by default) Type-check, lint, and all 419 unit tests pass. Closes #81. Co-Authored-By: Claude Opus 4.6 (1M context) * revert(deps): revert TypeScript 6 upgrade, keep baseUrl removal openapi-typescript@7.13.0 requires peer typescript ^5.x with no TS6 support yet. Revert typescript and @vue/tsconfig version bumps. Keep the baseUrl removal from tsconfig.json and tsconfig.app.json — paths resolve relative to the tsconfig file without it on TS 5.9 too, and this prepares for TS6 when the ecosystem catches up. Co-Authored-By: Claude Opus 4.6 (1M context) * chore(lint): disable require-mock-type-parameters rule New in oxlint 1.58.0 under the correctness category. 
Requires type parameters on all vi.fn() calls — a style preference, not a correctness issue. Disable rather than modifying 147 test call sites. Co-Authored-By: Claude Opus 4.6 (1M context) * Revert "chore(lint): disable require-mock-type-parameters rule" This reverts commit 1763dbb38774fb662b2b4bad5946c43db00b28c7. * fix(lint): add type parameters to all vi.fn() mock calls oxlint 1.58.0 enables require-mock-type-parameters under correctness. Untyped vi.fn() returns Mock<(...args: any[]) => any>, silently discarding type safety on mock arguments and return values. Add explicit type parameters to all 147 vi.fn() call sites across 28 test files. All tests pass (419/419). Co-Authored-By: Claude Opus 4.6 (1M context) * fix(lint): use precise mock types where generic unknown breaks type-check Three files needed more specific type parameters than the generic (...args: unknown[]) => unknown pattern: - CreateWatchlistDialog: cast mock.calls access for body property access - client.test.ts: type fetchMock as typeof fetch (assigned to globalThis.fetch) - CveDetailView: type mockGET first arg as string (used in mockImplementation) Co-Authored-By: Claude Opus 4.6 (1M context) * fix(lint): match proxy signature to typed mockGET in CveDetailView test The spread proxy (...args: unknown[]) can't spread into a (string, ...unknown[]) parameter. Match the proxy's signature. 
Co-Authored-By: Claude Opus 4.6 (1M context) --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Claude Opus 4.6 (1M context) --- cmd/cvert-ops/rotate.go | 13 +- cmd/cvert-ops/rotate_test.go | 8 +- dev/specs/sso-secret-storage.md | 93 ++++ go.mod | 4 +- go.sum | 8 +- internal/api/auth_mfa.go | 8 +- internal/api/auth_mfa_test.go | 2 +- internal/api/oauth_oidc.go | 2 +- internal/api/sso.go | 4 +- internal/crypto/aes.go | 24 +- internal/crypto/aes_test.go | 140 +++++- internal/doctor/checks.go | 8 +- web/package-lock.json | 469 +++++++++--------- web/package.json | 10 +- .../components/__tests__/AppSidebar.test.ts | 10 +- .../cve/__tests__/CveResultsTable.test.ts | 9 +- .../cve/__tests__/CveSearchFilters.test.ts | 4 +- .../cve/__tests__/CveSourceComparison.test.ts | 8 +- .../settings/__tests__/GroupDialog.test.ts | 12 +- .../__tests__/GroupMembersDialog.test.ts | 25 +- .../__tests__/InviteMemberDialog.test.ts | 17 +- .../watchlist/__tests__/AddItemDialog.test.ts | 16 +- .../__tests__/CreateWatchlistDialog.test.ts | 21 +- web/src/lib/api/__tests__/client.test.ts | 10 +- web/src/router/__tests__/guards.test.ts | 4 +- web/src/stores/__tests__/auth.test.ts | 82 ++- web/src/views/__tests__/CreateOrgView.test.ts | 15 +- web/src/views/__tests__/CveDetailView.test.ts | 26 +- web/src/views/__tests__/CveSearchView.test.ts | 89 ++-- .../views/__tests__/FeedStatusView.test.ts | 10 +- .../__tests__/ForgotPasswordView.test.ts | 22 +- web/src/views/__tests__/GroupsView.test.ts | 18 +- .../views/__tests__/InvitationView.test.ts | 12 +- web/src/views/__tests__/LoginView.test.ts | 14 +- web/src/views/__tests__/MembersView.test.ts | 78 +-- web/src/views/__tests__/NotFoundView.test.ts | 4 +- web/src/views/__tests__/RegisterView.test.ts | 10 +- .../views/__tests__/ResetPasswordView.test.ts | 22 +- .../views/__tests__/VerifyEmailView.test.ts | 17 +- .../__tests__/WatchlistDetailView.test.ts | 18 +- 
.../views/__tests__/WatchlistListView.test.ts | 14 +- .../admin/__tests__/AdminSystemView.test.ts | 4 +- web/tsconfig.app.json | 1 - web/tsconfig.json | 1 - 44 files changed, 843 insertions(+), 543 deletions(-) create mode 100644 dev/specs/sso-secret-storage.md diff --git a/cmd/cvert-ops/rotate.go b/cmd/cvert-ops/rotate.go index 6eefecbf..e1153858 100644 --- a/cmd/cvert-ops/rotate.go +++ b/cmd/cvert-ops/rotate.go @@ -100,21 +100,22 @@ func rotateEncryptionKeys(ctx context.Context, pool *pgxpool.Pool, currentKey, p return 0, fmt.Errorf("set bypass_rls: %w", err) } - rows, err := tx.Query(ctx, "SELECT id, client_secret_enc FROM sso_connections") + rows, err := tx.Query(ctx, "SELECT id, org_id, client_secret_enc FROM sso_connections") if err != nil { return 0, fmt.Errorf("query sso_connections: %w", err) } defer rows.Close() type pending struct { - id string - enc []byte + id string + orgID [16]byte + enc []byte } var updates []pending for rows.Next() { var p pending - if err := rows.Scan(&p.id, &p.enc); err != nil { + if err := rows.Scan(&p.id, &p.orgID, &p.enc); err != nil { return 0, fmt.Errorf("scan row: %w", err) } updates = append(updates, p) @@ -125,12 +126,12 @@ func rotateEncryptionKeys(ctx context.Context, pool *pgxpool.Pool, currentKey, p count := 0 for _, u := range updates { - plaintext, err := crypto.DecryptWithFallback(currentKey, previousKey, u.enc) + plaintext, err := crypto.DecryptWithFallback(currentKey, previousKey, u.enc, u.orgID[:]) if err != nil { return 0, fmt.Errorf("decrypt row %s: %w", u.id, err) } - newEnc, err := crypto.Encrypt(currentKey, plaintext) + newEnc, err := crypto.Encrypt(currentKey, plaintext, u.orgID[:]) if err != nil { return 0, fmt.Errorf("re-encrypt row %s: %w", u.id, err) } diff --git a/cmd/cvert-ops/rotate_test.go b/cmd/cvert-ops/rotate_test.go index fc9acd0d..b851a37c 100644 --- a/cmd/cvert-ops/rotate_test.go +++ b/cmd/cvert-ops/rotate_test.go @@ -29,9 +29,9 @@ func TestRotateEncryptionKey_ReEncryptsAllValues(t 
*testing.T) { } orgID := org.ID - // Encrypt a secret with the old key. + // Encrypt a secret with the old key, bound to the org. secret := []byte("my-client-secret") - enc, err := crypto.Encrypt(oldKey, secret) + enc, err := crypto.Encrypt(oldKey, secret, orgID[:]) if err != nil { t.Fatalf("encrypt with old key: %v", err) } @@ -64,7 +64,7 @@ func TestRotateEncryptionKey_ReEncryptsAllValues(t *testing.T) { t.Fatalf("read re-encrypted value: %v", err) } - plaintext, err := crypto.Decrypt(newKey, reEncrypted) + plaintext, err := crypto.Decrypt(newKey, reEncrypted, orgID[:]) if err != nil { t.Fatalf("decrypt with new key failed: %v", err) } @@ -73,7 +73,7 @@ func TestRotateEncryptionKey_ReEncryptsAllValues(t *testing.T) { } // Verify old key alone no longer works. - _, err = crypto.Decrypt(oldKey, reEncrypted) + _, err = crypto.Decrypt(oldKey, reEncrypted, orgID[:]) if err == nil { t.Error("decrypt with old key should fail on re-encrypted data, but succeeded") } diff --git a/dev/specs/sso-secret-storage.md b/dev/specs/sso-secret-storage.md new file mode 100644 index 00000000..06692346 --- /dev/null +++ b/dev/specs/sso-secret-storage.md @@ -0,0 +1,93 @@ +# SSO Secret Storage Architecture + +This document describes how CVErt Ops stores and manages user-provided secrets (specifically, OAuth/OIDC client secrets for enterprise SSO connections) in the production/SaaS configuration. + +## What's Encrypted + +The **only** user-input secret encrypted at rest in the database is `sso_connections.client_secret_enc` — the OIDC client secret that tenants provide when configuring enterprise SSO. It is stored as `BYTEA` in Postgres (migration `000028_sso_connections.up.sql`). + +## Encryption Scheme + +**AES-256-GCM** with random 12-byte nonces, implemented in `internal/crypto/aes.go`. 
+ +- **Ciphertext format:** `nonce (12 bytes) || ciphertext + GCM authentication tag` +- **Nonce source:** `crypto/rand.Reader` (OS CSPRNG) +- **Library:** Go stdlib `crypto/aes` and `crypto/cipher` — no external crypto dependencies + +AES-256-GCM provides both confidentiality and integrity (authenticated encryption). An attacker who obtains a database dump cannot read or tamper with the client secrets without also possessing the encryption key. + +## Key Sourcing + +The encryption key is a raw 32-byte value provided as 64 hex characters via: + +1. **Startup:** The `SSO_ENCRYPTION_KEY` environment variable, parsed by `internal/config/reloadable.go` +2. **Hot-reload:** A secrets file (one `KEY=VALUE` per line) can be reloaded at runtime via `SIGHUP` signal or the admin API reload endpoint. The key is swapped atomically using `atomic.Pointer` in `config.Holder`, so in-flight requests are never disrupted + +The API handler reads the active key via `srv.ssoEncryptionKey()` in `internal/api/sso.go`, which prefers the hot-reloadable config, falling back to the startup config value. + +## Key Rotation + +Key rotation uses a **dual-key** strategy with zero downtime: + +1. **Operator** generates a new 32-byte key (`openssl rand -hex 32`) +2. **Operator** moves the current `SSO_ENCRYPTION_KEY` value to `SSO_ENCRYPTION_KEY_PREVIOUS` and sets the new key as `SSO_ENCRYPTION_KEY` in the secrets file +3. **Operator** reloads config (SIGHUP or admin API) +4. **During the transition window**, all decryption uses `crypto.DecryptWithFallback()` — tries the current key first, then falls back to the previous key on GCM authentication failure. Structural errors (truncated ciphertext, invalid key length) fail fast without attempting fallback +5. **Operator** runs `cvert-ops rotate-encryption-key`, which re-encrypts every `sso_connections.client_secret_enc` row in a single Postgres transaction: decrypt with fallback, re-encrypt with current key +6. 
**After re-encryption succeeds**, the operator removes `SSO_ENCRYPTION_KEY_PREVIOUS` and reloads config + +The re-encryption command is transactional — if it fails partway through, the transaction rolls back and all rows remain encrypted with the original key. Safe to retry. + +The full step-by-step procedure is documented in `docs/deployment/runbooks/secret-rotation.md`. + +## Security Boundaries and Assumptions + +| Boundary | Status | +|----------|--------| +| **Encryption at rest** | AES-256-GCM. Protects against database dump or backup theft | +| **Tenant isolation** | Row-Level Security (RLS) on `sso_connections` + `org_id` scoping. One tenant cannot read another's encrypted secret | +| **Key storage** | The encryption key lives in an environment variable or secrets file on the host. There is no KMS or HSM wrapping — compromise of the application server's environment means compromise of the key | +| **Memory exposure** | The key is held in process memory as a `[32]byte`. Standard Go runtime — no `mlock` or secure memory wipe. Acceptable for non-HSM deployments | +| **Rotation atomicity** | The `rotate-encryption-key` command runs in a single DB transaction. Failure leaves all rows encrypted with the old key (safe to retry) | +| **No envelope encryption** | There is no KMS-wrapped DEK/KEK split. `SSO_ENCRYPTION_KEY` is the data encryption key directly. Key rotation therefore requires re-encrypting every row (currently only `sso_connections`, so the blast radius is small) | + +### Deployment expectation + +The security model assumes that the deployment environment adequately protects the `SSO_ENCRYPTION_KEY` value. In practice this means: + +- **Container deployments:** Use the platform's native secret injection (Kubernetes Secrets, Docker Swarm secrets, ECS task definition secrets, etc.) 
+- **Cloud VMs:** Use a cloud secret manager (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault) to inject the value into the environment at startup +- **Self-hosted:** Ensure the secrets file has restrictive file permissions and is excluded from backups and version control + +If CVErt Ops later needs to support a managed SaaS model where the operator controls infrastructure, the natural upgrade path would be envelope encryption with a cloud KMS wrapping the SSO encryption key. + +## What's NOT Encrypted at Rest + +These values are **not** stored in the database — they live only in environment variables or the secrets file: + +- OAuth provider secrets (`GITHUB_CLIENT_SECRET`, `GOOGLE_CLIENT_SECRET`) — app-level config, not tenant-provided +- JWT signing secrets (`JWT_SECRET`, `JWT_SECRET_PREVIOUS`) +- SMTP credentials (`SMTP_PASSWORD`) + +These values are stored in the database but use **hashing, not encryption** (correct approach — they never need to be recovered in plaintext): + +- User passwords — argon2id +- API key hashes + +## Key Files + +| File | Role | +|------|------| +| `internal/crypto/aes.go` | AES-256-GCM Encrypt / Decrypt / DecryptWithFallback | +| `internal/config/reloadable.go` | Hot-reloadable config with atomic key swap | +| `internal/api/sso.go` | SSO handler — encrypts on write, decrypts on read | +| `cmd/cvert-ops/rotate.go` | CLI re-encryption command | +| `migrations/000028_sso_connections.up.sql` | Schema with `client_secret_enc BYTEA` column + RLS | +| `internal/store/queries/sso.sql` | sqlc queries (encrypted column passed as opaque bytes) | +| `docs/deployment/runbooks/secret-rotation.md` | Operator-facing rotation procedures | + +## Dependencies + +- **Go stdlib crypto** (`crypto/aes`, `crypto/cipher`, `crypto/rand`) — no third-party crypto libraries +- **pgx** for the rotation transaction +- **Operator-managed key** — no external secrets manager SDK dependency diff --git a/go.mod b/go.mod index d1af5cbc..003b55fa 100644 --- 
a/go.mod +++ b/go.mod @@ -17,7 +17,7 @@ require ( github.com/golang-migrate/migrate/v4 v4.19.1 github.com/google/uuid v1.6.0 github.com/jackc/pgx/v5 v5.9.1 - github.com/lib/pq v1.12.0 + github.com/lib/pq v1.12.3 github.com/pquerna/otp v1.5.0 github.com/prometheus/client_golang v1.23.2 github.com/sony/gobreaker/v2 v2.4.0 @@ -32,7 +32,7 @@ require ( golang.org/x/crypto v0.49.0 golang.org/x/oauth2 v0.36.0 golang.org/x/time v0.15.0 - google.golang.org/genai v1.52.0 + google.golang.org/genai v1.52.1 ) require ( diff --git a/go.sum b/go.sum index 6da185c1..16586b1b 100644 --- a/go.sum +++ b/go.sum @@ -129,8 +129,8 @@ github.com/lann/builder v0.0.0-20180802200727-47ae307949d0 h1:SOEGU9fKiNWd/HOJuq github.com/lann/builder v0.0.0-20180802200727-47ae307949d0/go.mod h1:dXGbAdH5GtBTC4WfIxhKZfyBF/HBFgRZSWwZ9g/He9o= github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 h1:P6pPBnrTSX3DEVR4fDembhRWSsG5rVo6hYhAB/ADZrk= github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0/go.mod h1:vmVJ0l/dxyfGW6FmdpVm2joNMFikkuWg0EoCKLGUMNw= -github.com/lib/pq v1.12.0 h1:mC1zeiNamwKBecjHarAr26c/+d8V5w/u4J0I/yASbJo= -github.com/lib/pq v1.12.0/go.mod h1:/p+8NSbOcwzAEI7wiMXFlgydTwcgTr3OSKMsD2BitpA= +github.com/lib/pq v1.12.3 h1:tTWxr2YLKwIvK90ZXEw8GP7UFHtcbTtty8zsI+YjrfQ= +github.com/lib/pq v1.12.3/go.mod h1:/p+8NSbOcwzAEI7wiMXFlgydTwcgTr3OSKMsD2BitpA= github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4= github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I= github.com/magiconair/properties v1.8.10 h1:s31yESBquKXCV9a/ScB3ESkOjUYYv+X0rg8SYxI99mE= @@ -276,8 +276,8 @@ golang.org/x/time v0.15.0/go.mod h1:Y4YMaQmXwGQZoFaVFk4YpCt4FLQMYKZe9oeV/f4MSno= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk= gonum.org/v1/gonum v0.16.0/go.mod 
h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E= -google.golang.org/genai v1.52.0 h1:ekVIxWHtLUNbt+v0WWi4j3JT4yrHDEbysMcHQcaCQoI= -google.golang.org/genai v1.52.0/go.mod h1:A3kkl0nyBjyFlNjgxIwKq70julKbIxpSxqKO5gw/gmk= +google.golang.org/genai v1.52.1 h1:dYoljKtLDXMiBdVaClSJ/ZPwZ7j1N0lGjMhwOKOQUlk= +google.golang.org/genai v1.52.1/go.mod h1:A3kkl0nyBjyFlNjgxIwKq70julKbIxpSxqKO5gw/gmk= google.golang.org/genproto v0.0.0-20250603155806-513f23925822 h1:rHWScKit0gvAPuOnu87KpaYtjK5zBMLcULh7gxkCXu4= google.golang.org/genproto/googleapis/api v0.0.0-20260209200024-4cfbd4190f57 h1:JLQynH/LBHfCTSbDWl+py8C+Rg/k1OVH3xfcaiANuF0= google.golang.org/genproto/googleapis/api v0.0.0-20260209200024-4cfbd4190f57/go.mod h1:kSJwQxqmFXeo79zOmbrALdflXQeAYcUbgS7PbpMknCY= diff --git a/internal/api/auth_mfa.go b/internal/api/auth_mfa.go index 401590b1..0ff86a11 100644 --- a/internal/api/auth_mfa.go +++ b/internal/api/auth_mfa.go @@ -374,7 +374,7 @@ func (srv *Server) verifyTOTP(ctx context.Context, userID uuid.UUID, code string return false, fmt.Errorf("encryption key: %w", err) } prevKey := srv.ssoEncryptionKeyPrevious() - secretBytes, err := crypto.DecryptWithFallback(encKey, prevKey, cred.SecretEnc) + secretBytes, err := crypto.DecryptWithFallback(encKey, prevKey, cred.SecretEnc, userID[:]) if err != nil { return false, fmt.Errorf("decrypt TOTP secret: %w", err) } @@ -580,7 +580,7 @@ func (srv *Server) mfaTOTPSetupHandler(ctx context.Context, input *mfaTOTPSetupI slog.ErrorContext(ctx, "totp-setup: encryption key", "error", err) return nil, huma.Error500InternalServerError("encryption key not configured") } - secretEnc, err := crypto.Encrypt(encKey, []byte(key.Secret())) + secretEnc, err := crypto.Encrypt(encKey, []byte(key.Secret()), userID[:]) if err != nil { slog.ErrorContext(ctx, "totp-setup: encrypt secret", "error", err) return nil, huma.Error500InternalServerError("internal error") @@ -645,7 +645,7 @@ func (srv *Server) mfaTOTPConfirmHandler(ctx context.Context, input *mfaTOTPConf 
return nil, huma.Error500InternalServerError("internal error") } prevKey := srv.ssoEncryptionKeyPrevious() - secretBytes, err := crypto.DecryptWithFallback(encKey, prevKey, enrollClaims.SecretEnc) + secretBytes, err := crypto.DecryptWithFallback(encKey, prevKey, enrollClaims.SecretEnc, userID[:]) if err != nil { slog.ErrorContext(ctx, "totp-confirm: decrypt secret", "error", err) return nil, huma.Error500InternalServerError("internal error") @@ -677,7 +677,7 @@ func (srv *Server) mfaTOTPConfirmHandler(ctx context.Context, input *mfaTOTPConf // Re-encrypt secret for DB storage (enrollment cookie used same key, but // re-encrypt to get a fresh nonce for defense in depth). - secretEncDB, err := crypto.Encrypt(encKey, secretBytes) + secretEncDB, err := crypto.Encrypt(encKey, secretBytes, userID[:]) if err != nil { slog.ErrorContext(ctx, "totp-confirm: re-encrypt secret", "error", err) return nil, huma.Error500InternalServerError("internal error") diff --git a/internal/api/auth_mfa_test.go b/internal/api/auth_mfa_test.go index 0fa9b817..f4531871 100644 --- a/internal/api/auth_mfa_test.go +++ b/internal/api/auth_mfa_test.go @@ -55,7 +55,7 @@ func enrollTOTP(t *testing.T, ctx context.Context, srv *Server, userID uuid.UUID if err != nil { t.Fatalf("enrollTOTP: encryption key: %v", err) } - secretEnc, err := crypto.Encrypt(encKey, []byte(secret)) + secretEnc, err := crypto.Encrypt(encKey, []byte(secret), userID[:]) if err != nil { t.Fatalf("enrollTOTP: encrypt: %v", err) } diff --git a/internal/api/oauth_oidc.go b/internal/api/oauth_oidc.go index 1b32e807..245764a0 100644 --- a/internal/api/oauth_oidc.go +++ b/internal/api/oauth_oidc.go @@ -53,7 +53,7 @@ func (srv *Server) oidcBuildOAuthConfig(ctx context.Context, conn *store.SSOConn if err != nil { return nil, nil, fmt.Errorf("encryption key: %w", err) } - secret, err := crypto.DecryptWithFallback(key, srv.ssoEncryptionKeyPrevious(), conn.ClientSecretEnc) + secret, err := crypto.DecryptWithFallback(key, 
srv.ssoEncryptionKeyPrevious(), conn.ClientSecretEnc, conn.OrgID[:]) if err != nil { return nil, nil, fmt.Errorf("decrypt secret: %w", err) } diff --git a/internal/api/sso.go b/internal/api/sso.go index 7a5a657a..bc9af9ef 100644 --- a/internal/api/sso.go +++ b/internal/api/sso.go @@ -171,7 +171,7 @@ func (srv *Server) createSSOHandler(w http.ResponseWriter, r *http.Request) { writeProblem(w, http.StatusInternalServerError, "server configuration error") return } - encSecret, err := crypto.Encrypt(key, []byte(req.ClientSecret)) + encSecret, err := crypto.Encrypt(key, []byte(req.ClientSecret), orgID[:]) if err != nil { slog.ErrorContext(r.Context(), "sso create: encrypt secret", "error", err) writeProblem(w, http.StatusInternalServerError, "encryption error") @@ -347,7 +347,7 @@ func (srv *Server) patchSSOHandler(w http.ResponseWriter, r *http.Request) { writeProblem(w, http.StatusInternalServerError, "server configuration error") return } - secretEnc, err = crypto.Encrypt(key, []byte(*req.ClientSecret)) + secretEnc, err = crypto.Encrypt(key, []byte(*req.ClientSecret), orgID[:]) if err != nil { slog.ErrorContext(r.Context(), "sso patch: encrypt secret", "error", err) writeProblem(w, http.StatusInternalServerError, "encryption error") diff --git a/internal/crypto/aes.go b/internal/crypto/aes.go index 2d88fe65..97d801a8 100644 --- a/internal/crypto/aes.go +++ b/internal/crypto/aes.go @@ -15,9 +15,10 @@ import ( // authentication fails and previousKey is non-zero, it retries with // previousKey. This supports seamless encryption key rotation. // Structural errors (truncated ciphertext, invalid key) fail immediately -// without attempting fallback. -func DecryptWithFallback(currentKey, previousKey [32]byte, data []byte) ([]byte, error) { - plaintext, err := Decrypt(currentKey, data) +// without attempting fallback. The aad (additional authenticated data) is +// passed through to GCM and must match the value used during encryption. 
+func DecryptWithFallback(currentKey, previousKey [32]byte, data []byte, aad []byte) ([]byte, error) { + plaintext, err := Decrypt(currentKey, data, aad) if err == nil { return plaintext, nil } @@ -25,7 +26,7 @@ func DecryptWithFallback(currentKey, previousKey [32]byte, data []byte) ([]byte, // Only fall back on GCM authentication failure (wrong key). // Structural errors (truncated ciphertext, invalid key) fail fast. if previousKey != [32]byte{} && isGCMAuthError(err) { - plaintext, err2 := Decrypt(previousKey, data) + plaintext, err2 := Decrypt(previousKey, data, aad) if err2 == nil { return plaintext, nil } @@ -42,8 +43,10 @@ func isGCMAuthError(err error) bool { } // Encrypt encrypts plaintext using AES-256-GCM with a random nonce. -// Returns nonce || ciphertext. -func Encrypt(key [32]byte, plaintext []byte) ([]byte, error) { +// Returns nonce || ciphertext. The aad (additional authenticated data) is +// mixed into the GCM authentication tag, binding the ciphertext to a context +// (e.g., an org_id or user_id). Pass nil for context-free encryption. +func Encrypt(key [32]byte, plaintext []byte, aad []byte) ([]byte, error) { block, err := aes.NewCipher(key[:]) if err != nil { return nil, fmt.Errorf("aes new cipher: %w", err) @@ -60,12 +63,13 @@ func Encrypt(key [32]byte, plaintext []byte) ([]byte, error) { } // Seal appends ciphertext to nonce, so result is nonce || ciphertext. - return gcm.Seal(nonce, nonce, plaintext, nil), nil + return gcm.Seal(nonce, nonce, plaintext, aad), nil } // Decrypt decrypts AES-256-GCM ciphertext produced by Encrypt. -// Expects nonce (12 bytes) || ciphertext. -func Decrypt(key [32]byte, data []byte) ([]byte, error) { +// Expects nonce (12 bytes) || ciphertext. The aad must match the value +// used during encryption; a mismatch causes an authentication failure. 
+func Decrypt(key [32]byte, data []byte, aad []byte) ([]byte, error) { block, err := aes.NewCipher(key[:]) if err != nil { return nil, fmt.Errorf("aes new cipher: %w", err) @@ -82,7 +86,7 @@ func Decrypt(key [32]byte, data []byte) ([]byte, error) { } nonce, ciphertext := data[:nonceSize], data[nonceSize:] - plaintext, err := gcm.Open(nil, nonce, ciphertext, nil) + plaintext, err := gcm.Open(nil, nonce, ciphertext, aad) if err != nil { return nil, fmt.Errorf("gcm decrypt: %w", err) } diff --git a/internal/crypto/aes_test.go b/internal/crypto/aes_test.go index 1c5acf1a..2537ce87 100644 --- a/internal/crypto/aes_test.go +++ b/internal/crypto/aes_test.go @@ -23,12 +23,12 @@ func TestAESGCM_RoundTrip(t *testing.T) { key := testKey(t) plaintext := []byte("secret webhook signing key 🔑") - ciphertext, err := Encrypt(key, plaintext) + ciphertext, err := Encrypt(key, plaintext, nil) if err != nil { t.Fatalf("Encrypt: %v", err) } - got, err := Decrypt(key, ciphertext) + got, err := Decrypt(key, ciphertext, nil) if err != nil { t.Fatalf("Decrypt: %v", err) } @@ -37,16 +37,73 @@ func TestAESGCM_RoundTrip(t *testing.T) { } } +func TestAESGCM_RoundTrip_WithAAD(t *testing.T) { + t.Parallel() + key := testKey(t) + plaintext := []byte("org-scoped secret") + aad := []byte("org-id-abc-123") + + ciphertext, err := Encrypt(key, plaintext, aad) + if err != nil { + t.Fatalf("Encrypt: %v", err) + } + + got, err := Decrypt(key, ciphertext, aad) + if err != nil { + t.Fatalf("Decrypt: %v", err) + } + if !bytes.Equal(got, plaintext) { + t.Errorf("round-trip mismatch: got %q, want %q", got, plaintext) + } +} + +func TestAESGCM_AADMismatch_Rejected(t *testing.T) { + t.Parallel() + key := testKey(t) + plaintext := []byte("bound to org A") + aadA := []byte("org-A") + aadB := []byte("org-B") + + ciphertext, err := Encrypt(key, plaintext, aadA) + if err != nil { + t.Fatalf("Encrypt: %v", err) + } + + // Decrypting with different AAD must fail (ciphertext relocation attack). 
+ _, err = Decrypt(key, ciphertext, aadB) + if err == nil { + t.Error("Decrypt succeeded with wrong AAD, want authentication failure") + } +} + +func TestAESGCM_AADVsNilAAD_Rejected(t *testing.T) { + t.Parallel() + key := testKey(t) + plaintext := []byte("has AAD binding") + aad := []byte("some-context") + + ciphertext, err := Encrypt(key, plaintext, aad) + if err != nil { + t.Fatalf("Encrypt: %v", err) + } + + // Encrypted with AAD, decrypted without — must fail. + _, err = Decrypt(key, ciphertext, nil) + if err == nil { + t.Error("Decrypt with nil AAD succeeded on AAD-encrypted data, want failure") + } +} + func TestAESGCM_UniqueNonce(t *testing.T) { t.Parallel() key := testKey(t) plaintext := []byte("same input") - ct1, err := Encrypt(key, plaintext) + ct1, err := Encrypt(key, plaintext, nil) if err != nil { t.Fatalf("Encrypt 1: %v", err) } - ct2, err := Encrypt(key, plaintext) + ct2, err := Encrypt(key, plaintext, nil) if err != nil { t.Fatalf("Encrypt 2: %v", err) } @@ -60,7 +117,7 @@ func TestAESGCM_TamperedCiphertext(t *testing.T) { t.Parallel() key := testKey(t) - ciphertext, err := Encrypt(key, []byte("tamper me")) + ciphertext, err := Encrypt(key, []byte("tamper me"), nil) if err != nil { t.Fatalf("Encrypt: %v", err) } @@ -70,7 +127,7 @@ func TestAESGCM_TamperedCiphertext(t *testing.T) { copy(tampered, ciphertext) tampered[len(tampered)-1] ^= 0xff - _, err = Decrypt(key, tampered) + _, err = Decrypt(key, tampered, nil) if err == nil { t.Error("Decrypt succeeded on tampered ciphertext, want error") } @@ -81,12 +138,12 @@ func TestAESGCM_WrongKey(t *testing.T) { key1 := testKey(t) key2 := testKey(t) - ciphertext, err := Encrypt(key1, []byte("wrong key test")) + ciphertext, err := Encrypt(key1, []byte("wrong key test"), nil) if err != nil { t.Fatalf("Encrypt: %v", err) } - _, err = Decrypt(key2, ciphertext) + _, err = Decrypt(key2, ciphertext, nil) if err == nil { t.Error("Decrypt succeeded with wrong key, want error") } @@ -96,12 +153,12 @@ func 
TestAESGCM_EmptyPlaintext(t *testing.T) { t.Parallel() key := testKey(t) - ciphertext, err := Encrypt(key, []byte{}) + ciphertext, err := Encrypt(key, []byte{}, nil) if err != nil { t.Fatalf("Encrypt empty: %v", err) } - got, err := Decrypt(key, ciphertext) + got, err := Decrypt(key, ciphertext, nil) if err != nil { t.Fatalf("Decrypt empty: %v", err) } @@ -116,7 +173,7 @@ func TestAESGCM_ShortCiphertext(t *testing.T) { // ciphertext too short to contain a nonce is rejected at runtime. key := testKey(t) - _, err := Decrypt(key, []byte("short")) + _, err := Decrypt(key, []byte("short"), nil) if err == nil { t.Error("Decrypt succeeded on too-short ciphertext, want error") } @@ -130,12 +187,12 @@ func TestDecryptWithFallback_CurrentKeyWorks(t *testing.T) { previousKey := testKey(t) plaintext := []byte("current key decryption") - ciphertext, err := Encrypt(currentKey, plaintext) + ciphertext, err := Encrypt(currentKey, plaintext, nil) if err != nil { t.Fatalf("Encrypt: %v", err) } - got, err := DecryptWithFallback(currentKey, previousKey, ciphertext) + got, err := DecryptWithFallback(currentKey, previousKey, ciphertext, nil) if err != nil { t.Fatalf("DecryptWithFallback: %v", err) } @@ -150,13 +207,13 @@ func TestDecryptWithFallback_PreviousKeyWorks(t *testing.T) { newKey := testKey(t) plaintext := []byte("encrypted with old key") - ciphertext, err := Encrypt(oldKey, plaintext) + ciphertext, err := Encrypt(oldKey, plaintext, nil) if err != nil { t.Fatalf("Encrypt: %v", err) } // newKey as current fails GCM auth; oldKey as previous succeeds. 
- got, err := DecryptWithFallback(newKey, oldKey, ciphertext) + got, err := DecryptWithFallback(newKey, oldKey, ciphertext, nil) if err != nil { t.Fatalf("DecryptWithFallback: %v", err) } @@ -172,12 +229,12 @@ func TestDecryptWithFallback_BothKeysWrong(t *testing.T) { keyC := testKey(t) plaintext := []byte("neither key works") - ciphertext, err := Encrypt(keyA, plaintext) + ciphertext, err := Encrypt(keyA, plaintext, nil) if err != nil { t.Fatalf("Encrypt: %v", err) } - _, err = DecryptWithFallback(keyB, keyC, ciphertext) + _, err = DecryptWithFallback(keyB, keyC, ciphertext, nil) if err == nil { t.Error("DecryptWithFallback succeeded with both wrong keys, want error") } @@ -189,13 +246,13 @@ func TestDecryptWithFallback_NoPreviousKey(t *testing.T) { var zeroKey [32]byte plaintext := []byte("no previous key") - ciphertext, err := Encrypt(currentKey, plaintext) + ciphertext, err := Encrypt(currentKey, plaintext, nil) if err != nil { t.Fatalf("Encrypt: %v", err) } // Zero previous key → only current key tried. - got, err := DecryptWithFallback(currentKey, zeroKey, ciphertext) + got, err := DecryptWithFallback(currentKey, zeroKey, ciphertext, nil) if err != nil { t.Fatalf("DecryptWithFallback: %v", err) } @@ -212,7 +269,7 @@ func TestDecryptWithFallback_TruncatedCiphertext_NoFallback(t *testing.T) { previousKey := [32]byte{2} shortData := []byte("short") - _, err := DecryptWithFallback(currentKey, previousKey, shortData) + _, err := DecryptWithFallback(currentKey, previousKey, shortData, nil) if err == nil { t.Fatal("DecryptWithFallback succeeded on truncated ciphertext, want error") } @@ -231,14 +288,53 @@ func TestDecryptWithFallback_NoPreviousKeyCurrentFails(t *testing.T) { keyB := testKey(t) var zeroKey [32]byte - ciphertext, err := Encrypt(keyA, []byte("no previous key fails")) + ciphertext, err := Encrypt(keyA, []byte("no previous key fails"), nil) if err != nil { t.Fatalf("Encrypt: %v", err) } // Wrong current key, zero previous → returns error without panic. 
- _, err = DecryptWithFallback(keyB, zeroKey, ciphertext) + _, err = DecryptWithFallback(keyB, zeroKey, ciphertext, nil) if err == nil { t.Error("DecryptWithFallback succeeded with wrong current and zero previous, want error") } } + +func TestDecryptWithFallback_WithAAD(t *testing.T) { + t.Parallel() + currentKey := testKey(t) + previousKey := testKey(t) + plaintext := []byte("aad-bound secret") + aad := []byte("org-id-bytes") + + ciphertext, err := Encrypt(currentKey, plaintext, aad) + if err != nil { + t.Fatalf("Encrypt: %v", err) + } + + got, err := DecryptWithFallback(currentKey, previousKey, ciphertext, aad) + if err != nil { + t.Fatalf("DecryptWithFallback: %v", err) + } + if !bytes.Equal(got, plaintext) { + t.Errorf("plaintext mismatch: got %q, want %q", got, plaintext) + } +} + +func TestDecryptWithFallback_AADMismatch_Rejected(t *testing.T) { + t.Parallel() + currentKey := testKey(t) + var zeroKey [32]byte + plaintext := []byte("bound to org A") + + ciphertext, err := Encrypt(currentKey, plaintext, []byte("org-A")) + if err != nil { + t.Fatalf("Encrypt: %v", err) + } + + // Correct key but wrong AAD must fail. 
+ _, err = DecryptWithFallback(currentKey, zeroKey, ciphertext, []byte("org-B")) + if err == nil { + t.Error("DecryptWithFallback succeeded with wrong AAD, want error") + } +} diff --git a/internal/doctor/checks.go b/internal/doctor/checks.go index 4f2bbf57..5593812e 100644 --- a/internal/doctor/checks.go +++ b/internal/doctor/checks.go @@ -182,7 +182,7 @@ func (c *EncryptionSentinelCheck) Run(ctx context.Context) (string, string, erro return StatusFail, fmt.Sprintf("query system_settings: %v", err), nil } - _, err = crypto.DecryptWithFallback(c.Key, c.PreviousKey, value) + _, err = crypto.DecryptWithFallback(c.Key, c.PreviousKey, value, []byte("encryption_sentinel")) if err != nil { return StatusFail, fmt.Sprintf("sentinel decryption failed: %v — encryption key may have changed", err), nil } @@ -344,8 +344,8 @@ func (c *SecurityHeadersCheck) Run(ctx context.Context) (string, string, error) required := map[string]string{ "X-Content-Type-Options": "nosniff", - "X-Frame-Options": "DENY", - "Referrer-Policy": "strict-origin-when-cross-origin", + "X-Frame-Options": "DENY", + "Referrer-Policy": "strict-origin-when-cross-origin", } var missing []string @@ -444,7 +444,7 @@ type StandardChecksConfig struct { SMTPHost string SMTPPort int SMTPUsername string - CORSAllowedOrigins string + CORSAllowedOrigins string CookieAuth bool ServerAddr string // empty in CLI mode, "http://localhost:{port}" in API mode } diff --git a/web/package-lock.json b/web/package-lock.json index 08e4a40d..370f0c6f 100644 --- a/web/package-lock.json +++ b/web/package-lock.json @@ -16,16 +16,16 @@ "lucide-vue-next": "^0.577.0", "openapi-fetch": "^0.17.0", "pinia": "^3.0.4", - "reka-ui": "^2.9.1", + "reka-ui": "^2.9.3", "tailwind-merge": "^3.5.0", "tailwindcss": "^4.2.1", - "vue": "^3.5.30", + "vue": "^3.5.32", "vue-router": "^5.0.4", "vue-sonner": "^2.0.9" }, "devDependencies": { "@tsconfig/node24": "^24.0.4", - "@types/node": "^24.11.0", + "@types/node": "^25.5.2", "@vitejs/plugin-vue": "^6.0.4", 
"@vitest/eslint-plugin": "^1.6.13", "@vue/eslint-config-typescript": "^14.7.0", @@ -33,13 +33,13 @@ "@vue/tsconfig": "^0.9.0", "eslint": "^10.1.0", "eslint-config-prettier": "^10.1.8", - "eslint-plugin-oxlint": "~1.56.0", + "eslint-plugin-oxlint": "~1.58.0", "eslint-plugin-vue": "~10.8.0", "jiti": "^2.6.1", "jsdom": "^29.0.0", "npm-run-all2": "^8.0.4", "openapi-typescript": "^7.13.0", - "oxlint": "~1.56.0", + "oxlint": "~1.58.0", "prettier": "3.8.1", "tw-animate-css": "^1.4.0", "typescript": "~5.9.3", @@ -756,9 +756,9 @@ } }, "node_modules/@oxlint/binding-android-arm-eabi": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/@oxlint/binding-android-arm-eabi/-/binding-android-arm-eabi-1.56.0.tgz", - "integrity": "sha512-IyfYPthZyiSKwAv/dLjeO18SaK8MxLI9Yss2JrRDyweQAkuL3LhEy7pwIwI7uA3KQc1Vdn20kdmj3q0oUIQL6A==", + "version": "1.58.0", + "resolved": "https://registry.npmjs.org/@oxlint/binding-android-arm-eabi/-/binding-android-arm-eabi-1.58.0.tgz", + "integrity": "sha512-1T7UN3SsWWxpWyWGn1cT3ASNJOo+pI3eUkmEl7HgtowapcV8kslYpFQcYn431VuxghXakPNlbjRwhqmR37PFOg==", "cpu": [ "arm" ], @@ -773,9 +773,9 @@ } }, "node_modules/@oxlint/binding-android-arm64": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/@oxlint/binding-android-arm64/-/binding-android-arm64-1.56.0.tgz", - "integrity": "sha512-Ga5zYrzH6vc/VFxhn6MmyUnYEfy9vRpwTIks99mY3j6Nz30yYpIkWryI0QKPCgvGUtDSXVLEaMum5nA+WrNOSg==", + "version": "1.58.0", + "resolved": "https://registry.npmjs.org/@oxlint/binding-android-arm64/-/binding-android-arm64-1.58.0.tgz", + "integrity": "sha512-GryzujxuiRv2YFF7bRy8mKcxlbuAN+euVUtGJt9KKbLT8JBUIosamVhcthLh+VEr6KE6cjeVMAQxKAzJcoN7dg==", "cpu": [ "arm64" ], @@ -790,9 +790,9 @@ } }, "node_modules/@oxlint/binding-darwin-arm64": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/@oxlint/binding-darwin-arm64/-/binding-darwin-arm64-1.56.0.tgz", - "integrity": 
"sha512-ogmbdJysnw/D4bDcpf1sPLpFThZ48lYp4aKYm10Z/6Nh1SON6NtnNhTNOlhEY296tDFItsZUz+2tgcSYqh8Eyw==", + "version": "1.58.0", + "resolved": "https://registry.npmjs.org/@oxlint/binding-darwin-arm64/-/binding-darwin-arm64-1.58.0.tgz", + "integrity": "sha512-7/bRSJIwl4GxeZL9rPZ11anNTyUO9epZrfEJH/ZMla3+/gbQ6xZixh9nOhsZ0QwsTW7/5J2A/fHbD1udC5DQQA==", "cpu": [ "arm64" ], @@ -807,9 +807,9 @@ } }, "node_modules/@oxlint/binding-darwin-x64": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/@oxlint/binding-darwin-x64/-/binding-darwin-x64-1.56.0.tgz", - "integrity": "sha512-x8QE1h+RAtQ2g+3KPsP6Fk/tdz6zJQUv5c7fTrJxXV3GHOo+Ry5p/PsogU4U+iUZg0rj6hS+E4xi+mnwwlDCWQ==", + "version": "1.58.0", + "resolved": "https://registry.npmjs.org/@oxlint/binding-darwin-x64/-/binding-darwin-x64-1.58.0.tgz", + "integrity": "sha512-EqdtJSiHweS2vfILNrpyJ6HUwpEq2g7+4Zx1FPi4hu3Hu7tC3znF6ufbXO8Ub2LD4mGgznjI7kSdku9NDD1Mkg==", "cpu": [ "x64" ], @@ -824,9 +824,9 @@ } }, "node_modules/@oxlint/binding-freebsd-x64": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/@oxlint/binding-freebsd-x64/-/binding-freebsd-x64-1.56.0.tgz", - "integrity": "sha512-6G+WMZvwJpMvY7my+/SHEjb7BTk/PFbePqLpmVmUJRIsJMy/UlyYqjpuh0RCgYYkPLcnXm1rUM04kbTk8yS1Yg==", + "version": "1.58.0", + "resolved": "https://registry.npmjs.org/@oxlint/binding-freebsd-x64/-/binding-freebsd-x64-1.58.0.tgz", + "integrity": "sha512-VQt5TH4M42mY20F545G637RKxV/yjwVtKk2vfXuazfReSIiuvWBnv+FVSvIV5fKVTJNjt3GSJibh6JecbhGdBw==", "cpu": [ "x64" ], @@ -841,9 +841,9 @@ } }, "node_modules/@oxlint/binding-linux-arm-gnueabihf": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-arm-gnueabihf/-/binding-linux-arm-gnueabihf-1.56.0.tgz", - "integrity": "sha512-YYHBsk/sl7fYwQOok+6W5lBPeUEvisznV/HZD2IfZmF3Bns6cPC3Z0vCtSEOaAWTjYWN3jVsdu55jMxKlsdlhg==", + "version": "1.58.0", + "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-arm-gnueabihf/-/binding-linux-arm-gnueabihf-1.58.0.tgz", + "integrity": 
"sha512-fBYcj4ucwpAtjJT3oeBdFBYKvNyjRSK+cyuvBOTQjh0jvKp4yeA4S/D0IsCHus/VPaNG5L48qQkh+Vjy3HL2/Q==", "cpu": [ "arm" ], @@ -858,9 +858,9 @@ } }, "node_modules/@oxlint/binding-linux-arm-musleabihf": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-arm-musleabihf/-/binding-linux-arm-musleabihf-1.56.0.tgz", - "integrity": "sha512-+AZK8rOUr78y8WT6XkDb04IbMRqauNV+vgT6f8ZLOH8wnpQ9i7Nol0XLxAu+Cq7Sb+J9wC0j6Km5hG8rj47/yQ==", + "version": "1.58.0", + "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-arm-musleabihf/-/binding-linux-arm-musleabihf-1.58.0.tgz", + "integrity": "sha512-0BeuFfwlUHlJ1xpEdSD1YO3vByEFGPg36uLjK1JgFaxFb4W6w17F8ET8sz5cheZ4+x5f2xzdnRrrWv83E3Yd8g==", "cpu": [ "arm" ], @@ -875,9 +875,9 @@ } }, "node_modules/@oxlint/binding-linux-arm64-gnu": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-arm64-gnu/-/binding-linux-arm64-gnu-1.56.0.tgz", - "integrity": "sha512-urse2SnugwJRojUkGSSeH2LPMaje5Q50yQtvtL9HFckiyeqXzoFwOAZqD5TR29R2lq7UHidfFDM9EGcchcbb8A==", + "version": "1.58.0", + "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-arm64-gnu/-/binding-linux-arm64-gnu-1.58.0.tgz", + "integrity": "sha512-TXlZgnPTlxrQzxG9ZXU7BNwx1Ilrr17P3GwZY0If2EzrinqRH3zXPc3HrRcBJgcsoZNMuNL5YivtkJYgp467UQ==", "cpu": [ "arm64" ], @@ -892,9 +892,9 @@ } }, "node_modules/@oxlint/binding-linux-arm64-musl": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-arm64-musl/-/binding-linux-arm64-musl-1.56.0.tgz", - "integrity": "sha512-rkTZkBfJ4TYLjansjSzL6mgZOdN5IvUnSq3oNJSLwBcNvy3dlgQtpHPrRxrCEbbcp7oQ6If0tkNaqfOsphYZ9g==", + "version": "1.58.0", + "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-arm64-musl/-/binding-linux-arm64-musl-1.58.0.tgz", + "integrity": "sha512-zSoYRo5dxHLcUx93Stl2hW3hSNjPt99O70eRVWt5A1zwJ+FPjeCCANCD2a9R4JbHsdcl11TIQOjyigcRVOH2mw==", "cpu": [ "arm64" ], @@ -909,9 +909,9 @@ } }, 
"node_modules/@oxlint/binding-linux-ppc64-gnu": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-ppc64-gnu/-/binding-linux-ppc64-gnu-1.56.0.tgz", - "integrity": "sha512-uqL1kMH3u69/e1CH2EJhP3CP28jw2ExLsku4o8RVAZ7fySo9zOyI2fy9pVlTAp4voBLVgzndXi3SgtdyCTa2aA==", + "version": "1.58.0", + "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-ppc64-gnu/-/binding-linux-ppc64-gnu-1.58.0.tgz", + "integrity": "sha512-NQ0U/lqxH2/VxBYeAIvMNUK1y0a1bJ3ZicqkF2c6wfakbEciP9jvIE4yNzCFpZaqeIeRYaV7AVGqEO1yrfVPjA==", "cpu": [ "ppc64" ], @@ -926,9 +926,9 @@ } }, "node_modules/@oxlint/binding-linux-riscv64-gnu": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-riscv64-gnu/-/binding-linux-riscv64-gnu-1.56.0.tgz", - "integrity": "sha512-j0CcMBOgV6KsRaBdsebIeiy7hCjEvq2KdEsiULf2LZqAq0v1M1lWjelhCV57LxsqaIGChXFuFJ0RiFrSRHPhSg==", + "version": "1.58.0", + "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-riscv64-gnu/-/binding-linux-riscv64-gnu-1.58.0.tgz", + "integrity": "sha512-X9J+kr3gIC9FT8GuZt0ekzpNUtkBVzMVU4KiKDSlocyQuEgi3gBbXYN8UkQiV77FTusLDPsovjo95YedHr+3yg==", "cpu": [ "riscv64" ], @@ -943,9 +943,9 @@ } }, "node_modules/@oxlint/binding-linux-riscv64-musl": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-riscv64-musl/-/binding-linux-riscv64-musl-1.56.0.tgz", - "integrity": "sha512-7VDOiL8cDG3DQ/CY3yKjbV1c4YPvc4vH8qW09Vv+5ukq3l/Kcyr6XGCd5NvxUmxqDb2vjMpM+eW/4JrEEsUetA==", + "version": "1.58.0", + "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-riscv64-musl/-/binding-linux-riscv64-musl-1.58.0.tgz", + "integrity": "sha512-CDze3pi1OO3Wvb/QsXjmLEY4XPKGM6kIo82ssNOgmcl1IdndF9VSGAE38YLhADWmOac7fjqhBw82LozuUVxD0Q==", "cpu": [ "riscv64" ], @@ -960,9 +960,9 @@ } }, "node_modules/@oxlint/binding-linux-s390x-gnu": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-s390x-gnu/-/binding-linux-s390x-gnu-1.56.0.tgz", 
- "integrity": "sha512-JGRpX0M+ikD3WpwJ7vKcHKV6Kg0dT52BW2Eu2BupXotYeqGXBrbY+QPkAyKO6MNgKozyTNaRh3r7g+VWgyAQYQ==", + "version": "1.58.0", + "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-s390x-gnu/-/binding-linux-s390x-gnu-1.58.0.tgz", + "integrity": "sha512-b/89glbxFaEAcA6Uf1FvCNecBJEgcUTsV1quzrqXM/o4R1M4u+2KCVuyGCayN2UpsRWtGGLb+Ver0tBBpxaPog==", "cpu": [ "s390x" ], @@ -977,9 +977,9 @@ } }, "node_modules/@oxlint/binding-linux-x64-gnu": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-x64-gnu/-/binding-linux-x64-gnu-1.56.0.tgz", - "integrity": "sha512-dNaICPvtmuxFP/VbqdofrLqdS3bM/AKJN3LMJD52si44ea7Be1cBk6NpfIahaysG9Uo+L98QKddU9CD5L8UHnQ==", + "version": "1.58.0", + "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-x64-gnu/-/binding-linux-x64-gnu-1.58.0.tgz", + "integrity": "sha512-0/yYpkq9VJFCEcuRlrViGj8pJUFFvNS4EkEREaN7CB1EcLXJIaVSSa5eCihwBGXtOZxhnblWgxks9juRdNQI7w==", "cpu": [ "x64" ], @@ -994,9 +994,9 @@ } }, "node_modules/@oxlint/binding-linux-x64-musl": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-x64-musl/-/binding-linux-x64-musl-1.56.0.tgz", - "integrity": "sha512-pF1vOtM+GuXmbklM1hV8WMsn6tCNPvkUzklj/Ej98JhlanbmA2RB1BILgOpwSuCTRTIYx2MXssmEyQQ90QF5aA==", + "version": "1.58.0", + "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-x64-musl/-/binding-linux-x64-musl-1.58.0.tgz", + "integrity": "sha512-hr6FNvmcAXiH+JxSvaJ4SJ1HofkdqEElXICW9sm3/Rd5eC3t7kzvmLyRAB3NngKO2wzXRCAm4Z/mGWfrsS4X8w==", "cpu": [ "x64" ], @@ -1011,9 +1011,9 @@ } }, "node_modules/@oxlint/binding-openharmony-arm64": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/@oxlint/binding-openharmony-arm64/-/binding-openharmony-arm64-1.56.0.tgz", - "integrity": "sha512-bp8NQ4RE6fDIFLa4bdBiOA+TAvkNkg+rslR+AvvjlLTYXLy9/uKAYLQudaQouWihLD/hgkrXIKKzXi5IXOewwg==", + "version": "1.58.0", + "resolved": 
"https://registry.npmjs.org/@oxlint/binding-openharmony-arm64/-/binding-openharmony-arm64-1.58.0.tgz", + "integrity": "sha512-R+O368VXgRql1K6Xar+FEo7NEwfo13EibPMoTv3sesYQedRXd6m30Dh/7lZMxnrQVFfeo4EOfYIP4FpcgWQNHg==", "cpu": [ "arm64" ], @@ -1028,9 +1028,9 @@ } }, "node_modules/@oxlint/binding-win32-arm64-msvc": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/@oxlint/binding-win32-arm64-msvc/-/binding-win32-arm64-msvc-1.56.0.tgz", - "integrity": "sha512-PxT4OJDfMOQBzo3OlzFb9gkoSD+n8qSBxyVq2wQSZIHFQYGEqIRTo9M0ZStvZm5fdhMqaVYpOnJvH2hUMEDk/g==", + "version": "1.58.0", + "resolved": "https://registry.npmjs.org/@oxlint/binding-win32-arm64-msvc/-/binding-win32-arm64-msvc-1.58.0.tgz", + "integrity": "sha512-Q0FZiAY/3c4YRj4z3h9K1PgaByrifrfbBoODSeX7gy97UtB7pySPUQfC2B/GbxWU6k7CzQrRy5gME10PltLAFQ==", "cpu": [ "arm64" ], @@ -1045,9 +1045,9 @@ } }, "node_modules/@oxlint/binding-win32-ia32-msvc": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/@oxlint/binding-win32-ia32-msvc/-/binding-win32-ia32-msvc-1.56.0.tgz", - "integrity": "sha512-PTRy6sIEPqy2x8PTP1baBNReN/BNEFmde0L+mYeHmjXE1Vlcc9+I5nsqENsB2yAm5wLkzPoTNCMY/7AnabT4/A==", + "version": "1.58.0", + "resolved": "https://registry.npmjs.org/@oxlint/binding-win32-ia32-msvc/-/binding-win32-ia32-msvc-1.58.0.tgz", + "integrity": "sha512-Y8FKBABrSPp9H0QkRLHDHOSUgM/309a3IvOVgPcVxYcX70wxJrk608CuTg7w+C6vEd724X5wJoNkBcGYfH7nNQ==", "cpu": [ "ia32" ], @@ -1062,9 +1062,9 @@ } }, "node_modules/@oxlint/binding-win32-x64-msvc": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/@oxlint/binding-win32-x64-msvc/-/binding-win32-x64-msvc-1.56.0.tgz", - "integrity": "sha512-ZHa0clocjLmIDr+1LwoWtxRcoYniAvERotvwKUYKhH41NVfl0Y4LNbyQkwMZzwDvKklKGvGZ5+DAG58/Ik47tQ==", + "version": "1.58.0", + "resolved": "https://registry.npmjs.org/@oxlint/binding-win32-x64-msvc/-/binding-win32-x64-msvc-1.58.0.tgz", + "integrity": 
"sha512-bCn5rbiz5My+Bj7M09sDcnqW0QJyINRVxdZ65x1/Y2tGrMwherwK/lpk+HRQCKvXa8pcaQdF5KY5j54VGZLwNg==", "cpu": [ "x64" ], @@ -1865,13 +1865,13 @@ "license": "MIT" }, "node_modules/@types/node": { - "version": "24.12.0", - "resolved": "https://registry.npmjs.org/@types/node/-/node-24.12.0.tgz", - "integrity": "sha512-GYDxsZi3ChgmckRT9HPU0WEhKLP08ev/Yfcq2AstjrDASOYCSXeyjDsHg4v5t4jOj7cyDX3vmprafKlWIG9MXQ==", + "version": "25.5.2", + "resolved": "https://registry.npmjs.org/@types/node/-/node-25.5.2.tgz", + "integrity": "sha512-tO4ZIRKNC+MDWV4qKVZe3Ql/woTnmHDr5JD8UI5hn2pwBrHEwOEMZK7WlNb5RKB6EoJ02gwmQS9OrjuFnZYdpg==", "devOptional": true, "license": "MIT", "dependencies": { - "undici-types": "~7.16.0" + "undici-types": "~7.18.0" } }, "node_modules/@types/web-bluetooth": { @@ -1881,20 +1881,20 @@ "license": "MIT" }, "node_modules/@typescript-eslint/eslint-plugin": { - "version": "8.57.1", - "resolved": "https://registry.npmjs.org/@typescript-eslint/eslint-plugin/-/eslint-plugin-8.57.1.tgz", - "integrity": "sha512-Gn3aqnvNl4NGc6x3/Bqk1AOn0thyTU9bqDRhiRnUWezgvr2OnhYCWCgC8zXXRVqBsIL1pSDt7T9nJUe0oM0kDQ==", + "version": "8.58.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/eslint-plugin/-/eslint-plugin-8.58.1.tgz", + "integrity": "sha512-eSkwoemjo76bdXl2MYqtxg51HNwUSkWfODUOQ3PaTLZGh9uIWWFZIjyjaJnex7wXDu+TRx+ATsnSxdN9YWfRTQ==", "dev": true, "license": "MIT", "dependencies": { "@eslint-community/regexpp": "^4.12.2", - "@typescript-eslint/scope-manager": "8.57.1", - "@typescript-eslint/type-utils": "8.57.1", - "@typescript-eslint/utils": "8.57.1", - "@typescript-eslint/visitor-keys": "8.57.1", + "@typescript-eslint/scope-manager": "8.58.1", + "@typescript-eslint/type-utils": "8.58.1", + "@typescript-eslint/utils": "8.58.1", + "@typescript-eslint/visitor-keys": "8.58.1", "ignore": "^7.0.5", "natural-compare": "^1.4.0", - "ts-api-utils": "^2.4.0" + "ts-api-utils": "^2.5.0" }, "engines": { "node": "^18.18.0 || ^20.9.0 || >=21.1.0" @@ -1904,9 +1904,9 @@ "url": 
"https://opencollective.com/typescript-eslint" }, "peerDependencies": { - "@typescript-eslint/parser": "^8.57.1", + "@typescript-eslint/parser": "^8.58.1", "eslint": "^8.57.0 || ^9.0.0 || ^10.0.0", - "typescript": ">=4.8.4 <6.0.0" + "typescript": ">=4.8.4 <6.1.0" } }, "node_modules/@typescript-eslint/eslint-plugin/node_modules/ignore": { @@ -1920,16 +1920,16 @@ } }, "node_modules/@typescript-eslint/parser": { - "version": "8.57.1", - "resolved": "https://registry.npmjs.org/@typescript-eslint/parser/-/parser-8.57.1.tgz", - "integrity": "sha512-k4eNDan0EIMTT/dUKc/g+rsJ6wcHYhNPdY19VoX/EOtaAG8DLtKCykhrUnuHPYvinn5jhAPgD2Qw9hXBwrahsw==", + "version": "8.58.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/parser/-/parser-8.58.1.tgz", + "integrity": "sha512-gGkiNMPqerb2cJSVcruigx9eHBlLG14fSdPdqMoOcBfh+vvn4iCq2C8MzUB89PrxOXk0y3GZ1yIWb9aOzL93bw==", "dev": true, "license": "MIT", "dependencies": { - "@typescript-eslint/scope-manager": "8.57.1", - "@typescript-eslint/types": "8.57.1", - "@typescript-eslint/typescript-estree": "8.57.1", - "@typescript-eslint/visitor-keys": "8.57.1", + "@typescript-eslint/scope-manager": "8.58.1", + "@typescript-eslint/types": "8.58.1", + "@typescript-eslint/typescript-estree": "8.58.1", + "@typescript-eslint/visitor-keys": "8.58.1", "debug": "^4.4.3" }, "engines": { @@ -1941,18 +1941,18 @@ }, "peerDependencies": { "eslint": "^8.57.0 || ^9.0.0 || ^10.0.0", - "typescript": ">=4.8.4 <6.0.0" + "typescript": ">=4.8.4 <6.1.0" } }, "node_modules/@typescript-eslint/project-service": { - "version": "8.57.1", - "resolved": "https://registry.npmjs.org/@typescript-eslint/project-service/-/project-service-8.57.1.tgz", - "integrity": "sha512-vx1F37BRO1OftsYlmG9xay1TqnjNVlqALymwWVuYTdo18XuKxtBpCj1QlzNIEHlvlB27osvXFWptYiEWsVdYsg==", + "version": "8.58.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/project-service/-/project-service-8.58.1.tgz", + "integrity": 
"sha512-gfQ8fk6cxhtptek+/8ZIqw8YrRW5048Gug8Ts5IYcMLCw18iUgrZAEY/D7s4hkI0FxEfGakKuPK/XUMPzPxi5g==", "dev": true, "license": "MIT", "dependencies": { - "@typescript-eslint/tsconfig-utils": "^8.57.1", - "@typescript-eslint/types": "^8.57.1", + "@typescript-eslint/tsconfig-utils": "^8.58.1", + "@typescript-eslint/types": "^8.58.1", "debug": "^4.4.3" }, "engines": { @@ -1963,18 +1963,18 @@ "url": "https://opencollective.com/typescript-eslint" }, "peerDependencies": { - "typescript": ">=4.8.4 <6.0.0" + "typescript": ">=4.8.4 <6.1.0" } }, "node_modules/@typescript-eslint/scope-manager": { - "version": "8.57.1", - "resolved": "https://registry.npmjs.org/@typescript-eslint/scope-manager/-/scope-manager-8.57.1.tgz", - "integrity": "sha512-hs/QcpCwlwT2L5S+3fT6gp0PabyGk4Q0Rv2doJXA0435/OpnSR3VRgvrp8Xdoc3UAYSg9cyUjTeFXZEPg/3OKg==", + "version": "8.58.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/scope-manager/-/scope-manager-8.58.1.tgz", + "integrity": "sha512-TPYUEqJK6avLcEjumWsIuTpuYODTTDAtoMdt8ZZa93uWMTX13Nb8L5leSje1NluammvU+oI3QRr5lLXPgihX3w==", "dev": true, "license": "MIT", "dependencies": { - "@typescript-eslint/types": "8.57.1", - "@typescript-eslint/visitor-keys": "8.57.1" + "@typescript-eslint/types": "8.58.1", + "@typescript-eslint/visitor-keys": "8.58.1" }, "engines": { "node": "^18.18.0 || ^20.9.0 || >=21.1.0" @@ -1985,9 +1985,9 @@ } }, "node_modules/@typescript-eslint/tsconfig-utils": { - "version": "8.57.1", - "resolved": "https://registry.npmjs.org/@typescript-eslint/tsconfig-utils/-/tsconfig-utils-8.57.1.tgz", - "integrity": "sha512-0lgOZB8cl19fHO4eI46YUx2EceQqhgkPSuCGLlGi79L2jwYY1cxeYc1Nae8Aw1xjgW3PKVDLlr3YJ6Bxx8HkWg==", + "version": "8.58.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/tsconfig-utils/-/tsconfig-utils-8.58.1.tgz", + "integrity": "sha512-JAr2hOIct2Q+qk3G+8YFfqkqi7sC86uNryT+2i5HzMa2MPjw4qNFvtjnw1IiA1rP7QhNKVe21mSSLaSjwA1Olw==", "dev": true, "license": "MIT", "engines": { @@ -1998,21 +1998,21 @@ "url": 
"https://opencollective.com/typescript-eslint" }, "peerDependencies": { - "typescript": ">=4.8.4 <6.0.0" + "typescript": ">=4.8.4 <6.1.0" } }, "node_modules/@typescript-eslint/type-utils": { - "version": "8.57.1", - "resolved": "https://registry.npmjs.org/@typescript-eslint/type-utils/-/type-utils-8.57.1.tgz", - "integrity": "sha512-+Bwwm0ScukFdyoJsh2u6pp4S9ktegF98pYUU0hkphOOqdMB+1sNQhIz8y5E9+4pOioZijrkfNO/HUJVAFFfPKA==", + "version": "8.58.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/type-utils/-/type-utils-8.58.1.tgz", + "integrity": "sha512-HUFxvTJVroT+0rXVJC7eD5zol6ID+Sn5npVPWoFuHGg9Ncq5Q4EYstqR+UOqaNRFXi5TYkpXXkLhoCHe3G0+7w==", "dev": true, "license": "MIT", "dependencies": { - "@typescript-eslint/types": "8.57.1", - "@typescript-eslint/typescript-estree": "8.57.1", - "@typescript-eslint/utils": "8.57.1", + "@typescript-eslint/types": "8.58.1", + "@typescript-eslint/typescript-estree": "8.58.1", + "@typescript-eslint/utils": "8.58.1", "debug": "^4.4.3", - "ts-api-utils": "^2.4.0" + "ts-api-utils": "^2.5.0" }, "engines": { "node": "^18.18.0 || ^20.9.0 || >=21.1.0" @@ -2023,13 +2023,13 @@ }, "peerDependencies": { "eslint": "^8.57.0 || ^9.0.0 || ^10.0.0", - "typescript": ">=4.8.4 <6.0.0" + "typescript": ">=4.8.4 <6.1.0" } }, "node_modules/@typescript-eslint/types": { - "version": "8.57.1", - "resolved": "https://registry.npmjs.org/@typescript-eslint/types/-/types-8.57.1.tgz", - "integrity": "sha512-S29BOBPJSFUiblEl6RzPPjJt6w25A6XsBqRVDt53tA/tlL8q7ceQNZHTjPeONt/3S7KRI4quk+yP9jK2WjBiPQ==", + "version": "8.58.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/types/-/types-8.58.1.tgz", + "integrity": "sha512-io/dV5Aw5ezwzfPBBWLoT+5QfVtP8O7q4Kftjn5azJ88bYyp/ZMCsyW1lpKK46EXJcaYMZ1JtYj+s/7TdzmQMw==", "dev": true, "license": "MIT", "engines": { @@ -2041,21 +2041,21 @@ } }, "node_modules/@typescript-eslint/typescript-estree": { - "version": "8.57.1", - "resolved": 
"https://registry.npmjs.org/@typescript-eslint/typescript-estree/-/typescript-estree-8.57.1.tgz", - "integrity": "sha512-ybe2hS9G6pXpqGtPli9Gx9quNV0TWLOmh58ADlmZe9DguLq0tiAKVjirSbtM1szG6+QH6rVXyU6GTLQbWnMY+g==", + "version": "8.58.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/typescript-estree/-/typescript-estree-8.58.1.tgz", + "integrity": "sha512-w4w7WR7GHOjqqPnvAYbazq+Y5oS68b9CzasGtnd6jIeOIeKUzYzupGTB2T4LTPSv4d+WPeccbxuneTFHYgAAWg==", "dev": true, "license": "MIT", "dependencies": { - "@typescript-eslint/project-service": "8.57.1", - "@typescript-eslint/tsconfig-utils": "8.57.1", - "@typescript-eslint/types": "8.57.1", - "@typescript-eslint/visitor-keys": "8.57.1", + "@typescript-eslint/project-service": "8.58.1", + "@typescript-eslint/tsconfig-utils": "8.58.1", + "@typescript-eslint/types": "8.58.1", + "@typescript-eslint/visitor-keys": "8.58.1", "debug": "^4.4.3", "minimatch": "^10.2.2", "semver": "^7.7.3", "tinyglobby": "^0.2.15", - "ts-api-utils": "^2.4.0" + "ts-api-utils": "^2.5.0" }, "engines": { "node": "^18.18.0 || ^20.9.0 || >=21.1.0" @@ -2065,20 +2065,20 @@ "url": "https://opencollective.com/typescript-eslint" }, "peerDependencies": { - "typescript": ">=4.8.4 <6.0.0" + "typescript": ">=4.8.4 <6.1.0" } }, "node_modules/@typescript-eslint/utils": { - "version": "8.57.1", - "resolved": "https://registry.npmjs.org/@typescript-eslint/utils/-/utils-8.57.1.tgz", - "integrity": "sha512-XUNSJ/lEVFttPMMoDVA2r2bwrl8/oPx8cURtczkSEswY5T3AeLmCy+EKWQNdL4u0MmAHOjcWrqJp2cdvgjn8dQ==", + "version": "8.58.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/utils/-/utils-8.58.1.tgz", + "integrity": "sha512-Ln8R0tmWC7pTtLOzgJzYTXSCjJ9rDNHAqTaVONF4FEi2qwce8mD9iSOxOpLFFvWp/wBFlew0mjM1L1ihYWfBdQ==", "dev": true, "license": "MIT", "dependencies": { "@eslint-community/eslint-utils": "^4.9.1", - "@typescript-eslint/scope-manager": "8.57.1", - "@typescript-eslint/types": "8.57.1", - "@typescript-eslint/typescript-estree": "8.57.1" + 
"@typescript-eslint/scope-manager": "8.58.1", + "@typescript-eslint/types": "8.58.1", + "@typescript-eslint/typescript-estree": "8.58.1" }, "engines": { "node": "^18.18.0 || ^20.9.0 || >=21.1.0" @@ -2089,17 +2089,17 @@ }, "peerDependencies": { "eslint": "^8.57.0 || ^9.0.0 || ^10.0.0", - "typescript": ">=4.8.4 <6.0.0" + "typescript": ">=4.8.4 <6.1.0" } }, "node_modules/@typescript-eslint/visitor-keys": { - "version": "8.57.1", - "resolved": "https://registry.npmjs.org/@typescript-eslint/visitor-keys/-/visitor-keys-8.57.1.tgz", - "integrity": "sha512-YWnmJkXbofiz9KbnbbwuA2rpGkFPLbAIetcCNO6mJ8gdhdZ/v7WDXsoGFAJuM6ikUFKTlSQnjWnVO4ux+UzS6A==", + "version": "8.58.1", + "resolved": "https://registry.npmjs.org/@typescript-eslint/visitor-keys/-/visitor-keys-8.58.1.tgz", + "integrity": "sha512-y+vH7QE8ycjoa0bWciFg7OpFcipUuem1ujhrdLtq1gByKwfbC7bPeKsiny9e0urg93DqwGcHey+bGRKCnF1nZQ==", "dev": true, "license": "MIT", "dependencies": { - "@typescript-eslint/types": "8.57.1", + "@typescript-eslint/types": "8.58.1", "eslint-visitor-keys": "^5.0.0" }, "engines": { @@ -2351,39 +2351,39 @@ } }, "node_modules/@vue/compiler-core": { - "version": "3.5.30", - "resolved": "https://registry.npmjs.org/@vue/compiler-core/-/compiler-core-3.5.30.tgz", - "integrity": "sha512-s3DfdZkcu/qExZ+td75015ljzHc6vE+30cFMGRPROYjqkroYI5NV2X1yAMX9UeyBNWB9MxCfPcsjpLS11nzkkw==", + "version": "3.5.32", + "resolved": "https://registry.npmjs.org/@vue/compiler-core/-/compiler-core-3.5.32.tgz", + "integrity": "sha512-4x74Tbtqnda8s/NSD6e1Dr5p1c8HdMU5RWSjMSUzb8RTcUQqevDCxVAitcLBKT+ie3o0Dl9crc/S/opJM7qBGQ==", "license": "MIT", "dependencies": { - "@babel/parser": "^7.29.0", - "@vue/shared": "3.5.30", + "@babel/parser": "^7.29.2", + "@vue/shared": "3.5.32", "entities": "^7.0.1", "estree-walker": "^2.0.2", "source-map-js": "^1.2.1" } }, "node_modules/@vue/compiler-dom": { - "version": "3.5.30", - "resolved": "https://registry.npmjs.org/@vue/compiler-dom/-/compiler-dom-3.5.30.tgz", - "integrity": 
"sha512-eCFYESUEVYHhiMuK4SQTldO3RYxyMR/UQL4KdGD1Yrkfdx4m/HYuZ9jSfPdA+nWJY34VWndiYdW/wZXyiPEB9g==", + "version": "3.5.32", + "resolved": "https://registry.npmjs.org/@vue/compiler-dom/-/compiler-dom-3.5.32.tgz", + "integrity": "sha512-ybHAu70NtiEI1fvAUz3oXZqkUYEe5J98GjMDpTGl5iHb0T15wQYLR4wE3h9xfuTNA+Cm2f4czfe8B4s+CCH57Q==", "license": "MIT", "dependencies": { - "@vue/compiler-core": "3.5.30", - "@vue/shared": "3.5.30" + "@vue/compiler-core": "3.5.32", + "@vue/shared": "3.5.32" } }, "node_modules/@vue/compiler-sfc": { - "version": "3.5.30", - "resolved": "https://registry.npmjs.org/@vue/compiler-sfc/-/compiler-sfc-3.5.30.tgz", - "integrity": "sha512-LqmFPDn89dtU9vI3wHJnwaV6GfTRD87AjWpTWpyrdVOObVtjIuSeZr181z5C4PmVx/V3j2p+0f7edFKGRMpQ5A==", + "version": "3.5.32", + "resolved": "https://registry.npmjs.org/@vue/compiler-sfc/-/compiler-sfc-3.5.32.tgz", + "integrity": "sha512-8UYUYo71cP/0YHMO814TRZlPuUUw3oifHuMR7Wp9SNoRSrxRQnhMLNlCeaODNn6kNTJsjFoQ/kqIj4qGvya4Xg==", "license": "MIT", "dependencies": { - "@babel/parser": "^7.29.0", - "@vue/compiler-core": "3.5.30", - "@vue/compiler-dom": "3.5.30", - "@vue/compiler-ssr": "3.5.30", - "@vue/shared": "3.5.30", + "@babel/parser": "^7.29.2", + "@vue/compiler-core": "3.5.32", + "@vue/compiler-dom": "3.5.32", + "@vue/compiler-ssr": "3.5.32", + "@vue/shared": "3.5.32", "estree-walker": "^2.0.2", "magic-string": "^0.30.21", "postcss": "^8.5.8", @@ -2391,13 +2391,13 @@ } }, "node_modules/@vue/compiler-ssr": { - "version": "3.5.30", - "resolved": "https://registry.npmjs.org/@vue/compiler-ssr/-/compiler-ssr-3.5.30.tgz", - "integrity": "sha512-NsYK6OMTnx109PSL2IAyf62JP6EUdk4Dmj6AkWcJGBvN0dQoMYtVekAmdqgTtWQgEJo+Okstbf/1p7qZr5H+bA==", + "version": "3.5.32", + "resolved": "https://registry.npmjs.org/@vue/compiler-ssr/-/compiler-ssr-3.5.32.tgz", + "integrity": "sha512-Gp4gTs22T3DgRotZ8aA/6m2jMR+GMztvBXUBEUOYOcST+giyGWJ4WvFd7QLHBkzTxkfOt8IELKNdpzITLbA2rw==", "license": "MIT", "dependencies": { - "@vue/compiler-dom": "3.5.30", - "@vue/shared": 
"3.5.30" + "@vue/compiler-dom": "3.5.32", + "@vue/shared": "3.5.32" } }, "node_modules/@vue/devtools-api": { @@ -2489,53 +2489,53 @@ } }, "node_modules/@vue/reactivity": { - "version": "3.5.30", - "resolved": "https://registry.npmjs.org/@vue/reactivity/-/reactivity-3.5.30.tgz", - "integrity": "sha512-179YNgKATuwj9gB+66snskRDOitDiuOZqkYia7mHKJaidOMo/WJxHKF8DuGc4V4XbYTJANlfEKb0yxTQotnx4Q==", + "version": "3.5.32", + "resolved": "https://registry.npmjs.org/@vue/reactivity/-/reactivity-3.5.32.tgz", + "integrity": "sha512-/ORasxSGvZ6MN5gc+uE364SxFdJ0+WqVG0CENXaGW58TOCdrAW76WWaplDtECeS1qphvtBZtR+3/o1g1zL4xPQ==", "license": "MIT", "dependencies": { - "@vue/shared": "3.5.30" + "@vue/shared": "3.5.32" } }, "node_modules/@vue/runtime-core": { - "version": "3.5.30", - "resolved": "https://registry.npmjs.org/@vue/runtime-core/-/runtime-core-3.5.30.tgz", - "integrity": "sha512-e0Z+8PQsUTdwV8TtEsLzUM7SzC7lQwYKePydb7K2ZnmS6jjND+WJXkmmfh/swYzRyfP1EY3fpdesyYoymCzYfg==", + "version": "3.5.32", + "resolved": "https://registry.npmjs.org/@vue/runtime-core/-/runtime-core-3.5.32.tgz", + "integrity": "sha512-pDrXCejn4UpFDFmMd27AcJEbHaLemaE5o4pbb7sLk79SRIhc6/t34BQA7SGNgYtbMnvbF/HHOftYBgFJtUoJUQ==", "license": "MIT", "dependencies": { - "@vue/reactivity": "3.5.30", - "@vue/shared": "3.5.30" + "@vue/reactivity": "3.5.32", + "@vue/shared": "3.5.32" } }, "node_modules/@vue/runtime-dom": { - "version": "3.5.30", - "resolved": "https://registry.npmjs.org/@vue/runtime-dom/-/runtime-dom-3.5.30.tgz", - "integrity": "sha512-2UIGakjU4WSQ0T4iwDEW0W7vQj6n7AFn7taqZ9Cvm0Q/RA2FFOziLESrDL4GmtI1wV3jXg5nMoJSYO66egDUBw==", + "version": "3.5.32", + "resolved": "https://registry.npmjs.org/@vue/runtime-dom/-/runtime-dom-3.5.32.tgz", + "integrity": "sha512-1CDVv7tv/IV13V8Nip1k/aaObVbWqRlVCVezTwx3K07p7Vxossp5JU1dcPNhJk3w347gonIUT9jQOGutyJrSVQ==", "license": "MIT", "dependencies": { - "@vue/reactivity": "3.5.30", - "@vue/runtime-core": "3.5.30", - "@vue/shared": "3.5.30", + "@vue/reactivity": "3.5.32", + 
"@vue/runtime-core": "3.5.32", + "@vue/shared": "3.5.32", "csstype": "^3.2.3" } }, "node_modules/@vue/server-renderer": { - "version": "3.5.30", - "resolved": "https://registry.npmjs.org/@vue/server-renderer/-/server-renderer-3.5.30.tgz", - "integrity": "sha512-v+R34icapydRwbZRD0sXwtHqrQJv38JuMB4JxbOxd8NEpGLny7cncMp53W9UH/zo4j8eDHjQ1dEJXwzFQknjtQ==", + "version": "3.5.32", + "resolved": "https://registry.npmjs.org/@vue/server-renderer/-/server-renderer-3.5.32.tgz", + "integrity": "sha512-IOjm2+JQwRFS7W28HNuJeXQle9KdZbODFY7hFGVtnnghF51ta20EWAZJHX+zLGtsHhaU6uC9BGPV52KVpYryMQ==", "license": "MIT", "dependencies": { - "@vue/compiler-ssr": "3.5.30", - "@vue/shared": "3.5.30" + "@vue/compiler-ssr": "3.5.32", + "@vue/shared": "3.5.32" }, "peerDependencies": { - "vue": "3.5.30" + "vue": "3.5.32" } }, "node_modules/@vue/shared": { - "version": "3.5.30", - "resolved": "https://registry.npmjs.org/@vue/shared/-/shared-3.5.30.tgz", - "integrity": "sha512-YXgQ7JjaO18NeK2K9VTbDHaFy62WrObMa6XERNfNOkAhD1F1oDSf3ZJ7K6GqabZ0BvSDHajp8qfS5Sa2I9n8uQ==", + "version": "3.5.32", + "resolved": "https://registry.npmjs.org/@vue/shared/-/shared-3.5.32.tgz", + "integrity": "sha512-ksNyrmRQzWJJ8n3cRDuSF7zNNontuJg1YHnmWRJd2AMu8Ij2bqwiiri2lH5rHtYPZjj4STkNcgcmiQqlOjiYGg==", "license": "MIT" }, "node_modules/@vue/test-utils": { @@ -2550,13 +2550,13 @@ } }, "node_modules/@vue/tsconfig": { - "version": "0.9.0", - "resolved": "https://registry.npmjs.org/@vue/tsconfig/-/tsconfig-0.9.0.tgz", - "integrity": "sha512-RP+v9Cpbsk1ZVXltCHHkYBr7+624x6gcijJXVjIcsYk7JXqvIpRtMwU2ARLvWDhmy9ffdFYxhsfJnPztADBohQ==", + "version": "0.9.1", + "resolved": "https://registry.npmjs.org/@vue/tsconfig/-/tsconfig-0.9.1.tgz", + "integrity": "sha512-buvjm+9NzLCJL29KY1j1991YYJ5e6275OiK+G4jtmfIb+z4POywbdm0wXusT9adVWqe0xqg70TbI7+mRx4uU9w==", "dev": true, "license": "MIT", "peerDependencies": { - "typescript": "5.x", + "typescript": ">= 5.8", "vue": "^3.4.0" }, "peerDependenciesMeta": { @@ -3263,13 +3263,16 @@ } }, 
"node_modules/eslint-plugin-oxlint": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/eslint-plugin-oxlint/-/eslint-plugin-oxlint-1.56.0.tgz", - "integrity": "sha512-s47/OjE4cfQ+CD4eA38g+5axvwuyswY5H6acCdVGIvowYuLVJ6zrR7N260XfVVLRuyjjPO9L77qNYwSbmRNyuw==", + "version": "1.58.0", + "resolved": "https://registry.npmjs.org/eslint-plugin-oxlint/-/eslint-plugin-oxlint-1.58.0.tgz", + "integrity": "sha512-L3aZSg0x2fL0dXyOgoK8A1QUbnfGzXt6bX4AFD7Scauw6zVUBOZrES5eRTzLLGgeVg0el5lvqHGl1WFAGo14DA==", "dev": true, "license": "MIT", "dependencies": { "jsonc-parser": "^3.3.1" + }, + "peerDependencies": { + "oxlint": "~1.58.0" } }, "node_modules/eslint-plugin-vue": { @@ -4702,9 +4705,9 @@ } }, "node_modules/oxlint": { - "version": "1.56.0", - "resolved": "https://registry.npmjs.org/oxlint/-/oxlint-1.56.0.tgz", - "integrity": "sha512-Q+5Mj5PVaH/R6/fhMMFzw4dT+KPB+kQW4kaL8FOIq7tfhlnEVp6+3lcWqFruuTNlUo9srZUW3qH7Id4pskeR6g==", + "version": "1.58.0", + "resolved": "https://registry.npmjs.org/oxlint/-/oxlint-1.58.0.tgz", + "integrity": "sha512-t4s9leczDMqlvOSjnbCQe7gtoLkWgBGZ7sBdCJ9EOj5IXFSG/X7OAzK4yuH4iW+4cAYe8kLFbC8tuYMwWZm+Cg==", "dev": true, "license": "MIT", "bin": { @@ -4717,28 +4720,28 @@ "url": "https://github.com/sponsors/Boshen" }, "optionalDependencies": { - "@oxlint/binding-android-arm-eabi": "1.56.0", - "@oxlint/binding-android-arm64": "1.56.0", - "@oxlint/binding-darwin-arm64": "1.56.0", - "@oxlint/binding-darwin-x64": "1.56.0", - "@oxlint/binding-freebsd-x64": "1.56.0", - "@oxlint/binding-linux-arm-gnueabihf": "1.56.0", - "@oxlint/binding-linux-arm-musleabihf": "1.56.0", - "@oxlint/binding-linux-arm64-gnu": "1.56.0", - "@oxlint/binding-linux-arm64-musl": "1.56.0", - "@oxlint/binding-linux-ppc64-gnu": "1.56.0", - "@oxlint/binding-linux-riscv64-gnu": "1.56.0", - "@oxlint/binding-linux-riscv64-musl": "1.56.0", - "@oxlint/binding-linux-s390x-gnu": "1.56.0", - "@oxlint/binding-linux-x64-gnu": "1.56.0", - "@oxlint/binding-linux-x64-musl": "1.56.0", - 
"@oxlint/binding-openharmony-arm64": "1.56.0", - "@oxlint/binding-win32-arm64-msvc": "1.56.0", - "@oxlint/binding-win32-ia32-msvc": "1.56.0", - "@oxlint/binding-win32-x64-msvc": "1.56.0" + "@oxlint/binding-android-arm-eabi": "1.58.0", + "@oxlint/binding-android-arm64": "1.58.0", + "@oxlint/binding-darwin-arm64": "1.58.0", + "@oxlint/binding-darwin-x64": "1.58.0", + "@oxlint/binding-freebsd-x64": "1.58.0", + "@oxlint/binding-linux-arm-gnueabihf": "1.58.0", + "@oxlint/binding-linux-arm-musleabihf": "1.58.0", + "@oxlint/binding-linux-arm64-gnu": "1.58.0", + "@oxlint/binding-linux-arm64-musl": "1.58.0", + "@oxlint/binding-linux-ppc64-gnu": "1.58.0", + "@oxlint/binding-linux-riscv64-gnu": "1.58.0", + "@oxlint/binding-linux-riscv64-musl": "1.58.0", + "@oxlint/binding-linux-s390x-gnu": "1.58.0", + "@oxlint/binding-linux-x64-gnu": "1.58.0", + "@oxlint/binding-linux-x64-musl": "1.58.0", + "@oxlint/binding-openharmony-arm64": "1.58.0", + "@oxlint/binding-win32-arm64-msvc": "1.58.0", + "@oxlint/binding-win32-ia32-msvc": "1.58.0", + "@oxlint/binding-win32-x64-msvc": "1.58.0" }, "peerDependencies": { - "oxlint-tsgolint": ">=0.15.0" + "oxlint-tsgolint": ">=0.18.0" }, "peerDependenciesMeta": { "oxlint-tsgolint": { @@ -5109,9 +5112,9 @@ } }, "node_modules/reka-ui": { - "version": "2.9.2", - "resolved": "https://registry.npmjs.org/reka-ui/-/reka-ui-2.9.2.tgz", - "integrity": "sha512-/t4e6y1hcG+uDuRfpg6tbMz3uUEvRzNco6NeYTufoJeUghy5Iosxos5YL/p+ieAsid84sdMX9OrgDqpEuCJhBw==", + "version": "2.9.5", + "resolved": "https://registry.npmjs.org/reka-ui/-/reka-ui-2.9.5.tgz", + "integrity": "sha512-6cZGIMgEeslpFLJ7IihaCSMPp1cJgl2eDkZ2vBMdl+HPUVBaV/iDPMWu3abT2KUkj1lir+oyHq5KelOTT9OheQ==", "license": "MIT", "dependencies": { "@floating-ui/dom": "^1.6.13", @@ -5122,7 +5125,7 @@ "@vueuse/core": "^14.1.0", "@vueuse/shared": "^14.1.0", "aria-hidden": "^1.2.4", - "defu": "^6.1.4", + "defu": "^6.1.5", "ohash": "^2.0.11" }, "funding": { @@ -5640,9 +5643,9 @@ } }, "node_modules/ts-api-utils": { - 
"version": "2.4.0", - "resolved": "https://registry.npmjs.org/ts-api-utils/-/ts-api-utils-2.4.0.tgz", - "integrity": "sha512-3TaVTaAv2gTiMB35i3FiGJaRfwb3Pyn/j3m/bfAvGe8FB7CF6u+LMYqYlDh7reQf7UNvoTvdfAqHGmPGOSsPmA==", + "version": "2.5.0", + "resolved": "https://registry.npmjs.org/ts-api-utils/-/ts-api-utils-2.5.0.tgz", + "integrity": "sha512-OJ/ibxhPlqrMM0UiNHJ/0CKQkoKF243/AEmplt3qpRgkW8VG7IfOS41h7V8TjITqdByHzrjcS/2si+y4lIh8NA==", "dev": true, "license": "MIT", "engines": { @@ -5709,16 +5712,16 @@ } }, "node_modules/typescript-eslint": { - "version": "8.57.1", - "resolved": "https://registry.npmjs.org/typescript-eslint/-/typescript-eslint-8.57.1.tgz", - "integrity": "sha512-fLvZWf+cAGw3tqMCYzGIU6yR8K+Y9NT2z23RwOjlNFF2HwSB3KhdEFI5lSBv8tNmFkkBShSjsCjzx1vahZfISA==", + "version": "8.58.1", + "resolved": "https://registry.npmjs.org/typescript-eslint/-/typescript-eslint-8.58.1.tgz", + "integrity": "sha512-gf6/oHChByg9HJvhMO1iBexJh12AqqTfnuxscMDOVqfJW3htsdRJI/GfPpHTTcyeB8cSTUY2JcZmVgoyPqcrDg==", "dev": true, "license": "MIT", "dependencies": { - "@typescript-eslint/eslint-plugin": "8.57.1", - "@typescript-eslint/parser": "8.57.1", - "@typescript-eslint/typescript-estree": "8.57.1", - "@typescript-eslint/utils": "8.57.1" + "@typescript-eslint/eslint-plugin": "8.58.1", + "@typescript-eslint/parser": "8.58.1", + "@typescript-eslint/typescript-estree": "8.58.1", + "@typescript-eslint/utils": "8.58.1" }, "engines": { "node": "^18.18.0 || ^20.9.0 || >=21.1.0" @@ -5729,7 +5732,7 @@ }, "peerDependencies": { "eslint": "^8.57.0 || ^9.0.0 || ^10.0.0", - "typescript": ">=4.8.4 <6.0.0" + "typescript": ">=4.8.4 <6.1.0" } }, "node_modules/ufo": { @@ -5749,9 +5752,9 @@ } }, "node_modules/undici-types": { - "version": "7.16.0", - "resolved": "https://registry.npmjs.org/undici-types/-/undici-types-7.16.0.tgz", - "integrity": "sha512-Zz+aZWSj8LE6zoxD+xrjh4VfkIG8Ya6LvYkZqtUQGJPZjYl53ypCaUwWqo7eI0x66KBGeRo+mlBEkMSeSZ38Nw==", + "version": "7.18.2", + "resolved": 
"https://registry.npmjs.org/undici-types/-/undici-types-7.18.2.tgz", + "integrity": "sha512-AsuCzffGHJybSaRrmr5eHr81mwJU3kjw6M+uprWvCXiNeN9SOGwQ3Jn8jb8m3Z6izVgknn1R0FTCEAP2QrLY/w==", "devOptional": true, "license": "MIT" }, @@ -6025,16 +6028,16 @@ "license": "MIT" }, "node_modules/vue": { - "version": "3.5.30", - "resolved": "https://registry.npmjs.org/vue/-/vue-3.5.30.tgz", - "integrity": "sha512-hTHLc6VNZyzzEH/l7PFGjpcTvUgiaPK5mdLkbjrTeWSRcEfxFrv56g/XckIYlE9ckuobsdwqd5mk2g1sBkMewg==", + "version": "3.5.32", + "resolved": "https://registry.npmjs.org/vue/-/vue-3.5.32.tgz", + "integrity": "sha512-vM4z4Q9tTafVfMAK7IVzmxg34rSzTFMyIe0UUEijUCkn9+23lj0WRfA83dg7eQZIUlgOSGrkViIaCfqSAUXsMw==", "license": "MIT", "dependencies": { - "@vue/compiler-dom": "3.5.30", - "@vue/compiler-sfc": "3.5.30", - "@vue/runtime-dom": "3.5.30", - "@vue/server-renderer": "3.5.30", - "@vue/shared": "3.5.30" + "@vue/compiler-dom": "3.5.32", + "@vue/compiler-sfc": "3.5.32", + "@vue/runtime-dom": "3.5.32", + "@vue/server-renderer": "3.5.32", + "@vue/shared": "3.5.32" }, "peerDependencies": { "typescript": "*" diff --git a/web/package.json b/web/package.json index 33c826a4..436ed838 100644 --- a/web/package.json +++ b/web/package.json @@ -25,16 +25,16 @@ "lucide-vue-next": "^0.577.0", "openapi-fetch": "^0.17.0", "pinia": "^3.0.4", - "reka-ui": "^2.9.1", + "reka-ui": "^2.9.3", "tailwind-merge": "^3.5.0", "tailwindcss": "^4.2.1", - "vue": "^3.5.30", + "vue": "^3.5.32", "vue-router": "^5.0.4", "vue-sonner": "^2.0.9" }, "devDependencies": { "@tsconfig/node24": "^24.0.4", - "@types/node": "^24.11.0", + "@types/node": "^25.5.2", "@vitejs/plugin-vue": "^6.0.4", "@vitest/eslint-plugin": "^1.6.13", "@vue/eslint-config-typescript": "^14.7.0", @@ -42,13 +42,13 @@ "@vue/tsconfig": "^0.9.0", "eslint": "^10.1.0", "eslint-config-prettier": "^10.1.8", - "eslint-plugin-oxlint": "~1.56.0", + "eslint-plugin-oxlint": "~1.58.0", "eslint-plugin-vue": "~10.8.0", "jiti": "^2.6.1", "jsdom": "^29.0.0", "npm-run-all2": 
"^8.0.4", "openapi-typescript": "^7.13.0", - "oxlint": "~1.56.0", + "oxlint": "~1.58.0", "prettier": "3.8.1", "tw-animate-css": "^1.4.0", "typescript": "~5.9.3", diff --git a/web/src/components/__tests__/AppSidebar.test.ts b/web/src/components/__tests__/AppSidebar.test.ts index 04696d2e..fdef46e2 100644 --- a/web/src/components/__tests__/AppSidebar.test.ts +++ b/web/src/components/__tests__/AppSidebar.test.ts @@ -8,8 +8,8 @@ import { useAuthStore } from '@/stores/auth' // Mock vue-router vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ path: '/cves' })), - useRouter: vi.fn(() => ({ push: vi.fn() })), + useRoute: vi.fn<() => unknown>(() => ({ path: '/cves' })), + useRouter: vi.fn<() => unknown>(() => ({ push: vi.fn<(...args: unknown[]) => unknown>() })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -20,8 +20,8 @@ vi.mock('vue-router', () => ({ // Mock API client (needed by auth store) vi.mock('@/lib/api/client', () => ({ default: { - GET: vi.fn(), - POST: vi.fn(), + GET: vi.fn<(...args: unknown[]) => unknown>(), + POST: vi.fn<(...args: unknown[]) => unknown>(), }, })) @@ -126,7 +126,7 @@ describe('OrgSwitcher', () => { user_id: 'u1', email: 'sam@example.com', display_name: 'Sam Carter', - is_site_admin: false, + is_site_admin: false, orgs: [ { org_id: 'org-1', name: 'Acme Corp', role: 'owner' }, { org_id: 'org-2', name: 'Globex Inc', role: 'member' }, diff --git a/web/src/components/cve/__tests__/CveResultsTable.test.ts b/web/src/components/cve/__tests__/CveResultsTable.test.ts index bc46c910..956aca4f 100644 --- a/web/src/components/cve/__tests__/CveResultsTable.test.ts +++ b/web/src/components/cve/__tests__/CveResultsTable.test.ts @@ -8,8 +8,8 @@ import type { components } from '@/lib/api/schema' type CVEItem = components['schemas']['CVEItem'] vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ query: {} })), - useRouter: vi.fn(() => ({ push: vi.fn() })), + useRoute: vi.fn<() => unknown>(() => ({ query: {} })), + useRouter: vi.fn<() => 
unknown>(() => ({ push: vi.fn<(...args: unknown[]) => unknown>() })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -20,7 +20,8 @@ vi.mock('vue-router', () => ({ function makeCVE(overrides: Partial<CVEItem> = {}): CVEItem { return { cve_id: 'CVE-2024-12345', - description_primary: 'A critical vulnerability in Apache Log4j allows remote code execution via crafted log messages.', + description_primary: + 'A critical vulnerability in Apache Log4j allows remote code execution via crafted log messages.', cvss_v3_score: 9.8, epss_score: 0.975, severity: 'critical', @@ -133,7 +134,7 @@ describe('CveResultsTable', () => { const wrapper = await mountTable({ items }) const cells = wrapper.findAll('td') - const epssCell = cells.find(c => c.text() === '\u2014') + const epssCell = cells.find((c) => c.text() === '\u2014') expect(epssCell).toBeDefined() }) }) diff --git a/web/src/components/cve/__tests__/CveSearchFilters.test.ts b/web/src/components/cve/__tests__/CveSearchFilters.test.ts index 9d2c4056..c41aa524 100644 --- a/web/src/components/cve/__tests__/CveSearchFilters.test.ts +++ b/web/src/components/cve/__tests__/CveSearchFilters.test.ts @@ -5,8 +5,8 @@ import { describe, it, expect, vi } from 'vitest' import { mount } from '@vue/test-utils' vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ query: {} })), - useRouter: vi.fn(() => ({ push: vi.fn() })), + useRoute: vi.fn<() => unknown>(() => ({ query: {} })), + useRouter: vi.fn<() => unknown>(() => ({ push: vi.fn<(...args: unknown[]) => unknown>() })), RouterLink: { name: 'RouterLink', props: ['to'], diff --git a/web/src/components/cve/__tests__/CveSourceComparison.test.ts b/web/src/components/cve/__tests__/CveSourceComparison.test.ts index 7898396b..ad3678ec 100644 --- a/web/src/components/cve/__tests__/CveSourceComparison.test.ts +++ b/web/src/components/cve/__tests__/CveSourceComparison.test.ts @@ -8,8 +8,8 @@ import type { components } from '@/lib/api/schema' type CVESourceResponse = 
components['schemas']['CVESourceResponse'] vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ query: {} })), - useRouter: vi.fn(() => ({ push: vi.fn() })), + useRoute: vi.fn<() => unknown>(() => ({ query: {} })), + useRouter: vi.fn<() => unknown>(() => ({ push: vi.fn<(...args: unknown[]) => unknown>() })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -34,9 +34,7 @@ function makeSource(overrides: Partial<CVESourceResponse> = {}): CVESourceRespon } async function mountComponent(props: Record<string, unknown> = {}) { - const { default: CveSourceComparison } = await import( - '@/components/cve/CveSourceComparison.vue' - ) + const { default: CveSourceComparison } = await import('@/components/cve/CveSourceComparison.vue') return mount(CveSourceComparison, { props: props as any }) } diff --git a/web/src/components/settings/__tests__/GroupDialog.test.ts b/web/src/components/settings/__tests__/GroupDialog.test.ts index 765a02c0..3c97ab64 100644 --- a/web/src/components/settings/__tests__/GroupDialog.test.ts +++ b/web/src/components/settings/__tests__/GroupDialog.test.ts @@ -8,8 +8,8 @@ import { useAuthStore } from '@/stores/auth' import type { GroupEntry } from '@/components/settings/GroupDialog.vue' vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ params: {} })), - useRouter: vi.fn(() => ({ push: vi.fn() })), + useRoute: vi.fn<() => unknown>(() => ({ params: {} })), + useRouter: vi.fn<() => unknown>(() => ({ push: vi.fn<(...args: unknown[]) => unknown>() })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -17,15 +17,15 @@ vi.mock('vue-router', () => ({ }, })) -const mockPOST = vi.fn() -const mockPATCH = vi.fn() +const mockPOST = vi.fn<(...args: unknown[]) => unknown>() +const mockPATCH = vi.fn<(...args: unknown[]) => unknown>() vi.mock('@/lib/api/client', () => ({ default: { - GET: vi.fn(), + GET: vi.fn<(...args: unknown[]) => unknown>(), POST: (...args: unknown[]) => mockPOST(...args), PATCH: (...args: unknown[]) => mockPATCH(...args), - DELETE: vi.fn(), + DELETE: vi.fn<(...args: 
unknown[]) => unknown>(), }, })) diff --git a/web/src/components/settings/__tests__/GroupMembersDialog.test.ts b/web/src/components/settings/__tests__/GroupMembersDialog.test.ts index 311f3902..223b6184 100644 --- a/web/src/components/settings/__tests__/GroupMembersDialog.test.ts +++ b/web/src/components/settings/__tests__/GroupMembersDialog.test.ts @@ -7,8 +7,8 @@ import { createPinia, setActivePinia } from 'pinia' import { useAuthStore } from '@/stores/auth' vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ params: {} })), - useRouter: vi.fn(() => ({ push: vi.fn() })), + useRoute: vi.fn<() => unknown>(() => ({ params: {} })), + useRouter: vi.fn<() => unknown>(() => ({ push: vi.fn<(...args: unknown[]) => unknown>() })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -16,15 +16,15 @@ vi.mock('vue-router', () => ({ }, })) -const mockGET = vi.fn() -const mockPOST = vi.fn() -const mockDELETE = vi.fn() +const mockGET = vi.fn<(...args: unknown[]) => unknown>() +const mockPOST = vi.fn<(...args: unknown[]) => unknown>() +const mockDELETE = vi.fn<(...args: unknown[]) => unknown>() vi.mock('@/lib/api/client', () => ({ default: { GET: (...args: unknown[]) => mockGET(...args), POST: (...args: unknown[]) => mockPOST(...args), - PATCH: vi.fn(), + PATCH: vi.fn<(...args: unknown[]) => unknown>(), DELETE: (...args: unknown[]) => mockDELETE(...args), }, })) @@ -93,9 +93,8 @@ function bodyText(): string { let wrapper: VueWrapper async function mountDialog(props: { open?: boolean; groupId?: string; groupName?: string } = {}) { - const { default: GroupMembersDialog } = await import( - '@/components/settings/GroupMembersDialog.vue' - ) + const { default: GroupMembersDialog } = + await import('@/components/settings/GroupMembersDialog.vue') wrapper = mount(GroupMembersDialog, { props: { open: true, @@ -110,7 +109,9 @@ async function mountDialog(props: { open?: boolean; groupId?: string; groupName? 
// Clean up portaled DOM elements function cleanupPortals() { - document.querySelectorAll('[data-reka-portal], [data-radix-popper-content-wrapper]').forEach((el) => el.remove()) + document + .querySelectorAll('[data-reka-portal], [data-radix-popper-content-wrapper]') + .forEach((el) => el.remove()) } describe('GroupMembersDialog', () => { @@ -250,9 +251,7 @@ describe('GroupMembersDialog', () => { describe('add member', () => { it('shows available org members not already in group', async () => { - mockGroupMembersSuccess([ - makeGroupMember({ user_id: 'u1', email: 'alice@example.com' }), - ]) + mockGroupMembersSuccess([makeGroupMember({ user_id: 'u1', email: 'alice@example.com' })]) mockOrgMembersSuccess([ makeOrgMember({ user_id: 'u1', email: 'alice@example.com' }), makeOrgMember({ user_id: 'u2', email: 'bob@example.com', display_name: 'Bob' }), diff --git a/web/src/components/settings/__tests__/InviteMemberDialog.test.ts b/web/src/components/settings/__tests__/InviteMemberDialog.test.ts index aa08f61d..52686e87 100644 --- a/web/src/components/settings/__tests__/InviteMemberDialog.test.ts +++ b/web/src/components/settings/__tests__/InviteMemberDialog.test.ts @@ -7,8 +7,8 @@ import { createPinia, setActivePinia } from 'pinia' import { useAuthStore } from '@/stores/auth' vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ params: {} })), - useRouter: vi.fn(() => ({ push: vi.fn() })), + useRoute: vi.fn<() => unknown>(() => ({ params: {} })), + useRouter: vi.fn<() => unknown>(() => ({ push: vi.fn<(...args: unknown[]) => unknown>() })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -16,14 +16,14 @@ vi.mock('vue-router', () => ({ }, })) -const mockPOST = vi.fn() +const mockPOST = vi.fn<(...args: unknown[]) => unknown>() vi.mock('@/lib/api/client', () => ({ default: { - GET: vi.fn(), + GET: vi.fn<(...args: unknown[]) => unknown>(), POST: (...args: unknown[]) => mockPOST(...args), - PATCH: vi.fn(), - DELETE: vi.fn(), + PATCH: vi.fn<(...args: unknown[]) => 
unknown>(), + DELETE: vi.fn<(...args: unknown[]) => unknown>(), }, })) @@ -74,9 +74,8 @@ async function clickTestId(testId: string) { let wrapper: VueWrapper async function mountDialog(props: { open?: boolean; currentUserRole?: string } = {}) { - const { default: InviteMemberDialog } = await import( - '@/components/settings/InviteMemberDialog.vue' - ) + const { default: InviteMemberDialog } = + await import('@/components/settings/InviteMemberDialog.vue') wrapper = mount(InviteMemberDialog, { props: { open: true, currentUserRole: 'admin', ...props }, attachTo: document.body, diff --git a/web/src/components/watchlist/__tests__/AddItemDialog.test.ts b/web/src/components/watchlist/__tests__/AddItemDialog.test.ts index 0bbe1718..d1492428 100644 --- a/web/src/components/watchlist/__tests__/AddItemDialog.test.ts +++ b/web/src/components/watchlist/__tests__/AddItemDialog.test.ts @@ -7,8 +7,8 @@ import { createPinia, setActivePinia } from 'pinia' import { useAuthStore } from '@/stores/auth' vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ params: {} })), - useRouter: vi.fn(() => ({ push: vi.fn() })), + useRoute: vi.fn<() => unknown>(() => ({ params: {} })), + useRouter: vi.fn<() => unknown>(() => ({ push: vi.fn<(...args: unknown[]) => unknown>() })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -16,14 +16,14 @@ vi.mock('vue-router', () => ({ }, })) -const mockPOST = vi.fn() +const mockPOST = vi.fn<(...args: unknown[]) => unknown>() vi.mock('@/lib/api/client', () => ({ default: { - GET: vi.fn(), + GET: vi.fn<(...args: unknown[]) => unknown>(), POST: (...args: unknown[]) => mockPOST(...args), - PATCH: vi.fn(), - DELETE: vi.fn(), + PATCH: vi.fn<(...args: unknown[]) => unknown>(), + DELETE: vi.fn<(...args: unknown[]) => unknown>(), }, })) @@ -112,9 +112,7 @@ async function clickTestId(testId: string) { let wrapper: VueWrapper async function mountDialog(open = true) { - const { default: AddItemDialog } = await import( - '@/components/watchlist/AddItemDialog.vue' 
- ) + const { default: AddItemDialog } = await import('@/components/watchlist/AddItemDialog.vue') wrapper = mount(AddItemDialog, { props: { open, watchlistId: TEST_WATCHLIST_ID }, attachTo: document.body, diff --git a/web/src/components/watchlist/__tests__/CreateWatchlistDialog.test.ts b/web/src/components/watchlist/__tests__/CreateWatchlistDialog.test.ts index e06ad824..c1c1820c 100644 --- a/web/src/components/watchlist/__tests__/CreateWatchlistDialog.test.ts +++ b/web/src/components/watchlist/__tests__/CreateWatchlistDialog.test.ts @@ -7,8 +7,8 @@ import { createPinia, setActivePinia } from 'pinia' import { useAuthStore } from '@/stores/auth' vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ params: {} })), - useRouter: vi.fn(() => ({ push: vi.fn() })), + useRoute: vi.fn<() => unknown>(() => ({ params: {} })), + useRouter: vi.fn<() => unknown>(() => ({ push: vi.fn<(...args: unknown[]) => unknown>() })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -16,14 +16,14 @@ vi.mock('vue-router', () => ({ }, })) -const mockPOST = vi.fn() +const mockPOST = vi.fn<(...args: unknown[]) => unknown>() vi.mock('@/lib/api/client', () => ({ default: { - GET: vi.fn(), + GET: vi.fn<(...args: unknown[]) => unknown>(), POST: (...args: unknown[]) => mockPOST(...args), - PATCH: vi.fn(), - DELETE: vi.fn(), + PATCH: vi.fn<(...args: unknown[]) => unknown>(), + DELETE: vi.fn<(...args: unknown[]) => unknown>(), }, })) @@ -91,9 +91,8 @@ async function clickTestId(testId: string) { let wrapper: VueWrapper async function mountDialog(open = true) { - const { default: CreateWatchlistDialog } = await import( - '@/components/watchlist/CreateWatchlistDialog.vue' - ) + const { default: CreateWatchlistDialog } = + await import('@/components/watchlist/CreateWatchlistDialog.vue') wrapper = mount(CreateWatchlistDialog, { props: { open }, attachTo: document.body, @@ -166,7 +165,7 @@ describe('CreateWatchlistDialog', () => { ) // Verify the body includes name and description - const callArgs = 
mockPOST.mock.calls[0]! + const callArgs = mockPOST.mock.calls[0] as [string, { body: Record<string, unknown> }] expect(callArgs[1].body.name).toBe('Test WL') expect(callArgs[1].body.description).toBe('Desc') }) @@ -183,7 +182,7 @@ describe('CreateWatchlistDialog', () => { await clickTestId('create-watchlist-btn') await flushPromises() - const callArgs = mockPOST.mock.calls[0]! + const callArgs = mockPOST.mock.calls[0] as [string, { body: Record<string, unknown> }] expect(callArgs[1].body.name).toBe('Name Only') expect(callArgs[1].body.description).toBeNull() }) diff --git a/web/src/lib/api/__tests__/client.test.ts b/web/src/lib/api/__tests__/client.test.ts index 289ad5b0..744ab851 100644 --- a/web/src/lib/api/__tests__/client.test.ts +++ b/web/src/lib/api/__tests__/client.test.ts @@ -119,7 +119,7 @@ describe('refresh middleware', () => { it('does not attempt refresh for auth endpoints', async () => { const { refreshMiddleware } = await import('../client') - const fetchMock = vi.fn() + const fetchMock = vi.fn<typeof fetch>() globalThis.fetch = fetchMock const loginRequest = new Request('http://localhost/api/v1/auth/login') @@ -136,7 +136,7 @@ describe('refresh middleware', () => { it('does not attempt refresh for auth/me endpoint', async () => { const { refreshMiddleware } = await import('../client') - const fetchMock = vi.fn() + const fetchMock = vi.fn<typeof fetch>() globalThis.fetch = fetchMock const meRequest = new Request('http://localhost/api/v1/auth/me') @@ -153,7 +153,7 @@ describe('refresh middleware', () => { it('does not attempt refresh for the refresh endpoint itself', async () => { const { refreshMiddleware } = await import('../client') - const fetchMock = vi.fn() + const fetchMock = vi.fn<typeof fetch>() globalThis.fetch = fetchMock const refreshRequest = new Request('http://localhost/api/v1/auth/refresh') @@ -172,7 +172,7 @@ describe('refresh middleware', () => { const { refreshMiddleware } = await import('../client') const retryResponse = new Response('ok', { status: 200 }) - const fetchMock = vi.fn() + const fetchMock = 
vi.fn<typeof fetch>() // First call: refresh succeeds. fetchMock.mockResolvedValueOnce(new Response('', { status: 200 })) // Second call: retry the original request. @@ -198,7 +198,7 @@ describe('refresh middleware', () => { it('returns original 401 response when refresh fails', async () => { const { refreshMiddleware } = await import('../client') - const fetchMock = vi.fn() + const fetchMock = vi.fn<typeof fetch>() // Refresh returns 401 (failure). fetchMock.mockResolvedValueOnce(new Response('', { status: 401 })) globalThis.fetch = fetchMock diff --git a/web/src/router/__tests__/guards.test.ts b/web/src/router/__tests__/guards.test.ts index 08b57dc3..9909ef56 100644 --- a/web/src/router/__tests__/guards.test.ts +++ b/web/src/router/__tests__/guards.test.ts @@ -11,8 +11,8 @@ import { routes, authGuard, titleGuard } from '../index' // Mock the API client so fetchMe doesn't make real HTTP calls. vi.mock('@/lib/api/client', () => ({ default: { - GET: vi.fn(), - POST: vi.fn(), + GET: vi.fn<(...args: unknown[]) => unknown>(), + POST: vi.fn<(...args: unknown[]) => unknown>(), }, })) diff --git a/web/src/stores/__tests__/auth.test.ts b/web/src/stores/__tests__/auth.test.ts index aa0d23eb..4231a55f 100644 --- a/web/src/stores/__tests__/auth.test.ts +++ b/web/src/stores/__tests__/auth.test.ts @@ -8,8 +8,8 @@ import { useAuthStore } from '../auth' // Mock the API client vi.mock('@/lib/api/client', () => ({ default: { - GET: vi.fn(), - POST: vi.fn(), + GET: vi.fn<(...args: unknown[]) => unknown>(), + POST: vi.fn<(...args: unknown[]) => unknown>(), }, })) @@ -120,7 +120,11 @@ describe('auth store', () => { is_site_admin: false, orgs: [{ org_id: 'org-1', name: 'Org One', role: 'owner' }], } - vi.mocked(client.GET).mockResolvedValue({ data: meData, error: undefined, response: {} as Response }) + vi.mocked(client.GET).mockResolvedValue({ + data: meData, + error: undefined, + response: {} as Response, + }) const auth = useAuthStore() const result = await auth.fetchMe() @@ -131,7 +135,11 @@ describe('auth 
store', () => { }) it('returns false on API error', async () => { - vi.mocked(client.GET).mockResolvedValue({ data: undefined, error: { type: 'about:blank', detail: 'unauthorized' }, response: {} as Response }) + vi.mocked(client.GET).mockResolvedValue({ + data: undefined, + error: { type: 'about:blank', detail: 'unauthorized' }, + response: {} as Response, + }) const auth = useAuthStore() const result = await auth.fetchMe() @@ -148,7 +156,11 @@ describe('auth store', () => { is_site_admin: false, orgs: [{ org_id: 'only-org', name: 'Only Org', role: 'admin' }], } - vi.mocked(client.GET).mockResolvedValue({ data: meData, error: undefined, response: {} as Response }) + vi.mocked(client.GET).mockResolvedValue({ + data: meData, + error: undefined, + response: {} as Response, + }) const auth = useAuthStore() await auth.fetchMe() @@ -168,7 +180,11 @@ describe('auth store', () => { { org_id: 'org-2', name: 'Org Two', role: 'member' }, ], } - vi.mocked(client.GET).mockResolvedValue({ data: meData, error: undefined, response: {} as Response }) + vi.mocked(client.GET).mockResolvedValue({ + data: meData, + error: undefined, + response: {} as Response, + }) const auth = useAuthStore() await auth.fetchMe() @@ -185,7 +201,11 @@ describe('auth store', () => { is_site_admin: false, orgs: [{ org_id: 'current-org', name: 'Current', role: 'admin' }], } - vi.mocked(client.GET).mockResolvedValue({ data: meData, error: undefined, response: {} as Response }) + vi.mocked(client.GET).mockResolvedValue({ + data: meData, + error: undefined, + response: {} as Response, + }) const auth = useAuthStore() await auth.fetchMe() @@ -207,7 +227,11 @@ describe('auth store', () => { { org_id: 'org-2', name: 'Org Two', role: 'member' }, ], } - vi.mocked(client.GET).mockResolvedValue({ data: meData, error: undefined, response: {} as Response }) + vi.mocked(client.GET).mockResolvedValue({ + data: meData, + error: undefined, + response: {} as Response, + }) const auth = useAuthStore() await auth.fetchMe() 
@@ -230,7 +254,11 @@ describe('auth store', () => { is_site_admin: false, orgs: [], } - vi.mocked(client.GET).mockResolvedValue({ data: meData, error: undefined, response: {} as Response }) + vi.mocked(client.GET).mockResolvedValue({ + data: meData, + error: undefined, + response: {} as Response, + }) const auth = useAuthStore() await auth.fetchMe() @@ -239,7 +267,11 @@ describe('auth store', () => { }) it('is set to true after failed fetchMe', async () => { - vi.mocked(client.GET).mockResolvedValue({ data: undefined, error: { type: 'about:blank', detail: 'unauthorized' }, response: {} as Response }) + vi.mocked(client.GET).mockResolvedValue({ + data: undefined, + error: { type: 'about:blank', detail: 'unauthorized' }, + response: {} as Response, + }) const auth = useAuthStore() await auth.fetchMe() @@ -248,7 +280,11 @@ describe('auth store', () => { }) it('is reset to false on clearAuth', async () => { - vi.mocked(client.POST).mockResolvedValue({ data: undefined, error: undefined, response: {} as Response }) + vi.mocked(client.POST).mockResolvedValue({ + data: undefined, + error: undefined, + response: {} as Response, + }) const auth = useAuthStore() auth.sessionChecked = true @@ -268,8 +304,16 @@ describe('auth store', () => { is_site_admin: false, orgs: [{ org_id: 'org-1', name: 'Org One', role: 'admin' }], } - vi.mocked(client.POST).mockResolvedValue({ data: undefined, error: undefined, response: {} as Response }) - vi.mocked(client.GET).mockResolvedValue({ data: meData, error: undefined, response: {} as Response }) + vi.mocked(client.POST).mockResolvedValue({ + data: undefined, + error: undefined, + response: {} as Response, + }) + vi.mocked(client.GET).mockResolvedValue({ + data: meData, + error: undefined, + response: {} as Response, + }) const auth = useAuthStore() const result = await auth.login('test@example.com', 'password123') @@ -280,7 +324,11 @@ describe('auth store', () => { }) it('returns error on failed login', async () => { - 
vi.mocked(client.POST).mockResolvedValue({ data: undefined, error: { type: 'about:blank', detail: 'bad creds' }, response: {} as Response }) + vi.mocked(client.POST).mockResolvedValue({ + data: undefined, + error: { type: 'about:blank', detail: 'bad creds' }, + response: {} as Response, + }) const auth = useAuthStore() const result = await auth.login('bad@example.com', 'wrong') @@ -293,7 +341,11 @@ describe('auth store', () => { describe('logout', () => { it('calls logout endpoint and clears auth state', async () => { - vi.mocked(client.POST).mockResolvedValue({ data: undefined, error: undefined, response: {} as Response }) + vi.mocked(client.POST).mockResolvedValue({ + data: undefined, + error: undefined, + response: {} as Response, + }) const auth = useAuthStore() auth.user = { diff --git a/web/src/views/__tests__/CreateOrgView.test.ts b/web/src/views/__tests__/CreateOrgView.test.ts index 5d37fade..5b6b7717 100644 --- a/web/src/views/__tests__/CreateOrgView.test.ts +++ b/web/src/views/__tests__/CreateOrgView.test.ts @@ -6,11 +6,11 @@ import { mount, flushPromises } from '@vue/test-utils' import { createPinia, setActivePinia } from 'pinia' import { nextTick } from 'vue' -const mockPush = vi.fn() +const mockPush = vi.fn<(...args: unknown[]) => unknown>() vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ query: {} })), - useRouter: vi.fn(() => ({ push: mockPush })), + useRoute: vi.fn<() => { query: Record<string, unknown> }>(() => ({ query: {} })), + useRouter: vi.fn<() => { push: typeof mockPush }>(() => ({ push: mockPush })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -18,11 +18,11 @@ vi.mock('vue-router', () => ({ }, })) -const mockPOST = vi.fn() +const mockPOST = vi.fn<(...args: unknown[]) => unknown>() vi.mock('@/lib/api/client', () => ({ default: { - GET: vi.fn(), + GET: vi.fn<(...args: unknown[]) => unknown>(), POST: (...args: unknown[]) => mockPOST(...args), }, })) @@ -65,7 +65,10 @@ describe('CreateOrgView', () => { let resolvePost: (value: unknown) => void 
mockPOST.mockImplementation( - () => new Promise((resolve) => { resolvePost = resolve }), + () => + new Promise((resolve) => { + resolvePost = resolve + }), ) const wrapper = await mountCreateOrg() diff --git a/web/src/views/__tests__/CveDetailView.test.ts b/web/src/views/__tests__/CveDetailView.test.ts index 4db8745e..a4bfe559 100644 --- a/web/src/views/__tests__/CveDetailView.test.ts +++ b/web/src/views/__tests__/CveDetailView.test.ts @@ -12,13 +12,15 @@ type CVESourceResponse = components['schemas']['CVESourceResponse'] let mockRouteParams: Record<string, unknown> = {} vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ + useRoute: vi.fn<() => { params: Record<string, unknown>; query: Record<string, unknown> }>(() => ({ params: mockRouteParams, query: {}, })), - useRouter: vi.fn(() => ({ - push: vi.fn(), - back: vi.fn(), + useRouter: vi.fn< + () => { push: (...args: unknown[]) => unknown; back: (...args: unknown[]) => unknown } + >(() => ({ + push: vi.fn<(...args: unknown[]) => unknown>(), + back: vi.fn<(...args: unknown[]) => unknown>(), })), RouterLink: { name: 'RouterLink', @@ -27,12 +29,12 @@ vi.mock('vue-router', () => ({ }, })) -const mockGET = vi.fn() +const mockGET = vi.fn<(path: string, ...args: unknown[]) => unknown>() vi.mock('@/lib/api/client', () => ({ default: { - GET: (...args: unknown[]) => mockGET(...args), - POST: vi.fn(), + GET: (path: string, ...args: unknown[]) => mockGET(path, ...args), + POST: vi.fn<(...args: unknown[]) => unknown>(), }, })) @@ -200,7 +202,7 @@ describe('CveDetailView', () => { await flushPromises() const scoreCards = wrapper.findAll('[data-testid="score-card"]') - const cvssCard = scoreCards.find(c => c.text().includes('CVSS')) + const cvssCard = scoreCards.find((c) => c.text().includes('CVSS')) expect(cvssCard?.text()).toContain('N/A') }) @@ -210,7 +212,7 @@ describe('CveDetailView', () => { await flushPromises() const scoreCards = wrapper.findAll('[data-testid="score-card"]') - const epssCard = 
scoreCards.find((c) => c.text().includes('EPSS')) expect(epssCard?.text()).toContain('N/A') }) @@ -280,7 +282,7 @@ describe('CveDetailView', () => { await flushPromises() const links = wrapper.findAll('a[target="_blank"]') - const urls = links.map(l => l.attributes('href')) + const urls = links.map((l) => l.attributes('href')) expect(urls).toContain('https://nvd.nist.gov/vuln/detail/CVE-2024-12345') expect(urls).toContain('https://github.com/advisories/GHSA-xxxx-xxxx-xxxx') }) @@ -370,7 +372,9 @@ describe('CveDetailView', () => { // Set up a slow response (will become stale) let resolveStale: (v: unknown) => void - const stalePromise = new Promise((resolve) => { resolveStale = resolve }) + const stalePromise = new Promise((resolve) => { + resolveStale = resolve + }) mockGET.mockReturnValueOnce(stalePromise) // Trigger first refetch — increments fetchId diff --git a/web/src/views/__tests__/CveSearchView.test.ts b/web/src/views/__tests__/CveSearchView.test.ts index bd6e3a17..ecf22a24 100644 --- a/web/src/views/__tests__/CveSearchView.test.ts +++ b/web/src/views/__tests__/CveSearchView.test.ts @@ -8,13 +8,18 @@ import type { components } from '@/lib/api/schema' type CVEItem = components['schemas']['CVEItem'] -const mockPush = vi.fn() -const mockReplace = vi.fn() +const mockPush = vi.fn<(...args: unknown[]) => unknown>() +const mockReplace = vi.fn<(...args: unknown[]) => unknown>() let mockRouteQuery: Record<string, unknown> = {} vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ query: mockRouteQuery })), - useRouter: vi.fn(() => ({ push: mockPush, replace: mockReplace })), + useRoute: vi.fn<() => { query: Record<string, unknown> }>(() => ({ + query: mockRouteQuery, + })), + useRouter: vi.fn<() => { push: typeof mockPush; replace: typeof mockReplace }>(() => ({ + push: mockPush, + replace: mockReplace, + })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -22,12 +27,12 @@ vi.mock('vue-router', () => ({ }, })) -const mockGET = vi.fn() +const mockGET = vi.fn<(...args: unknown[]) => 
unknown>() vi.mock('@/lib/api/client', () => ({ default: { GET: (...args: unknown[]) => mockGET(...args), - POST: vi.fn(), + POST: vi.fn<(...args: unknown[]) => unknown>(), }, })) @@ -114,11 +119,14 @@ describe('CveSearchView', () => { await mountView() await flushPromises() - expect(mockGET).toHaveBeenCalledWith('/cves', expect.objectContaining({ - params: expect.objectContaining({ - query: expect.any(Object), + expect(mockGET).toHaveBeenCalledWith( + '/cves', + expect.objectContaining({ + params: expect.objectContaining({ + query: expect.any(Object), + }), }), - })) + ) }) it('displays fetched CVE results', async () => { @@ -139,13 +147,16 @@ describe('CveSearchView', () => { await mountView() await flushPromises() - expect(mockGET).toHaveBeenCalledWith('/cves', expect.objectContaining({ - params: { - query: expect.objectContaining({ - q: 'apache', - }), - }, - })) + expect(mockGET).toHaveBeenCalledWith( + '/cves', + expect.objectContaining({ + params: { + query: expect.objectContaining({ + q: 'apache', + }), + }, + }), + ) }) it('passes severity filter to API', async () => { @@ -154,13 +165,16 @@ describe('CveSearchView', () => { await mountView() await flushPromises() - expect(mockGET).toHaveBeenCalledWith('/cves', expect.objectContaining({ - params: { - query: expect.objectContaining({ - severity: ['critical'], - }), - }, - })) + expect(mockGET).toHaveBeenCalledWith( + '/cves', + expect.objectContaining({ + params: { + query: expect.objectContaining({ + severity: ['critical'], + }), + }, + }), + ) }) }) @@ -190,11 +204,13 @@ describe('CveSearchView', () => { await wrapper.find('form').trigger('submit') await flushPromises() - expect(mockReplace).toHaveBeenCalledWith(expect.objectContaining({ - query: expect.objectContaining({ - q: 'openssl', + expect(mockReplace).toHaveBeenCalledWith( + expect.objectContaining({ + query: expect.objectContaining({ + q: 'openssl', + }), }), - })) + ) }) }) @@ -240,13 +256,16 @@ describe('CveSearchView', () => { await 
wrapper.find('[data-testid="next-page"]').trigger('click') await flushPromises() - expect(mockGET).toHaveBeenCalledWith('/cves', expect.objectContaining({ - params: { - query: expect.objectContaining({ - cursor: 'cursor-page2', - }), - }, - })) + expect(mockGET).toHaveBeenCalledWith( + '/cves', + expect.objectContaining({ + params: { + query: expect.objectContaining({ + cursor: 'cursor-page2', + }), + }, + }), + ) expect(wrapper.text()).toContain('CVE-2024-0002') }) diff --git a/web/src/views/__tests__/FeedStatusView.test.ts b/web/src/views/__tests__/FeedStatusView.test.ts index ae6d8cd0..f959a355 100644 --- a/web/src/views/__tests__/FeedStatusView.test.ts +++ b/web/src/views/__tests__/FeedStatusView.test.ts @@ -5,8 +5,10 @@ import { describe, it, expect, beforeEach, vi, afterEach } from 'vitest' import { mount, flushPromises } from '@vue/test-utils' vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ path: '/admin/feeds' })), - useRouter: vi.fn(() => ({ push: vi.fn() })), + useRoute: vi.fn<() => { path: string }>(() => ({ path: '/admin/feeds' })), + useRouter: vi.fn<() => { push: (...args: unknown[]) => unknown }>(() => ({ + push: vi.fn<(...args: unknown[]) => unknown>(), + })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -14,8 +16,8 @@ vi.mock('vue-router', () => ({ }, })) -const mockGET = vi.fn() -const mockPOST = vi.fn() +const mockGET = vi.fn<(...args: unknown[]) => unknown>() +const mockPOST = vi.fn<(...args: unknown[]) => unknown>() vi.mock('@/lib/api/client', () => ({ default: { diff --git a/web/src/views/__tests__/ForgotPasswordView.test.ts b/web/src/views/__tests__/ForgotPasswordView.test.ts index efc2c661..efe29204 100644 --- a/web/src/views/__tests__/ForgotPasswordView.test.ts +++ b/web/src/views/__tests__/ForgotPasswordView.test.ts @@ -7,8 +7,10 @@ import { createPinia, setActivePinia } from 'pinia' import { nextTick } from 'vue' vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ query: {} })), - useRouter: vi.fn(() => ({ push: 
vi.fn() })), + useRoute: vi.fn<() => { query: Record<string, unknown> }>(() => ({ query: {} })), + useRouter: vi.fn<() => { push: (...args: unknown[]) => unknown }>(() => ({ + push: vi.fn<(...args: unknown[]) => unknown>(), + })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -18,14 +20,14 @@ vi.mock('vue-router', () => ({ vi.mock('@/lib/api/client', () => ({ default: { - GET: vi.fn(), - POST: vi.fn(), + GET: vi.fn<(...args: unknown[]) => unknown>(), + POST: vi.fn<(...args: unknown[]) => unknown>(), }, })) import { useAuthStore } from '@/stores/auth' -const mockFetch = vi.fn() +const mockFetch = vi.fn<(...args: unknown[]) => unknown>() vi.stubGlobal('fetch', mockFetch) async function mountForgotPassword() { @@ -96,7 +98,10 @@ describe('ForgotPasswordView', () => { it('shows success message even on failure (anti-enumeration)', async () => { const auth = useAuthStore() - vi.spyOn(auth, 'forgotPassword').mockResolvedValue({ success: false, error: 'something went wrong' }) + vi.spyOn(auth, 'forgotPassword').mockResolvedValue({ + success: false, + error: 'something went wrong', + }) const wrapper = await mountForgotPassword() @@ -125,7 +130,10 @@ describe('ForgotPasswordView', () => { const auth = useAuthStore() let resolveForgot: (value: { success: boolean }) => void vi.spyOn(auth, 'forgotPassword').mockImplementation( - () => new Promise((resolve) => { resolveForgot = resolve }), + () => + new Promise((resolve) => { + resolveForgot = resolve + }), ) const wrapper = await mountForgotPassword() diff --git a/web/src/views/__tests__/GroupsView.test.ts b/web/src/views/__tests__/GroupsView.test.ts index e201066f..2a94212e 100644 --- a/web/src/views/__tests__/GroupsView.test.ts +++ b/web/src/views/__tests__/GroupsView.test.ts @@ -6,11 +6,11 @@ import { mount, flushPromises, VueWrapper } from '@vue/test-utils' import { createPinia, setActivePinia } from 'pinia' import { useAuthStore } from '@/stores/auth' -const mockPush = vi.fn() +const mockPush = vi.fn<(...args: unknown[]) => 
unknown>() vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ params: {} })), - useRouter: vi.fn(() => ({ push: mockPush })), + useRoute: vi.fn<() => { params: Record<string, unknown> }>(() => ({ params: {} })), + useRouter: vi.fn<() => { push: typeof mockPush }>(() => ({ push: mockPush })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -18,14 +18,14 @@ vi.mock('vue-router', () => ({ }, })) -const mockGET = vi.fn() -const mockDELETE = vi.fn() +const mockGET = vi.fn<(...args: unknown[]) => unknown>() +const mockDELETE = vi.fn<(...args: unknown[]) => unknown>() vi.mock('@/lib/api/client', () => ({ default: { GET: (...args: unknown[]) => mockGET(...args), - POST: vi.fn(), - PATCH: vi.fn(), + POST: vi.fn<(...args: unknown[]) => unknown>(), + PATCH: vi.fn<(...args: unknown[]) => unknown>(), DELETE: (...args: unknown[]) => mockDELETE(...args), }, })) @@ -97,7 +97,9 @@ async function mountView() { // Clean up portaled DOM elements (reka-ui Select, AlertDialog, Dialog) function cleanupPortals() { - document.querySelectorAll('[data-reka-portal], [data-radix-popper-content-wrapper]').forEach((el) => el.remove()) + document + .querySelectorAll('[data-reka-portal], [data-radix-popper-content-wrapper]') + .forEach((el) => el.remove()) } describe('GroupsView', () => { diff --git a/web/src/views/__tests__/InvitationView.test.ts b/web/src/views/__tests__/InvitationView.test.ts index ee585084..39738d9a 100644 --- a/web/src/views/__tests__/InvitationView.test.ts +++ b/web/src/views/__tests__/InvitationView.test.ts @@ -7,11 +7,11 @@ import { mount, flushPromises, VueWrapper } from '@vue/test-utils' import { createPinia, setActivePinia } from 'pinia' import { useAuthStore } from '@/stores/auth' let mockRouteParams: Record<string, unknown> = { token: 'test-token-abc' } -const mockPush = vi.fn() +const mockPush = vi.fn<(...args: unknown[]) => unknown>() vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ params: mockRouteParams })), - useRouter: vi.fn(() => ({ push: mockPush })), + useRoute: vi.fn<() => { params: Record<string, unknown> }>(() => ({ params: mockRouteParams })), + 
useRouter: vi.fn<() => { push: typeof mockPush }>(() => ({ push: mockPush })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -19,8 +19,8 @@ vi.mock('vue-router', () => ({ }, })) -const mockGET = vi.fn() -const mockPOST = vi.fn() +const mockGET = vi.fn<(...args: unknown[]) => unknown>() +const mockPOST = vi.fn<(...args: unknown[]) => unknown>() vi.mock('@/lib/api/client', () => ({ default: { @@ -178,7 +178,7 @@ describe('InvitationView', () => { user_id: 'u1', email: 'sam@example.com', display_name: 'Sam Carter', - is_site_admin: false, + is_site_admin: false, orgs: [ { org_id: 'org-old', name: 'Old Org', role: 'admin' }, { org_id: 'org-new', name: 'Acme Corp', role: 'member' }, diff --git a/web/src/views/__tests__/LoginView.test.ts b/web/src/views/__tests__/LoginView.test.ts index 22b9057e..8de55d27 100644 --- a/web/src/views/__tests__/LoginView.test.ts +++ b/web/src/views/__tests__/LoginView.test.ts @@ -6,12 +6,12 @@ import { mount, flushPromises } from '@vue/test-utils' import { createPinia, setActivePinia } from 'pinia' import { nextTick } from 'vue' -const mockPush = vi.fn() +const mockPush = vi.fn<(...args: unknown[]) => unknown>() const mockRouteQuery = { redirect: undefined as string | undefined } vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ query: mockRouteQuery })), - useRouter: vi.fn(() => ({ push: mockPush })), + useRoute: vi.fn<() => { query: typeof mockRouteQuery }>(() => ({ query: mockRouteQuery })), + useRouter: vi.fn<() => { push: typeof mockPush }>(() => ({ push: mockPush })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -21,8 +21,8 @@ vi.mock('vue-router', () => ({ vi.mock('@/lib/api/client', () => ({ default: { - GET: vi.fn(), - POST: vi.fn(), + GET: vi.fn<(...args: unknown[]) => unknown>(), + POST: vi.fn<(...args: unknown[]) => unknown>(), }, })) @@ -271,7 +271,7 @@ describe('LoginView', () => { it('GitHub button redirects to OAuth endpoint', async () => { mockProvidersResponse(true, false) const originalLocation = 
window.location.href - const hrefSetter = vi.fn() + const hrefSetter = vi.fn<(v: string) => void>() Object.defineProperty(window, 'location', { value: { ...window.location, @@ -299,7 +299,7 @@ describe('LoginView', () => { it('Google button redirects to OAuth endpoint', async () => { mockProvidersResponse(false, true) const originalLocation = window.location.href - const hrefSetter = vi.fn() + const hrefSetter = vi.fn<(v: string) => void>() Object.defineProperty(window, 'location', { value: { ...window.location, diff --git a/web/src/views/__tests__/MembersView.test.ts b/web/src/views/__tests__/MembersView.test.ts index 5b7eaf22..266a53fd 100644 --- a/web/src/views/__tests__/MembersView.test.ts +++ b/web/src/views/__tests__/MembersView.test.ts @@ -6,11 +6,11 @@ import { mount, flushPromises, VueWrapper } from '@vue/test-utils' import { createPinia, setActivePinia } from 'pinia' import { useAuthStore } from '@/stores/auth' -const mockPush = vi.fn() +const mockPush = vi.fn<(...args: unknown[]) => unknown>() vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ params: {} })), - useRouter: vi.fn(() => ({ push: mockPush })), + useRoute: vi.fn<() => unknown>(() => ({ params: {} })), + useRouter: vi.fn<() => unknown>(() => ({ push: mockPush })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -18,14 +18,14 @@ vi.mock('vue-router', () => ({ }, })) -const mockGET = vi.fn() -const mockPATCH = vi.fn() -const mockDELETE = vi.fn() +const mockGET = vi.fn<(...args: unknown[]) => unknown>() +const mockPATCH = vi.fn<(...args: unknown[]) => unknown>() +const mockDELETE = vi.fn<(...args: unknown[]) => unknown>() vi.mock('@/lib/api/client', () => ({ default: { GET: (...args: unknown[]) => mockGET(...args), - POST: vi.fn(), + POST: vi.fn<(...args: unknown[]) => unknown>(), PATCH: (...args: unknown[]) => mockPATCH(...args), DELETE: (...args: unknown[]) => mockDELETE(...args), }, @@ -121,7 +121,9 @@ async function openRoleSelectAndGetOptions(): Promise { trigger.hasPointerCapture 
= () => false trigger.releasePointerCapture = () => {} } - trigger.dispatchEvent(new PointerEvent('pointerdown', { bubbles: true, cancelable: true, button: 0, pointerId: 1 })) + trigger.dispatchEvent( + new PointerEvent('pointerdown', { bubbles: true, cancelable: true, button: 0, pointerId: 1 }), + ) await flushPromises() const options = document.querySelectorAll('[role="option"]') return Array.from(options).map((el) => el.textContent?.trim() ?? '') @@ -139,7 +141,9 @@ async function mountView() { // Clean up portaled DOM elements (reka-ui Select, AlertDialog, Dialog) function cleanupPortals() { - document.querySelectorAll('[data-reka-portal], [data-radix-popper-content-wrapper]').forEach((el) => el.remove()) + document + .querySelectorAll('[data-reka-portal], [data-radix-popper-content-wrapper]') + .forEach((el) => el.remove()) } describe('MembersView', () => { @@ -193,8 +197,18 @@ describe('MembersView', () => { it('renders members table with data', async () => { setupAuthStore('admin') mockMembersSuccess([ - makeMember({ user_id: 'u1', email: 'alice@example.com', display_name: 'Alice', role: 'admin' }), - makeMember({ user_id: 'u2', email: 'bob@example.com', display_name: 'Bob', role: 'member' }), + makeMember({ + user_id: 'u1', + email: 'alice@example.com', + display_name: 'Alice', + role: 'admin', + }), + makeMember({ + user_id: 'u2', + email: 'bob@example.com', + display_name: 'Bob', + role: 'member', + }), ]) mockInvitationsSuccess([]) await mountView() @@ -322,9 +336,7 @@ describe('MembersView', () => { it('hides remove button on owner members', async () => { setupAuthStore('admin') - mockMembersSuccess([ - makeMember({ user_id: 'u1', role: 'owner', email: 'owner@example.com' }), - ]) + mockMembersSuccess([makeMember({ user_id: 'u1', role: 'owner', email: 'owner@example.com' })]) mockInvitationsSuccess([]) await mountView() await flushPromises() @@ -364,8 +376,18 @@ describe('MembersView', () => { it('calls DELETE on confirmation and removes from list', 
async () => { setupAuthStore('admin') mockMembersSuccess([ - makeMember({ user_id: 'u1', email: 'keep@example.com', display_name: 'Keep', role: 'member' }), - makeMember({ user_id: 'u2', email: 'remove@example.com', display_name: 'Remove', role: 'member' }), + makeMember({ + user_id: 'u1', + email: 'keep@example.com', + display_name: 'Keep', + role: 'member', + }), + makeMember({ + user_id: 'u2', + email: 'remove@example.com', + display_name: 'Remove', + role: 'member', + }), ]) mockInvitationsSuccess([]) await mountView() @@ -428,9 +450,7 @@ describe('MembersView', () => { it('shows role select for admin on non-owner members', async () => { setupAuthStore('admin') - mockMembersSuccess([ - makeMember({ user_id: 'u1', role: 'member' }), - ]) + mockMembersSuccess([makeMember({ user_id: 'u1', role: 'member' })]) mockInvitationsSuccess([]) await mountView() await flushPromises() @@ -441,9 +461,7 @@ describe('MembersView', () => { it('shows plain text role for owner members (not changeable)', async () => { setupAuthStore('admin') - mockMembersSuccess([ - makeMember({ user_id: 'u1', role: 'owner' }), - ]) + mockMembersSuccess([makeMember({ user_id: 'u1', role: 'owner' })]) mockInvitationsSuccess([]) await mountView() await flushPromises() @@ -458,9 +476,7 @@ describe('MembersView', () => { it('calls PATCH when role is changed', async () => { setupAuthStore('owner') - mockMembersSuccess([ - makeMember({ user_id: 'u1', role: 'member' }), - ]) + mockMembersSuccess([makeMember({ user_id: 'u1', role: 'member' })]) mockInvitationsSuccess([]) await mountView() await flushPromises() @@ -484,9 +500,7 @@ describe('MembersView', () => { it('shows plain text role badge for non-admin users', async () => { setupAuthStore('viewer') - mockMembersSuccess([ - makeMember({ user_id: 'u1', role: 'member' }), - ]) + mockMembersSuccess([makeMember({ user_id: 'u1', role: 'member' })]) await mountView() await flushPromises() @@ -552,9 +566,7 @@ describe('MembersView', () => { it('shows error 
when cancelling invitation fails', async () => { setupAuthStore('admin') mockMembersSuccess([makeMember()]) - mockInvitationsSuccess([ - makeInvitation({ id: 'inv-1', email: 'fail@example.com' }), - ]) + mockInvitationsSuccess([makeInvitation({ id: 'inv-1', email: 'fail@example.com' })]) await mountView() await flushPromises() @@ -577,9 +589,7 @@ describe('MembersView', () => { describe('role change error handling', () => { it('reverts role display and shows error when PATCH fails', async () => { setupAuthStore('owner') - mockMembersSuccess([ - makeMember({ user_id: 'u1', role: 'admin', email: 'admin@example.com' }), - ]) + mockMembersSuccess([makeMember({ user_id: 'u1', role: 'admin', email: 'admin@example.com' })]) mockInvitationsSuccess([]) await mountView() await flushPromises() diff --git a/web/src/views/__tests__/NotFoundView.test.ts b/web/src/views/__tests__/NotFoundView.test.ts index 86f4bdb5..f9822485 100644 --- a/web/src/views/__tests__/NotFoundView.test.ts +++ b/web/src/views/__tests__/NotFoundView.test.ts @@ -5,8 +5,8 @@ import { describe, it, expect, beforeEach, vi } from 'vitest' import { mount } from '@vue/test-utils' vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ path: '/nonexistent' })), - useRouter: vi.fn(() => ({ push: vi.fn() })), + useRoute: vi.fn<() => unknown>(() => ({ path: '/nonexistent' })), + useRouter: vi.fn<() => unknown>(() => ({ push: vi.fn<(...args: unknown[]) => unknown>() })), RouterLink: { name: 'RouterLink', props: ['to'], diff --git a/web/src/views/__tests__/RegisterView.test.ts b/web/src/views/__tests__/RegisterView.test.ts index 99352f27..7018452f 100644 --- a/web/src/views/__tests__/RegisterView.test.ts +++ b/web/src/views/__tests__/RegisterView.test.ts @@ -6,11 +6,11 @@ import { mount, flushPromises } from '@vue/test-utils' import { createPinia, setActivePinia } from 'pinia' import { nextTick } from 'vue' -const mockPush = vi.fn() +const mockPush = vi.fn<(...args: unknown[]) => unknown>() vi.mock('vue-router', () 
=> ({ - useRoute: vi.fn(() => ({ query: {} })), - useRouter: vi.fn(() => ({ push: mockPush })), + useRoute: vi.fn<() => { query: Record<string, unknown> }>(() => ({ query: {} })), + useRouter: vi.fn<() => { push: typeof mockPush }>(() => ({ push: mockPush })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -20,8 +20,8 @@ vi.mock('vue-router', () => ({ vi.mock('@/lib/api/client', () => ({ default: { - GET: vi.fn(), - POST: vi.fn(), + GET: vi.fn<(...args: unknown[]) => unknown>(), + POST: vi.fn<(...args: unknown[]) => unknown>(), }, })) diff --git a/web/src/views/__tests__/ResetPasswordView.test.ts b/web/src/views/__tests__/ResetPasswordView.test.ts index 226d388d..aea30908 100644 --- a/web/src/views/__tests__/ResetPasswordView.test.ts +++ b/web/src/views/__tests__/ResetPasswordView.test.ts @@ -6,12 +6,12 @@ import { mount, flushPromises } from '@vue/test-utils' import { createPinia, setActivePinia } from 'pinia' import { nextTick } from 'vue' -const mockPush = vi.fn() +const mockPush = vi.fn<(...args: unknown[]) => unknown>() const mockRouteQuery = { token: undefined as string | undefined } vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ query: mockRouteQuery })), - useRouter: vi.fn(() => ({ push: mockPush })), + useRoute: vi.fn<() => { query: typeof mockRouteQuery }>(() => ({ query: mockRouteQuery })), + useRouter: vi.fn<() => { push: typeof mockPush }>(() => ({ push: mockPush })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -21,14 +21,14 @@ vi.mock('vue-router', () => ({ vi.mock('@/lib/api/client', () => ({ default: { - GET: vi.fn(), - POST: vi.fn(), + GET: vi.fn<(...args: unknown[]) => unknown>(), + POST: vi.fn<(...args: unknown[]) => unknown>(), }, })) import { useAuthStore } from '@/stores/auth' -const mockFetch = vi.fn() +const mockFetch = vi.fn<(...args: unknown[]) => unknown>() vi.stubGlobal('fetch', mockFetch) async function mountResetPassword() { @@ -118,7 +118,10 @@ describe('ResetPasswordView', () => { await wrapper.find('form').trigger('submit') 
await flushPromises() - expect(auth.resetPassword).toHaveBeenCalledWith('valid-hex-token-abc123', 'new-password-1234567') + expect(auth.resetPassword).toHaveBeenCalledWith( + 'valid-hex-token-abc123', + 'new-password-1234567', + ) }) it('shows success message after successful reset', async () => { @@ -187,7 +190,10 @@ describe('ResetPasswordView', () => { const auth = useAuthStore() let resolveReset: (value: { success: boolean }) => void vi.spyOn(auth, 'resetPassword').mockImplementation( - () => new Promise((resolve) => { resolveReset = resolve }), + () => + new Promise((resolve) => { + resolveReset = resolve + }), ) const wrapper = await mountResetPassword() diff --git a/web/src/views/__tests__/VerifyEmailView.test.ts b/web/src/views/__tests__/VerifyEmailView.test.ts index ac0bbcfc..14ac4b31 100644 --- a/web/src/views/__tests__/VerifyEmailView.test.ts +++ b/web/src/views/__tests__/VerifyEmailView.test.ts @@ -8,8 +8,10 @@ import { createPinia, setActivePinia } from 'pinia' const mockRouteQuery = { token: undefined as string | undefined } vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ query: mockRouteQuery })), - useRouter: vi.fn(() => ({ push: vi.fn() })), + useRoute: vi.fn<() => { query: typeof mockRouteQuery }>(() => ({ query: mockRouteQuery })), + useRouter: vi.fn<() => { push: (...args: unknown[]) => unknown }>(() => ({ + push: vi.fn<(...args: unknown[]) => unknown>(), + })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -19,14 +21,14 @@ vi.mock('vue-router', () => ({ vi.mock('@/lib/api/client', () => ({ default: { - GET: vi.fn(), - POST: vi.fn(), + GET: vi.fn<(...args: unknown[]) => unknown>(), + POST: vi.fn<(...args: unknown[]) => unknown>(), }, })) import { useAuthStore } from '@/stores/auth' -const mockFetch = vi.fn() +const mockFetch = vi.fn<(...args: unknown[]) => unknown>() vi.stubGlobal('fetch', mockFetch) async function mountVerifyEmail() { @@ -109,7 +111,10 @@ describe('VerifyEmailView', () => { it('shows helpful expired link text on 
error', async () => { const auth = useAuthStore() - vi.spyOn(auth, 'verifyEmail').mockResolvedValue({ success: false, error: 'Verification failed' }) + vi.spyOn(auth, 'verifyEmail').mockResolvedValue({ + success: false, + error: 'Verification failed', + }) const wrapper = await mountVerifyEmail() await flushPromises() diff --git a/web/src/views/__tests__/WatchlistDetailView.test.ts b/web/src/views/__tests__/WatchlistDetailView.test.ts index f3a37ce8..458cc9b1 100644 --- a/web/src/views/__tests__/WatchlistDetailView.test.ts +++ b/web/src/views/__tests__/WatchlistDetailView.test.ts @@ -7,11 +7,11 @@ import { createPinia, setActivePinia } from 'pinia' import { useAuthStore } from '@/stores/auth' let mockRouteParams: Record = { id: 'wl-123' } -const mockPush = vi.fn() +const mockPush = vi.fn<(...args: unknown[]) => unknown>() vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ params: mockRouteParams })), - useRouter: vi.fn(() => ({ push: mockPush })), + useRoute: vi.fn<() => { params: Record }>(() => ({ params: mockRouteParams })), + useRouter: vi.fn<() => { push: typeof mockPush }>(() => ({ push: mockPush })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -19,14 +19,14 @@ vi.mock('vue-router', () => ({ }, })) -const mockGET = vi.fn() -const mockPATCH = vi.fn() -const mockDELETE = vi.fn() +const mockGET = vi.fn<(...args: unknown[]) => unknown>() +const mockPATCH = vi.fn<(...args: unknown[]) => unknown>() +const mockDELETE = vi.fn<(...args: unknown[]) => unknown>() vi.mock('@/lib/api/client', () => ({ default: { GET: (...args: unknown[]) => mockGET(...args), - POST: vi.fn(), + POST: vi.fn<(...args: unknown[]) => unknown>(), PATCH: (...args: unknown[]) => mockPATCH(...args), DELETE: (...args: unknown[]) => mockDELETE(...args), }, @@ -134,9 +134,7 @@ async function clickTestId(testId: string) { let wrapper: VueWrapper async function mountView() { - const { default: WatchlistDetailView } = await import( - '@/views/WatchlistDetailView.vue' - ) + const { 
default: WatchlistDetailView } = await import('@/views/WatchlistDetailView.vue') wrapper = mount(WatchlistDetailView, { attachTo: document.body, }) diff --git a/web/src/views/__tests__/WatchlistListView.test.ts b/web/src/views/__tests__/WatchlistListView.test.ts index a580dc5f..13892b7f 100644 --- a/web/src/views/__tests__/WatchlistListView.test.ts +++ b/web/src/views/__tests__/WatchlistListView.test.ts @@ -6,11 +6,11 @@ import { mount, flushPromises, VueWrapper } from '@vue/test-utils' import { createPinia, setActivePinia } from 'pinia' import { useAuthStore } from '@/stores/auth' -const mockPush = vi.fn() +const mockPush = vi.fn<(...args: unknown[]) => unknown>() vi.mock('vue-router', () => ({ - useRoute: vi.fn(() => ({ params: {} })), - useRouter: vi.fn(() => ({ push: mockPush })), + useRoute: vi.fn<() => { params: Record }>(() => ({ params: {} })), + useRouter: vi.fn<() => { push: typeof mockPush }>(() => ({ push: mockPush })), RouterLink: { name: 'RouterLink', props: ['to'], @@ -18,14 +18,14 @@ vi.mock('vue-router', () => ({ }, })) -const mockGET = vi.fn() -const mockDELETE = vi.fn() +const mockGET = vi.fn<(...args: unknown[]) => unknown>() +const mockDELETE = vi.fn<(...args: unknown[]) => unknown>() vi.mock('@/lib/api/client', () => ({ default: { GET: (...args: unknown[]) => mockGET(...args), - POST: vi.fn(), - PATCH: vi.fn(), + POST: vi.fn<(...args: unknown[]) => unknown>(), + PATCH: vi.fn<(...args: unknown[]) => unknown>(), DELETE: (...args: unknown[]) => mockDELETE(...args), }, })) diff --git a/web/src/views/admin/__tests__/AdminSystemView.test.ts b/web/src/views/admin/__tests__/AdminSystemView.test.ts index 6a901f66..4412e139 100644 --- a/web/src/views/admin/__tests__/AdminSystemView.test.ts +++ b/web/src/views/admin/__tests__/AdminSystemView.test.ts @@ -24,7 +24,9 @@ const unhealthyDoctor = { // Stub the openapi-fetch client used by the component. 
vi.mock('@/lib/api/client', () => ({ default: { - GET: vi.fn().mockResolvedValue({ data: null, error: { status: 500 } }), + GET: vi + .fn<(...args: unknown[]) => unknown>() + .mockResolvedValue({ data: null, error: { status: 500 } }), }, })) diff --git a/web/tsconfig.app.json b/web/tsconfig.app.json index dccf2bd6..4be693aa 100644 --- a/web/tsconfig.app.json +++ b/web/tsconfig.app.json @@ -9,7 +9,6 @@ "noUncheckedIndexedAccess": true, // Path mapping for cleaner imports. - "baseUrl": ".", "paths": { "@/*": ["./src/*"] }, diff --git a/web/tsconfig.json b/web/tsconfig.json index 1702e9dd..a3e245f9 100644 --- a/web/tsconfig.json +++ b/web/tsconfig.json @@ -12,7 +12,6 @@ } ], "compilerOptions": { - "baseUrl": ".", "paths": { "@/*": ["./src/*"] } From 9f97f27056db2d5e5a5df5718142aab4d52694a6 Mon Sep 17 00:00:00 2001 From: Claude Date: Sun, 10 May 2026 07:57:38 +0000 Subject: [PATCH 2/7] docs: add task DAG for phase 9 remediation plan Captures the dependency structure of the 2026-03-10 phase 9 health review remediation plan: prerequisites, intra-stage ordering (1.11 after all other Stage 1, 2A.2 after 2A.1, 2B.1 after 1.11, 2C wiring after 2B refactors, 3.x after the OpenAPI gate and 3.0 reference, 6C after Stage 3), the topological layers a coordinator can fan out, and the critical path. --- ...10-phase9-health-review-remediation-dag.md | 172 ++++++++++++++++++ 1 file changed, 172 insertions(+) create mode 100644 dev/plans/2026-03-10-phase9-health-review-remediation-dag.md diff --git a/dev/plans/2026-03-10-phase9-health-review-remediation-dag.md b/dev/plans/2026-03-10-phase9-health-review-remediation-dag.md new file mode 100644 index 00000000..128bfe5d --- /dev/null +++ b/dev/plans/2026-03-10-phase9-health-review-remediation-dag.md @@ -0,0 +1,172 @@ +# Phase 9 Health Review Remediation — Task DAG + +Dependency graph for `dev/plans/2026-03-10-phase9-health-review-remediation-plan.md`. 
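The topological-layer table later in this file can be derived mechanically from the edge list. A minimal sketch of that derivation, assuming prerequisites are kept as a task-to-prerequisite-set map; the edge list below is abbreviated for illustration (for example, "1.11" is shown depending on just two Stage 1 tasks rather than the full fan-in):

```python
def topo_layers(prereqs):
    """Group tasks into parallel-dispatch layers.

    prereqs maps task -> set of tasks that must land first. Every task
    in layer N depends only on tasks in earlier layers, so a coordinator
    can fan out each layer concurrently.
    """
    tasks = set(prereqs) | {p for ps in prereqs.values() for p in ps}
    done, layers = set(), []
    while len(done) < len(tasks):
        # A task is ready once all of its prerequisites have landed.
        ready = sorted(t for t in tasks - done if prereqs.get(t, set()) <= done)
        if not ready:
            raise ValueError("cycle detected; a plan DAG must be acyclic")
        layers.append(ready)
        done.update(ready)
    return layers

# Abbreviated edges; task names match the graph below.
prereqs = {
    "1.11": {"1.1", "1.12"},
    "2A.2": {"2A.1"},
    "2B.1": {"1.11"},
    "2C.1": {"2B.1", "2B.2"},
    "2C.2": {"2C.1"},
}
for i, layer in enumerate(topo_layers(prereqs)):
    print(f"L{i + 1}: {', '.join(layer)}")
```

Each printed layer can be dispatched concurrently; an empty `ready` set flags a cycle, which a reviewed plan should never contain.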
+
+Sources of edges:
+- Stage Overview "Dependency graph" block in the plan (lines 43–51)
+- Stage 1 prologue: "Task 1.11 must execute AFTER all other Stage 1 tasks have been committed" (line 83)
+- Stage 2B prologue: "These tasks clean up the evaluator internals BEFORE wiring it into the runtime (Stage 2C)" (line 961)
+- Task 2B.1 body: post-filter target type is `generated.CVE`, "type was renamed from `Cfe` to `CVE` by Task 1.11" (line 980) → 2B.1 ⟵ 1.11
+- Task 2A.2 body: query asserts via `tdb.AppStore` (the `NOBYPASSRLS` role) → 2A.2 ⟵ 2A.1
+- Task 2C.2 body: realtime hook fires after merge once batch/EPSS jobs are registered → 2C.2 ⟵ 2C.1
+- Task 6C body: "DEFERRED — depends on Stage 3 completing" (line 2761)
+- Stage 3 body: "implementation plan for the revised Stage 3 will be written after the OpenAPI evaluation gate completes" (line 1282)
+
+## Mermaid graph
+
+```mermaid
+graph TD
+    P8["Phase 8 merges<br/>(8B Observe, 8C Operate, 8D, 8E)"]:::prereq
+
+    %% ── Stage 1 ──────────────────────────────────────────
+    subgraph S1["Stage 1: Quick Wins"]
+        T1_1["1.1 Close api.Server"]
+        T1_2["1.2 Close stdlib DB wrappers"]
+        T1_3["1.3 Validate COOKIE_SECURE"]
+        T1_4["1.4 Worker pool ctx cancel"]
+        T1_5["1.5 Remove dead readTx"]
+        T1_6["1.6 Fix GetCVEDetail comment"]
+        T1_7["1.7 Add missing assertion"]
+        T1_8["1.8 Stop discarding setup errors"]
+        T1_9["1.9 DownloadToTemp pkg state"]
+        T1_10["1.10 Validate InCISAKEV bool"]
+        T1_12["1.12 Dedup toNullString"]
+        T1_11["1.11 sqlc rename Cfe → CVE<br/>(after all other Stage 1)"]:::ordering
+    end
+
+    %% ── Stage 2A ─────────────────────────────────────────
+    subgraph S2A["Stage 2A: RLS Security"]
+        T2A_1["2A.1 Restricted app DB role"]
+        T2A_2["2A.2 RLS cross-tenant test"]
+    end
+
+    %% ── Stage 2B ─────────────────────────────────────────
+    subgraph S2B["Stage 2B: Evaluator Refactor"]
+        T2B_1["2B.1 Extract post-filter"]
+        T2B_2["2B.2 Merge queryCandidates"]
+    end
+
+    %% ── Stage 2C ─────────────────────────────────────────
+    subgraph S2C["Stage 2C: Alert Wiring"]
+        T2C_1["2C.1 Schedule batch + EPSS jobs"]
+        T2C_2["2C.2 Realtime post-merge hook"]
+    end
+
+    %% ── Stage 3 ──────────────────────────────────────────
+    GATE["OpenAPI evaluation gate<br/>(see proposal doc)"]:::gate
+    subgraph S3["Stage 3: API Contract Convergence"]
+        T3_0["3.0 Reference: Groups"]
+        T3_1["3.1 Saved Searches"]
+        T3_2["3.2 API Keys"]
+        T3_3["3.3 Channels"]
+        T3_4["3.4 Watchlists"]
+        T3_5["3.5 Alert Rules"]
+        T3_6["3.6 Deliveries (#43)"]
+        T3_7["3.7 Reports"]
+        T3_8["3.8 Orgs (#34)"]
+        T3_9["3.9 Members + Invitations"]
+        T3_10["3.10 Audit Log"]
+        T3_11["3.11 Admin Endpoints"]
+        T3_12["3.12 Feeds Admin"]
+        T3_CLEAN["Post-migration cleanup<br/>(delete orgFetch + writeJSON)"]
+    end
+
+    %% ── Stage 4 ──────────────────────────────────────────
+    subgraph S4["Stage 4: Ops Hardening"]
+        T4D["4D Notification semaphore eviction"]
+        T4E["4E Configurable statement timeout"]
+    end
+
+    %% ── Stage 5 ──────────────────────────────────────────
+    subgraph S5["Stage 5: Test Quality"]
+        T5A["5A Feed adapter golden tests"]
+        T5B["5B Ingest handler integration test"]
+        T5C["5C Email testcontainer"]
+        T5D["5D Advisory lock concurrency test"]
+    end
+
+    %% ── Stage 6 ──────────────────────────────────────────
+    subgraph S6["Stage 6: Architecture"]
+        T6A["6A ServerDeps options struct"]
+        T6B["6B Notification worker health"]
+        T6E["6E merge.Store interface"]
+        T6F["6F BootstrapFirstUserOrg refactor"]
+        T6C["6C Extract buildApp()<br/>(deferred)"]:::deferred
+        T6D["6D ~~import-bulk for NVD~~
INVALIDATED"]:::invalidated + end + + %% ── Edges ──────────────────────────────────────────── + P8 --> S1 + P8 --> S2A + P8 --> S2B + P8 --> S2C + P8 --> GATE + P8 --> S4 + P8 --> S5 + P8 --> S6 + + T1_1 --> T1_11 + T1_2 --> T1_11 + T1_3 --> T1_11 + T1_4 --> T1_11 + T1_5 --> T1_11 + T1_6 --> T1_11 + T1_7 --> T1_11 + T1_8 --> T1_11 + T1_9 --> T1_11 + T1_10 --> T1_11 + T1_12 --> T1_11 + + T2A_1 --> T2A_2 + + T1_11 --> T2B_1 + + T2B_1 --> T2C_1 + T2B_2 --> T2C_1 + T2C_1 --> T2C_2 + + GATE --> T3_0 + T3_0 --> T3_1 --> T3_2 --> T3_3 --> T3_4 --> T3_5 --> T3_6 --> T3_7 --> T3_8 --> T3_9 --> T3_10 --> T3_11 --> T3_12 --> T3_CLEAN + + T3_CLEAN --> T6C + + classDef prereq fill:#fce4a6,stroke:#a06b00,color:#3a2a00 + classDef ordering fill:#e8d5ff,stroke:#5b27a8,color:#22094a + classDef gate fill:#d4edff,stroke:#1f6feb,color:#0a2540 + classDef deferred fill:#f0f0f0,stroke:#999,color:#444 + classDef invalidated fill:#f7f7f7,stroke:#bbb,color:#888,stroke-dasharray: 4 3 +``` + +## Topological layers (parallel-execution view) + +A subagent coordinator can fan out each layer in parallel; later layers wait for the prior layer's edges. + +| Layer | Tasks | Notes | +|-------|-------|-------| +| L0 | Phase 8 merges | Prerequisite — out of scope for this plan | +| L1 | 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 1.10, 1.12 · 2A.1 · 2B.2 · 4D · 4E · 5A · 5B · 5C · 5D · 6A · 6B · 6E · 6F · OpenAPI gate | All independent. Stage 1 (excl. 1.11) is the prime parallel batch. | +| L2 | 1.11 · 2A.2 · 3.0 | 1.11 waits on all other Stage 1; 2A.2 waits on 2A.1; 3.0 waits on the OpenAPI gate. | +| L3 | 2B.1 | Needs the `generated.CVE` rename from 1.11. | +| L4 | 2C.1 | Needs both 2B.1 and 2B.2 complete. | +| L5 | 2C.2 | Sequential after 2C.1. | +| L6 | 3.1 → 3.2 → 3.3 → 3.4 → 3.5 → 3.6 → 3.7 → 3.8 → 3.9 → 3.10 → 3.11 → 3.12 | Sequential migrations following the 3.0 reference; each is one commit per the plan. | +| L7 | Stage 3 post-migration cleanup | After 3.12. 
| +| L8 | 6C (extract `buildApp()`) | Deferred until Stage 3 is complete and stable. | + +Tasks not on the critical path: 4D, 4E, 5A–5D, 6A, 6B, 6E, 6F can finish at any point after L1. + +## Critical path + +`Phase 8 → {1.1–1.10, 1.12} → 1.11 → 2B.1 → 2C.1 → 2C.2` + +Stage 3 has its own parallel critical path gated by the OpenAPI evaluation: + +`Phase 8 → OpenAPI gate → 3.0 → 3.1 → … → 3.12 → cleanup → 6C` + +The two paths don't intersect, so they can progress concurrently once Phase 8 lands. + +## Resolved / invalidated (no DAG nodes) + +- **Findings 3, 10, 11, 38** — resolved by Phase 8 or already correct; no task. +- **Task 6D (Finding 19)** — invalidated; NVD has no bulk download archives. +- **Task 4A, 4B, 4C** — 4A/4B subsumed by Phase 8; 4C moved into Stage 6 as 6C. From 7267fa9845fe194f6339be3192b7d01eea07d0fd Mon Sep 17 00:00:00 2001 From: Claude Date: Sun, 10 May 2026 08:05:16 +0000 Subject: [PATCH 3/7] docs: correct phase 9 remediation DAG after adversarial review Removes superseded Stage 3 tasks 3.0-3.12 and replaces them with a single external-plan node pointing at the 2026-03-15 stage 3 convergence plan. Removes the fabricated 2C.1->2C.2 edge so the two runtime-wiring tasks are siblings under 2B. Drops T6D from the graph to match the resolved/invalidated text. Demotes the 1.11->2B.1 edge from a dedicated arrow to a soft-conflict note. Splits the Phase 8 prerequisite into 8B/8C/8D/8E with per-task dotted edges. Adds a soft-conflicts table for file-level overlaps in L1. Recomputes the critical path as two independent chains, with chain B's depth set by the external Stage 3 plan. 
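The effect of removing that fabricated edge can be checked numerically. A small illustrative sketch, assuming unit task weights and a task-to-prerequisite-set map; only the 2B/2C fragment of the graph is modeled:

```python
def chain_depth(prereqs, task):
    """Number of tasks on the deepest prerequisite chain ending at task."""
    memo = {}

    def depth(t):
        if t not in memo:
            memo[t] = 1 + max((depth(p) for p in prereqs.get(t, ())), default=0)
        return memo[t]

    return depth(task)

# With the fabricated 2C.1 -> 2C.2 edge, 2C.2 sits three tasks deep ...
with_edge = {"2C.1": {"2B.1", "2B.2"}, "2C.2": {"2C.1"}}
# ... as siblings under Stage 2B, both 2C tasks sit two tasks deep.
siblings = {"2C.1": {"2B.1", "2B.2"}, "2C.2": {"2B.1", "2B.2"}}

print(chain_depth(with_edge, "2C.2"))  # prints 3
print(chain_depth(siblings, "2C.2"))   # prints 2
```

With the edge gone, neither 2C task extends the other's chain, so the chain A bottleneck is simply the slower of the two siblings.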
---
 ...10-phase9-health-review-remediation-dag.md | 182 +++++++++++-------
 1 file changed, 108 insertions(+), 74 deletions(-)

diff --git a/dev/plans/2026-03-10-phase9-health-review-remediation-dag.md b/dev/plans/2026-03-10-phase9-health-review-remediation-dag.md
index 128bfe5d..a4b5e884 100644
--- a/dev/plans/2026-03-10-phase9-health-review-remediation-dag.md
+++ b/dev/plans/2026-03-10-phase9-health-review-remediation-dag.md
@@ -2,21 +2,26 @@
 
 Dependency graph for `dev/plans/2026-03-10-phase9-health-review-remediation-plan.md`.
 
-Sources of edges:
-- Stage Overview "Dependency graph" block in the plan (lines 43–51)
-- Stage 1 prologue: "Task 1.11 must execute AFTER all other Stage 1 tasks have been committed" (line 83)
-- Stage 2B prologue: "These tasks clean up the evaluator internals BEFORE wiring it into the runtime (Stage 2C)" (line 961)
-- Task 2B.1 body: post-filter target type is `generated.CVE`, "type was renamed from `Cfe` to `CVE` by Task 1.11" (line 980) → 2B.1 ⟵ 1.11
-- Task 2A.2 body: query asserts via `tdb.AppStore` (the `NOBYPASSRLS` role) → 2A.2 ⟵ 2A.1
-- Task 2C.2 body: realtime hook fires after merge once batch/EPSS jobs are registered → 2C.2 ⟵ 2C.1
-- Task 6C body: "DEFERRED — depends on Stage 3 completing" (line 2761)
-- Stage 3 body: "implementation plan for the revised Stage 3 will be written after the OpenAPI evaluation gate completes" (line 1282)
+Sources of edges (line refs into the plan):
+- Stage Overview "Dependency graph" block (lines 43–51)
+- Stage 1 prologue: 1.11 must run after all other Stage 1 tasks (line 83)
+- Stage 2B prologue: 2B finishes before Stage 2C wires into runtime (line 961)
+- Task 2A.2: cross-tenant test asserts via `tdb.AppStore` (the restricted role enabled by 2A.1) (lines 929–941)
+- Task 6C: "DEFERRED — depends on Stage 3 completing" (line 2761)
+- Stage 3 wrapper: tasks 3.0–3.12 are inside a `<details>` block marked **superseded — Do not execute** (lines 1280, 1284, 1912). The actual Stage 3 work lives in `dev/plans/2026-03-15-phase9-stage3-api-contract-convergence-plan.md`.
+- Per-pillar Phase 8 notes in the prerequisites table (lines 17–28) and per-task warnings (e.g. 6A line 2589, 6B line 2660, 5A appendix line 3001, 2B.1/2B.2 lines 969, 1079).
 
 ## Mermaid graph
 
 ```mermaid
 graph TD
-    P8["Phase 8 merges<br/>(8B Observe, 8C Operate, 8D, 8E)"]:::prereq
+    %% ── Phase 8 prerequisite (per pillar) ───────────────
+    subgraph P8["Phase 8 merges (prerequisite)"]
+        P8B["8B Observe<br/>(metrics, instrumentation)"]:::prereq
+        P8C["8C Operate<br/>(/healthz, /readyz, doctor, admin)"]:::prereq
+        P8D["8D Generic feed adapter"]:::prereq
+        P8E["8E (other operational work)"]:::prereq
+    end
 
     %% ── Stage 1 ──────────────────────────────────────────
     subgraph S1["Stage 1: Quick Wins"]
@@ -47,29 +52,14 @@ graph TD
     end
 
     %% ── Stage 2C ─────────────────────────────────────────
-    subgraph S2C["Stage 2C: Alert Wiring"]
+    subgraph S2C["Stage 2C: Alert Wiring (parallel siblings)"]
         T2C_1["2C.1 Schedule batch + EPSS jobs"]
         T2C_2["2C.2 Realtime post-merge hook"]
     end
 
-    %% ── Stage 3 ──────────────────────────────────────────
-    GATE["OpenAPI evaluation gate<br/>(see proposal doc)"]:::gate
-    subgraph S3["Stage 3: API Contract Convergence"]
-        T3_0["3.0 Reference: Groups"]
-        T3_1["3.1 Saved Searches"]
-        T3_2["3.2 API Keys"]
-        T3_3["3.3 Channels"]
-        T3_4["3.4 Watchlists"]
-        T3_5["3.5 Alert Rules"]
-        T3_6["3.6 Deliveries (#43)"]
-        T3_7["3.7 Reports"]
-        T3_8["3.8 Orgs (#34)"]
-        T3_9["3.9 Members + Invitations"]
-        T3_10["3.10 Audit Log"]
-        T3_11["3.11 Admin Endpoints"]
-        T3_12["3.12 Feeds Admin"]
-        T3_CLEAN["Post-migration cleanup<br/>(delete orgFetch + writeJSON)"]
-    end
+    %% ── Stage 3 (gate only — implementation lives elsewhere) ──
+    GATE["OpenAPI evaluation gate<br/>(timeboxed, in-plan)"]:::gate
+    S3EXT["Stage 3 implementation<br/>(external plan:<br/>2026-03-15-phase9-stage3-<br/>api-contract-convergence-plan.md)"]:::external
 
     %% ── Stage 4 ──────────────────────────────────────────
     subgraph S4["Stage 4: Ops Hardening"]
@@ -92,19 +82,37 @@ graph TD
         T6E["6E merge.Store interface"]
         T6F["6F BootstrapFirstUserOrg refactor"]
         T6C["6C Extract buildApp()
(deferred)"]:::deferred - T6D["6D ~~import-bulk for NVD~~
INVALIDATED"]:::invalidated end - %% ── Edges ──────────────────────────────────────────── - P8 --> S1 - P8 --> S2A - P8 --> S2B - P8 --> S2C - P8 --> GATE - P8 --> S4 - P8 --> S5 - P8 --> S6 - + %% ── Phase 8 prerequisite edges (whole-stage gating) ── + P8B --> S1 + P8B --> S2A + P8B --> S2B + P8B --> S2C + P8B --> GATE + P8B --> S4 + P8B --> S5 + P8C --> S1 + P8C --> S2A + P8C --> S2B + P8C --> S2C + P8C --> GATE + P8C --> S4 + P8C --> S5 + P8D --> S5 + P8E --> S1 + P8E --> S2A + + %% ── Phase 8 prerequisite edges (per-task call-outs) ── + P8C -.->|adds Server deps captured by ServerDeps| T6A + P8C -.->|exposes /readyz target| T6B + P8D -.->|generic adapter covered by golden tests| T5A + P8B -.->|metric instrumentation may shift| T2B_1 + P8B -.->|metric instrumentation may shift| T2B_2 + P8B -.->|alert metrics activate once wired| T2C_1 + P8B -.->|alert metrics activate once wired| T2C_2 + + %% ── Stage 1 fan-in to 1.11 ─────────────────────────── T1_1 --> T1_11 T1_2 --> T1_11 T1_3 --> T1_11 @@ -117,56 +125,82 @@ graph TD T1_10 --> T1_11 T1_12 --> T1_11 + %% ── Stage 2A internal edge ─────────────────────────── T2A_1 --> T2A_2 - T1_11 --> T2B_1 - + %% ── Stage 2B → 2C (both 2B tasks must complete) ────── T2B_1 --> T2C_1 T2B_2 --> T2C_1 - T2C_1 --> T2C_2 - - GATE --> T3_0 - T3_0 --> T3_1 --> T3_2 --> T3_3 --> T3_4 --> T3_5 --> T3_6 --> T3_7 --> T3_8 --> T3_9 --> T3_10 --> T3_11 --> T3_12 --> T3_CLEAN - - T3_CLEAN --> T6C - - classDef prereq fill:#fce4a6,stroke:#a06b00,color:#3a2a00 - classDef ordering fill:#e8d5ff,stroke:#5b27a8,color:#22094a - classDef gate fill:#d4edff,stroke:#1f6feb,color:#0a2540 - classDef deferred fill:#f0f0f0,stroke:#999,color:#444 - classDef invalidated fill:#f7f7f7,stroke:#bbb,color:#888,stroke-dasharray: 4 3 + T2B_1 --> T2C_2 + T2B_2 --> T2C_2 + + %% ── Stage 3 gate → external plan → 6C ──────────────── + GATE --> S3EXT + S3EXT --> T6C + + classDef prereq fill:#fce4a6,stroke:#a06b00,color:#3a2a00 + classDef ordering 
fill:#e8d5ff,stroke:#5b27a8,color:#22094a + classDef gate fill:#d4edff,stroke:#1f6feb,color:#0a2540 + classDef external fill:#dff5e0,stroke:#1a7f37,color:#0a2a12 + classDef deferred fill:#f0f0f0,stroke:#999,color:#444 ``` +Solid arrows = hard ordering required by the plan. Dotted arrows = pillar-specific pre-conditions / metric instrumentation hand-offs called out in the plan body. + ## Topological layers (parallel-execution view) -A subagent coordinator can fan out each layer in parallel; later layers wait for the prior layer's edges. +A subagent coordinator can fan out each layer in parallel; later layers wait for the prior layer's edges. **Read the soft-conflicts section before dispatching L1 in parallel.** | Layer | Tasks | Notes | |-------|-------|-------| -| L0 | Phase 8 merges | Prerequisite — out of scope for this plan | -| L1 | 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 1.10, 1.12 · 2A.1 · 2B.2 · 4D · 4E · 5A · 5B · 5C · 5D · 6A · 6B · 6E · 6F · OpenAPI gate | All independent. Stage 1 (excl. 1.11) is the prime parallel batch. | -| L2 | 1.11 · 2A.2 · 3.0 | 1.11 waits on all other Stage 1; 2A.2 waits on 2A.1; 3.0 waits on the OpenAPI gate. | -| L3 | 2B.1 | Needs the `generated.CVE` rename from 1.11. | -| L4 | 2C.1 | Needs both 2B.1 and 2B.2 complete. | -| L5 | 2C.2 | Sequential after 2C.1. | -| L6 | 3.1 → 3.2 → 3.3 → 3.4 → 3.5 → 3.6 → 3.7 → 3.8 → 3.9 → 3.10 → 3.11 → 3.12 | Sequential migrations following the 3.0 reference; each is one commit per the plan. | -| L7 | Stage 3 post-migration cleanup | After 3.12. | -| L8 | 6C (extract `buildApp()`) | Deferred until Stage 3 is complete and stable. | - -Tasks not on the critical path: 4D, 4E, 5A–5D, 6A, 6B, 6E, 6F can finish at any point after L1. 
+| L0 | 8B Observe · 8C Operate · 8D · 8E | Phase 8 prerequisite — out of scope for this plan | +| L1 | 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 1.10, 1.12 · 2A.1 · 2B.1 · 2B.2 · 4D · 4E · 5A · 5B · 5C · 5D · 6A · 6B · 6E · 6F · OpenAPI gate | All independent per the plan's dependency block. 2A.1, 2B.1, 2B.2 have no stated Stage 1 prerequisites, so they belong here. | +| L2 | 1.11 · 2A.2 | 1.11 waits on the Stage 1 fan-in. 2A.2 waits on 2A.1. | +| L3 | 2C.1 · 2C.2 | Both wait on 2B.1 + 2B.2. They are siblings — the plan does not require one before the other. | +| L4 | Stage 3 implementation (external plan) | Waits on the OpenAPI gate's outcome. | +| L5 | 6C (extract `buildApp()`) | Waits on the external Stage 3 plan landing. | + +`6D` is excluded entirely — invalidated, no node in the graph. + +## Soft conflicts (file-level, not logical) + +These pairs are independent in the plan's dependency model but touch the same file. Dispatching them simultaneously will produce merge conflicts; sequence them in the queue. + +| Pair | Shared file | Conflict | +|------|-------------|----------| +| 4D ↔ 6B | `internal/notify/worker.go` | Both add fields to `Worker` struct and modify `Start()` | +| 1.12 ↔ 6E | `internal/merge/pipeline.go` | toNullString call sites vs. `merge.Store` interface signature change | +| 1.4 ↔ 2C.1 | worker pool registration sites | Context-cancel fix vs. new `alert_batch`/`alert_epss`/`alert_zombie_sweep` handlers | +| 5B ↔ 2C.2 | `internal/ingest/handler.go` | Integration test vs. realtime-eval hook on the same handler | +| 1.11 ↔ 2B.1, 5B, 6E, Stage 3 work | every file importing `generated.Cfe` | 1.11 mass-renames the type. Tasks that write code referencing the type pre-rename will need a trivial rebase — not a hard dep, but a real coordination cost. The plan resolves this by sequencing 1.11 last in Stage 1 before later stages start writing new code against the type. 
| +| 6A ↔ 8C-derived setters | `internal/api/server.go`, `cmd/cvert-ops/main.go` | If Phase 8C added new `Set*Deps` methods, 6A must absorb them too (called out in plan §6A Step 2 note). | ## Critical path -`Phase 8 → {1.1–1.10, 1.12} → 1.11 → 2B.1 → 2C.1 → 2C.2` +There are two largely independent chains; the plan does not connect them, and the second is only partially defined here. + +**Chain A (alert wiring):** + +``` +P8 (8B + 8C + 8E) → {Stage 1 batch, longest task} → 1.11 → {2B.1 ∥ 2B.2} → {2C.1 ∥ 2C.2} +``` + +The `2C.x` fan-out at the end means the chain-A bottleneck is `max(2C.1, 2C.2)` after Stage 2B completes — neither blocks the other. + +**Chain B (API contract convergence):** -Stage 3 has its own parallel critical path gated by the OpenAPI evaluation: +``` +P8 (8B + 8C) → OpenAPI gate → external Stage 3 plan (Tasks 0–14b) → 6C +``` -`Phase 8 → OpenAPI gate → 3.0 → 3.1 → … → 3.12 → cleanup → 6C` +Chain B's true length is set by `2026-03-15-phase9-stage3-api-contract-convergence-plan.md`, which has 14+ tasks of its own. From this plan's perspective the depth is unknown; treat Chain B as the project critical path until the external plan's own DAG is summarized. -The two paths don't intersect, so they can progress concurrently once Phase 8 lands. +The two chains share only the Phase 8 prerequisite, so they run concurrently after L0. Stages 4, 5, 6A, 6B, 6E, 6F are off the critical path entirely — they can land any time after their Phase 8 pillar is in. -## Resolved / invalidated (no DAG nodes) +## Resolved / invalidated (excluded from the graph) -- **Findings 3, 10, 11, 38** — resolved by Phase 8 or already correct; no task. -- **Task 6D (Finding 19)** — invalidated; NVD has no bulk download archives. -- **Task 4A, 4B, 4C** — 4A/4B subsumed by Phase 8; 4C moved into Stage 6 as 6C. +- **Findings 3, 10, 11, 38** — resolved by Phase 8 or already correct; no task ever existed. +- **Tasks 4A, 4B** — subsumed by Phase 8B/8C; removed from Stage 4 scope. 
+- **Task 4C** — moved into Stage 6 as 6C (already a node).
+- **Task 6D (Finding 19)** — invalidated; NVD has no bulk download archives. Not a node.
+- **Original Tasks 3.0–3.12** — superseded; lives behind `<details>` in the plan with "Do not execute." Not nodes; replaced by the single `S3EXT` node pointing at the external implementation plan.

From 9997ccd1cc8596468178bf6920dce1e4fbc295b2 Mon Sep 17 00:00:00 2001
From: Claude
Date: Sun, 10 May 2026 08:06:19 +0000
Subject: [PATCH 4/7] docs: clarify DAG scope is inter-task ordering only

Adds a scope note that intra-task ordering (e.g. 6B's TDD sequence) is
not modeled here. Closes the last open item from the adversarial
review.
---
 dev/plans/2026-03-10-phase9-health-review-remediation-dag.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/dev/plans/2026-03-10-phase9-health-review-remediation-dag.md b/dev/plans/2026-03-10-phase9-health-review-remediation-dag.md
index a4b5e884..78e92827 100644
--- a/dev/plans/2026-03-10-phase9-health-review-remediation-dag.md
+++ b/dev/plans/2026-03-10-phase9-health-review-remediation-dag.md
@@ -1,6 +1,8 @@
 # Phase 9 Health Review Remediation — Task DAG
 
-Dependency graph for `dev/plans/2026-03-10-phase9-health-review-remediation-plan.md`.
+Inter-task dependency graph for `dev/plans/2026-03-10-phase9-health-review-remediation-plan.md`.
+
+**Scope:** ordering between named tasks (e.g. `1.11 → 2C.1`). Intra-task ordering — TDD steps inside a single task body, such as 6B's "scaffolding → stub → failing test → real impl → wire to readiness" — is not modeled here. Read the task body in the plan for those details.
 
 Sources of edges (line refs into the plan):
 - Stage Overview "Dependency graph" block (lines 43–51)

From dbd9cf435a136a877eea4aa8e50a300de4cdd20d Mon Sep 17 00:00:00 2001
From: Claude
Date: Sun, 10 May 2026 09:40:29 +0000
Subject: [PATCH 5/7] docs(skill): add extracting-plan-dag

Production-quality methodology skill for extracting a plan's
inter-task dependency structure into a queryable, derived artifact.
Chains after plan-review-cycle when the execution model warrants it.
Adopts the conventions of writing-plans-enhanced + plan-review-cycle + handoff: RFC 2119 terminology, runner-MUST/SHOULD prescriptive voice, core discipline, multi-round adversarial review, red flags, common rationalizations, checklist, social proof, related conventions, the bottom line. Distinguishes gc projects (Gas City / Beads-backed orchestrator load-bearing) from non-gc projects and adjusts mandatoriness of the gate (Phase 1) and tracker sync (Phase 8) accordingly. Bridges the phase-granularity Living Document Contract banners to the task-granularity DAG nodes via parent_phase metadata. Defines a plan-revision protocol mapped to LDC events (claim, ship, defer, deviation, discovery, stale-claim reclaim, banner inconsistency). Authority flows plan -> DAG -> tracker; never the other way. --- .claude/skills/extracting-plan-dag/SKILL.md | 613 ++++++++++++++++++++ 1 file changed, 613 insertions(+) create mode 100644 .claude/skills/extracting-plan-dag/SKILL.md diff --git a/.claude/skills/extracting-plan-dag/SKILL.md b/.claude/skills/extracting-plan-dag/SKILL.md new file mode 100644 index 00000000..4265887a --- /dev/null +++ b/.claude/skills/extracting-plan-dag/SKILL.md @@ -0,0 +1,613 @@ +--- +name: extracting-plan-dag +description: Extract the inter-task dependency structure from a written plan into a queryable, derived artifact. Chains after plan-review-cycle when the execution model warrants it — multi-builder concurrent dispatch, or any project where a Beads-backed orchestrator (e.g. Gas City) is load-bearing. Methodology-focused — task tracker and graph format are adapter points, not assumptions. Detects gc / non-gc projects and adjusts mandatoriness accordingly. +--- + +# Extracting Plan DAG + +## Terminology + +The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", +"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this +document are to be interpreted as described in RFC 2119. 
+ +A "**gc project**" is any project where Gas City (or another +Beads-backed orchestrator) is load-bearing — the orchestrator +dispatches work by reading from Beads and atomic-claims issues on +behalf of agents. A "**non-gc project**" is everything else: the plan +markdown plus the Living Document Contract (per `writing-plans-enhanced` +Step 5) is sufficient runtime state. + +## Overview + +Force every plan that warrants it to declare its inter-task dependency +structure explicitly, then make that structure queryable by whatever +coordinator dispatches the work. The plan markdown stays as the source +of truth and archival record; the DAG is a derived view co-located +with the plan; the tracker (Beads or otherwise), if used, is a runtime +cache. + +Edits flow plan → DAG → tracker. Never the other way. + +**Core principles (two asymmetries):** + +1. **Cheap to extract, expensive to reconstruct ad hoc.** A plan's + inter-task dependency structure exists whether or not it's written + down. If it's not written down, every coordinator (and every future + reader) reconstructs it from prose, often inconsistently. The cost + of one rigorous extraction beats N noisy re-extractions, multiplied + across every dispatch the plan receives. + +2. **Wrong DAG is worse than no DAG.** A DAG that ships with fabricated + edges, missed dependencies, or superseded nodes promoted as live + gives downstream coordinators false confidence. The asymmetry favors + adversarial review over speed: a half-DAG quietly corrupts dispatch + decisions; the fix surfaces only when builders collide. Err toward + more review. + +## When to use + +- After `plan-review-cycle` completes with zero findings on a plan + written by `writing-plans-enhanced`. +- On any **gc project**, regardless of plan size — Gas City needs every + plan in Beads to dispatch from it. 
+- On non-gc projects when the execution model is "Parallel agents" + (3+ concurrent builders on independent tracks) per + `writing-plans-enhanced` Step 2. +- On non-gc projects when the execution model is "Subagent-driven" AND + the plan has ≥15 tasks AND ≥1 phase has internal parallelism. +- Whenever an existing plan grows (new phases added, new builders + introduced) such that its execution model changes after initial + authoring. + +## When NOT to use + +- On non-gc projects with execution model "Parallel session" (one + builder, sequential checkpoints). The Living Document Contract's + banners are sufficient runtime state and the DAG is overhead. +- On research / exploratory plans where structure is itself uncertain. + Premature DAG-ification freezes structure that should remain fluid. +- Before `plan-review-cycle` has produced a zero-finding round. A DAG + built from an unreviewed plan inherits the plan's defects with + amplification. +- When the plan lacks `Files:` sections per task. This skill MUST + refuse to produce a DAG in that case (see Core discipline §2). + +## Prerequisites + +The runner MUST verify ALL of the following before any other step: + +1. The plan was written by `writing-plans-enhanced` and carries the + Living Document Contract block, per-phase Execution Status banners, + and `Files:` sections per task. +2. `plan-review-cycle` has been run to completion (a round produced + zero findings) against the **current** plan content. If the plan + has been revised after the prior `plan-review-cycle` run, the + runner MUST require a fresh `plan-review-cycle` pass before + proceeding. +3. The plan's execution strategy (selected in `writing-plans-enhanced` + Step 2) is recorded in or near the plan, so the gate (Phase 1) can + read it. +4. The runner has determined whether the project is gc or non-gc by + checking project markers. 
Common markers include: a `.gc/` + directory at the repo root, a Beads database file (typically + `.beads/` or a SQLite file referenced in project config), a + `gas-city` or `bd` configuration block in the project's main + config, or an explicit setting in `CLAUDE.md` or the project's + equivalent. The runner MUST cite which marker(s) it found. If no + marker is found and the project type is genuinely ambiguous, the + runner MUST ask the user before proceeding. + +If any prerequisite is missing or ambiguous, the runner MUST STOP and +request remediation. Extracting against an unreviewed plan or a plan +without `Files:` sections produces a half-DAG and false confidence. + +## Core discipline + +A DAG extraction MUST do five things. Skipping any one degrades the +extraction into a sketch that should not be committed. + +1. **Cite every edge.** Every hard edge in the DAG MUST be traceable + to a specific line, paragraph, or quoted phrase in the plan. + Uncited edges are fabricated; the runner MUST delete them. + +2. **Treat every `Files:` overlap.** For every pair of tasks that + share a file path in their `Files:` sections, the runner MUST + classify the relationship as either a hard edge (with citation) or + a soft conflict (recorded separately). A pair that ends up in + neither category is a missed dependency. If any task lacks a + `Files:` section, the runner MUST refuse to produce a DAG. + +3. **Detect superseded content.** Before edges are extracted, the + runner MUST scan the plan for `<details>
` blocks, "REVISED", + "SUPERSEDED", "Do not execute", strikethroughs, deferred-phase + banners (⏸) that reroute to other plans, and similar markers. + Tasks inside superseded sections are NOT DAG nodes. Missing this + step corrupts the entire downstream graph. + +4. **Bridge phase-level banners to task-level nodes.** The Living + Document Contract specifies banners at the **phase** level + (⬜ / 🚧 / ✅ / ⏸), but DAG nodes are at the **task** level — one + phase contains multiple task nodes. The runner MUST capture, for + each task node, the parent phase's current banner. The tracker + (Phase 8), if used, holds the finer-grained per-task state. The + phase banner is derived from the aggregate of its task states (any + task in 🚧 → banner is 🚧; all tasks in ✅ → banner flips to ✅; an + external blocker on the phase as a whole → banner is ⏸). The plan + banner wins on disagreement — the plan is the archival source of + truth. + +5. **Run minimum 4 rounds of adversarial review.** Three canonical + perspectives — Citation auditor, Coverage auditor, Inference- + discipline auditor — each targeting a specific failure mode this + skill exists to prevent (fabricated edges, missed dependencies, + inferred-not-cited ordering). Plus at least one plan-specific + perspective the runner chooses based on the plan's character. + Additional rounds MAY be run; they SHOULD be run when any earlier + round produced material findings or when the plan's content + suggests further perspectives would catch additional issues. See + Phase 7 for round structure and loop rules. + +## Process + +### Phase 1: Gate decision + +The runner MUST evaluate whether this skill should run before any +other step. 
+ +| Project type | Execution strategy | Action | +|---|---|---| +| gc | any | RUN (mandatory; tracker sync is required for the orchestrator to dispatch) | +| non-gc | Parallel agents (3+ concurrent builders on independent tracks) | RUN | +| non-gc | Subagent-driven | RUN if the plan has roughly 15+ tasks AND at least one phase has internal parallelism. Otherwise SKIP. The 15-task threshold is a heuristic, not a hard cutoff — a 12-task plan with heavy fan-out warrants extraction, while a 25-task plan that's strictly sequential does not. | +| non-gc | Parallel session (one builder, sequential checkpoints) | SKIP — banners alone suffice | + +If the runner SKIPs, it MUST add a one-line note inside the plan +("DAG extraction skipped: <reason>; revisit if execution model +changes") so future readers see the gate was considered. + +If the runner RUNs, it proceeds to Phase 2. + +### Phase 2: Extract hard edges + +A hard edge is "task B MUST NOT start until task A's commit lands." +Sources, in descending order of authority: + +1. **Explicit dependency blocks** in the plan (e.g. "Stage Overview: + Dependencies"). Authoritative — the runner copies these verbatim. +2. **Task body sentences** containing "depends on", "must complete + after", "after X is committed", "before Y starts." +3. **Type/symbol creation chains.** If Task A creates a type, + interface, schema element, migration, or shared helper that Task B + references, A blocks B. +4. **Phase prologues/epilogues** that gate batches of tasks. + +The runner MUST NOT promote to a hard edge: + +- "It's cleaner to do X before Y" — preference, not blocker. +- "X may shift after Y" — heads-up, not blocker. +- "X is similar to Y" — relationship, not edge. +- Numeric task ordering within a phase — assume parallel unless the + plan explicitly says otherwise.
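The promotion rules above can be sketched as a filter over candidate edges. This is a minimal illustration, not the normative rule set: the task IDs, the `CandidateEdge` shape, and the soft-signal phrase list are hypothetical examples.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CandidateEdge:
    source: str         # blocking task ID (hypothetical)
    target: str         # blocked task ID (hypothetical)
    justification: str  # quoted phrase from the plan; "" if uncited

# Illustrative signals of preference/heads-up language that MUST NOT
# become hard edges (not an exhaustive classifier).
SOFT_SIGNALS = ("cleaner to", "may shift", "similar to")

def promote_hard_edges(candidates):
    """Keep only candidates with a real blocker citation; discard
    uncited edges and preference-language edges."""
    hard = []
    for edge in candidates:
        if not edge.justification:          # uncited -> fabricated
            continue
        text = edge.justification.lower()
        if any(signal in text for signal in SOFT_SIGNALS):
            continue                        # heads-up, not a blocker
        hard.append(edge)
    return hard
```

The real discipline is judgment against the plan text; the sketch only shows the shape of the record (source, target, citation) and the discard-by-default posture for uncited edges.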
+ +For each edge, the runner MUST record: +- source task ID +- target task ID +- a citation: line number, section reference, or quoted phrase from + the plan justifying the edge + +If a citation cannot be produced, the edge is not real and MUST be +discarded. + +### Phase 3: Extract soft conflicts + +A soft conflict is "tasks A and B touch the same file but neither +depends on the other." Soft conflicts are NOT edges in the DAG. They +are separate metadata used at dispatch time to prevent parallel +builders from serializing into merge conflicts. + +For each task, the runner MUST list every file path under its +`Files:` section (Create / Modify / Test). For each file, the runner +MUST collect the set of tasks that touch it. Any pair within that set +without a hard edge between them is a soft conflict. + +The runner MUST record soft conflicts as a separate table, NOT as +edges in the graph. Coordinators treat them as mutual-exclusion locks +at dispatch time, not as ordering constraints. + +If any task lacks a `Files:` section, the runner MUST STOP and refuse +to produce a DAG. The fix belongs upstream in `writing-plans-enhanced`, +not here. + +### Phase 4: Extract per-node metadata + +For each task, the runner MUST collect: + +- **`priority`** — security / correctness / quality / cleanup +- **`blast_radius`** — single-file / package / codebase-wide +- **`kind`** — code / research / design-decision / review-gate +- **`effort`** — if the plan provides it; otherwise omit +- **`external_blockers`** — references to other plans, manual approvals, + upstream events outside this plan's scope +- **`parent_phase`** — the phase this task belongs to (so the DAG can + associate the task with the phase whose banner governs it) +- **`parent_phase_banner`** — current Execution Status banner of the + parent phase (⬜ / 🚧 / ✅ / ⏸), read from the plan markdown + +Per-node metadata is NOT used for edge construction. 
It is used by the +coordinator to decide WHICH ready node to dispatch next, not WHEN it +becomes ready. + +### Phase 5: Detect plan-structural hazards + +The runner MUST re-read the plan with these specific eyes: + +- **Superseded sections** (Core discipline §3). Tasks inside `<details>
` + blocks marked "SUPERSEDED" or similar are excluded from the graph + entirely. +- **Mass-rename / freeze events.** Tasks described as "large blast + radius", "every file that imports X", "must execute after all other + tasks in this batch." Flag as freeze points; coordinators serialize + against them. +- **External plan handoffs.** Phrases like "implementation plan is in + another document" or "see proposal doc" mean that phase is a single + external-reference node, not its inline task list. The runner MUST + NOT inline external task lists. +- **Non-task tasks.** Research timeboxes, design gates, manual review + approvals. Tag with `kind:research` or `kind:gate` so coordinators + do not dispatch them as code work. +- **Banner state.** Per Phase 4, capture each phase's current Execution + Status banner. + +### Phase 6: Render the DAG artifact + +The runner MUST write the DAG to `<plan-name>-dag.md` (e.g. +`docs/superpowers/plans/2026-04-08-mcp-tools-plan-dag.md`). Co-location +keeps plan and DAG paired in directory listings, code review diffs, +and any tooling that walks the plans directory. + +The artifact MUST contain: + +1. A scope statement: "models inter-task ordering only; intra-task + ordering (TDD steps, sub-step sequencing) is not modeled." +2. Every edge cited back to the plan (line number or quoted phrase). +3. The soft-conflicts table. +4. The per-node metadata table including each node's parent phase and + parent-phase banner state. +5. A topological-layers view showing which tasks fan out together at + each layer. +6. Freeze events and external handoffs listed explicitly. +7. An "excluded from graph" section for superseded, invalidated, and + resolved-by-prerequisite tasks, each with a cited reason. +8. A pointer to the plan's Living Document Contract noting that the + DAG records the parent-phase banner per node, that fine-grained + per-task state lives in the tracker (if Phase 8 was performed), and + that the plan banner wins on disagreement.
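The topological-layers view can be computed mechanically from the node and hard-edge sets. A minimal sketch, assuming hypothetical task IDs and an edge set of `(blocker, blocked)` pairs:

```python
def topological_layers(nodes, edges):
    """Kahn-style layering: layer N holds every task whose blockers all
    sit in layers < N, so tasks in one layer can fan out together.
    edges is a set of (blocker, blocked) pairs."""
    blockers = {n: set() for n in nodes}
    for a, b in edges:
        blockers[b].add(a)
    layers, shipped = [], set()
    while len(shipped) < len(nodes):
        ready = sorted(n for n in nodes
                       if n not in shipped and blockers[n] <= shipped)
        if not ready:
            # a cycle means the "DAG" is not one -- a plan defect to surface
            raise ValueError("cycle detected among remaining tasks")
        layers.append(ready)
        shipped.update(ready)
    return layers
```

A cycle here is not something to route around silently; like a banner inconsistency, it is a plan defect that must be surfaced and repaired upstream.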
+ +Format choice: + +- **Mermaid** is the default human-facing format because it renders + inline on GitHub and is readable in plain text. Most projects + SHOULD use Mermaid unless they have a specific reason not to. +- **Graphviz/DOT** is acceptable for projects that already render DOT + elsewhere and want a single rendering toolchain. +- **Plain structured text** (YAML/JSON only, no diagram) is acceptable + when the artifact is consumed primarily by tooling and the human + view comes from the tracker. + +Whatever the human-facing format, the artifact MUST also contain a +machine-readable form (YAML or JSON sidecar, or a fenced code block +beneath the diagram) that the tracker adapter (Phase 8) reads. The +two MUST be derived from the same source data — divergence between +the diagram and the structured form is a defect. + +### Phase 7: Adversarial review (minimum 4 rounds, until zero findings) + +The first-pass DAG is wrong. The runner MUST re-read the artifact +adversarially. + +Run these rounds sequentially, documenting findings at each: + +**Round 1 — Citation auditor.** Audit every edge in the graph. Can +each one cite a specific plan line, section, or quoted phrase that +justifies it? If a citation cannot be produced, the edge is fabricated +and MUST be deleted. Walk the entire graph; do not skip "obvious" +edges. + +**Round 2 — Coverage auditor.** Re-read every `Files:` section in the +plan. For each file that appears in more than one task, verify the +relationship is captured either as a hard edge (with citation) or a +soft conflict. Pairs that appear in neither are missed dependencies +and MUST be added. Also re-scan for superseded sections, external +handoffs, and freeze events; any that were missed in Phase 5 MUST be +captured now. + +**Round 3 — Inference-discipline auditor.** Walk the graph hunting for +edges the runner inferred from numeric ordering, narrative flow, or +"obvious sequence" rather than from a plan citation. 
Numbered tasks +are siblings unless the plan says otherwise. A → B → C MUST appear +only if the plan explicitly orders them. Strike any edge whose +justification reduces to "they're listed in this order." + +**Round 4 — Plan-specific perspective (runner-chosen).** Rounds 1-3 +cover known-in-general failure modes. This plan has its own character +— security-heavy, schema-heavy, frontend-heavy, methodology-novel, +cross-plan-coupled, something else — and that character has its own +failure modes the canonical rounds will not catch. The runner MUST +choose a perspective specifically relevant to what this plan actually +contains and review from it. + +Requirements for the Round 4 perspective choice: + +- MUST be a perspective not already covered by Rounds 1-3. +- MUST be specifically relevant to THIS plan, not a generic auditor + template. If the plan is auth-heavy, "security gate auditor" is + legitimate; if the plan is pure refactoring, it isn't. +- MUST be named and described explicitly in the DAG artifact under a + heading like `### Round 4 — [chosen perspective] — [N findings + applied]`, so future readers can see the reasoning. +- SHOULD be concrete enough to produce findings. "General quality + pass" is too vague; "cross-plan handoff fidelity to the external + Stage 3 plan" is actionable. + +**Loop rule (applies to all rounds).** If any round produces material +findings, the runner MUST re-run every round in sequence after applying +fixes. Fixes can surface issues earlier rounds missed or introduce new +issues those rounds would have caught. Exit only when a full pass +through every round (1-3 canonical + Round 4 + any additional rounds +the runner elected to run) produces zero material findings. + +**Additional rounds (5+) — encouraged when warranted.** 4 is the floor, +not a ceiling. Run further rounds if the plan has unusual structural +risk, cross-plan dependencies, or a freeze event with broad scope. 
+Each additional round MUST be named and described like Round 4 and +MUST NOT duplicate a prior round's lens. + +### Phase 8: Sync to a queryable substrate + +Mandatoriness depends on project type and execution model. + +| Project type & execution model | Phase 8 status | +|---|---| +| gc (any execution model) | MANDATORY — Gas City reads from Beads to dispatch work; the orchestrator is non-functional without sync | +| non-gc, "Parallel agents" (3+ concurrent builders) | RECOMMENDED — cross-phase ready-queue queries pay back the sync cost | +| non-gc, "Subagent-driven" (≥15 tasks AND parallelism) | OPTIONAL — banner system suffices for most cases; sync only if cross-plan visibility or finer-grained queries are wanted | +| non-gc, "Parallel session" or below the gate threshold | N/A — Phase 1 should have skipped this skill entirely | + +The DAG → tracker step is tool-specific. This skill defines the adapter +contract; it does not specify the tracker. + +When sync is performed, the adapter MUST: + +- Create one issue per node, keyed `<plan-slug>-<task-id>` (deterministic + so re-runs are idempotent). +- Encode hard edges as blocker dependencies in the tracker's native + format. +- Encode soft conflicts as `mutex:<file>` labels on each side of + every conflict pair. +- Encode per-node metadata as labels (`priority:<value>`, + `blast_radius:<value>`, `kind:<value>`, etc.). +- **Encode parent-phase banner state on each node**: ⬜ → open; + 🚧 → in-progress (with claim timestamp + branch if available); + ✅ → closed-shipped with the shipping SHA; ⏸ → blocked, with the + prose unblock condition AND the link from the plan's banner. The + tracker holds per-task state at finer granularity; the parent phase + banner is recorded per node so the tracker can render either view. +- Create already-closed anchor issues for prerequisite work outside + this plan's scope (shipped phases of upstream plans, completed + prerequisites) so cross-plan dependency queries still resolve + correctly.
+- Mark superseded and invalidated nodes as closed-on-creation with a + reason field. + +The adapter MUST NOT: + +- Propagate tracker edits back to the DAG or plan. Authority flows + plan → DAG → tracker, never the other way. +- Invent dependencies the DAG didn't declare. +- Skip the closed-anchor pattern for prerequisites — silent gaps in + the dependency graph become silent gaps in `ready`-queue queries. + +Sync MUST be idempotent: re-running the skill regenerates tracker +state deterministically from the plan + DAG. The runner SHOULD verify +idempotency by re-running the sync immediately after the first run +and confirming the tracker reports zero changes (no new issues, no +modified labels, no edge churn). If a second run produces changes, +the adapter is non-deterministic and the divergence MUST be diagnosed +before relying on the tracker for dispatch. + +### Phase 9: Plan-revision protocol + +The Living Document Contract from `writing-plans-enhanced` Step 5 +specifies events that update the plan. Each event has a defined DAG +action. + +| Plan event | DAG action | +|---|---| +| Phase claim — non-gc (⬜ → 🚧 banner update) | No structural DAG change. If a tracker is in use, the tracker MUST update the affected nodes' parent-phase banner state. | +| Phase claim — gc (Beads claim on a task issue, no banner change) | No structural DAG change and no banner update; gc owns claim state in Beads. The phase banner stays ⬜ until the phase ships. | +| Phase ship — non-gc (🚧 → ✅) | No structural change. Shipping commit MUST update both the banner and the tracker (if used) atomically. | +| Phase ship — gc (⬜ → ✅; banner skipped 🚧 entirely) | No structural change. Shipping commit MUST update both the banner and the Beads issue atomically. | +| Phase defer (→ ⏸) | If the unblock condition references a NEW external dependency, the runner MUST re-run Phase 2 to record the `external_blocker`. The banner's prose + link is the durable coordination signal; the DAG mirrors it. 
| Stale-claim reclaim (per writing-plans-enhanced Step 5) | The runner MUST update the tracker node's claim timestamp and branch; prior claim history is preserved per the reclaim protocol. | +| Deviation (scope edit, dropped task, reordered phase) | The runner MUST re-run Phases 2-7 on the affected sub-graph and update the artifact. If the deviation changes plan structure substantially, the runner MUST require a fresh `plan-review-cycle` pass before re-extracting. | +| Discovery (new task added) | The runner MUST add the new node, re-extract its edges, and re-run Phase 7 on its neighbors. | +| Banner-state internal inconsistency detected (e.g., a phase shows ✅ while a hard-prerequisite phase still shows ⬜) | The runner MUST flag this as a defect in the plan, NOT silently reconcile it. Surface to the user; the plan is the source of truth and must be repaired before re-extraction proceeds. | + +The runner MUST NOT silently delete tracker issues for removed nodes. +They MUST be closed with reason "superseded by plan revision <ref>" +so future dispatches see the transition trail. + +Plan revisions are a common failure mode for this workflow — banner +state drifts, scope edits are not propagated to the DAG, and +downstream readers consume a stale graph. Treating revisions as +normal events with a defined protocol — not exceptions — is what +keeps the DAG honest over time. + +**Detecting that a plan was revised since the last DAG extraction.** +The runner SHOULD compare the plan file's git history against the DAG +artifact's last-modified commit. If the plan has commits newer than +the DAG, treat the DAG as potentially stale and re-run the affected +phases. The runner MAY add a one-line comment to the DAG artifact +(e.g. `<!-- extracted from plan @ <commit-sha> -->`) so +future readers can audit alignment without git archaeology.
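The git-history comparison above can be sketched as follows. This is a heuristic sketch, assuming illustrative file paths, that the plan and DAG live in the same repository, and that `git` is on PATH:

```python
import subprocess

def last_commit(path):
    """SHA of the most recent commit touching path."""
    out = subprocess.run(["git", "log", "-1", "--format=%H", "--", path],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def dag_is_stale(plan_path, dag_path):
    """True when the plan has commits the DAG artifact hasn't seen."""
    plan_sha, dag_sha = last_commit(plan_path), last_commit(dag_path)
    if plan_sha == dag_sha:
        return False  # last touched in the same commit: in sync
    # fresh only if the plan's last commit is an ancestor of the DAG's
    probe = subprocess.run(
        ["git", "merge-base", "--is-ancestor", plan_sha, dag_sha])
    return probe.returncode != 0
```

A staleness hit does not say *which* phases to re-run; it only flags that the DAG and plan diverged, after which the Phase 9 event table governs what gets re-extracted.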
+ +### Phase 10: Log to the pattern store + +Following `plan-review-cycle`'s post-completion convention, the runner +SHOULD log to the project's pattern store (private journal, MCP store, +dated `docs/learnings/` file, or whatever the project uses): + +- **Type:** pattern +- **Key:** `dag-extraction-[plan-slug]` +- **Insight:** Plan-shape patterns observed (sequential vs parallel; + freeze events; superseded sections; cross-plan handoffs; banner + conventions that rendered ambiguously). Recurring extraction-time + discoveries SHOULD feed back into `writing-plans-enhanced` if a + pattern keeps appearing. + +## Red flags (STOP) + +These mean the extraction is not yet complete or correct: + +- "The plan ordering is obvious" — Then cite the line that says so. If + you can't, it's not an edge, it's an inference. +- "These tasks are clearly sequential because they're numbered" — + Numbered tasks are siblings unless the plan orders them. Strike the + inferred edges. +- "The `Files:` section was missing for one task; I worked around it" + — Refuse and surface the gap upstream. A workaround silently fails + to detect soft conflicts for the missing task. +- "The superseded section is short; I'll just include those tasks + anyway" — Promoting superseded content corrupts the entire + downstream graph. Exclude. +- "I'll skip Phase 8 sync; the user can run it later" — On gc projects, + no Beads sync means Gas City can't dispatch this plan. Skip is not + an option. +- "One review round is enough; the DAG is small" — Small DAGs are + cheaper to review, not exempt from review. Run the four rounds. +- "The plan revised mid-extraction; I'll just patch the affected + edges" — Re-run the affected phases (Phase 9). Patches accumulate + drift. +- "The banner says ⏸ but I'll model it as ⬜ so it shows up in ready + queues" — Authority is plan → DAG → tracker; if the plan banner is + wrong, fix the plan first, then re-extract. 
- "I can't find a plan-specific perspective for Round 4" — Try + harder. If you genuinely can't, document the attempt explicitly per + Phase 7's Round 4 requirements; don't silently skip. + +## Common rationalizations (rebuttals) + +| Rationalization | Reality | +|---|---| +| "The plan is small; the DAG is overhead" | Phase 1's gate handles this. Either the gate says skip (legitimate) or the gate says run (do it). Don't override the gate with vibes. | +| "The plan author already declared the dependencies in prose" | Prose declarations are not queryable. Extracting them into a structured form is the entire point of this skill. | +| "Citing every edge slows me down" | Uncited edges are the failure mode this skill exists to prevent. The cost of one careful pass beats the cost of a wrong DAG corrupting downstream dispatch. | +| "Soft conflicts are obvious from the `Files:` sections; I don't need to enumerate them" | Coordinators don't read `Files:` sections at dispatch time. They query the soft-conflicts table. Implicit conflicts become merge conflicts. | +| "I'll write the artifact and skip the adversarial review; my first pass is good" | Single-pass extraction misses fabricated edges, missed soft conflicts, and superseded content promoted as live. The handoff skill's review discipline applies here for the same reasons. | +| "The plan revision is small; I'll just edit the artifact directly" | Edits without re-extracting Phases 2-7 introduce drift the next reader can't trust. Re-run the affected phases. | +| "Beads is overkill for this project" | On gc projects, the orchestrator can't dispatch without it — that's not aesthetic, it's a hard requirement. On non-gc projects, the Phase 8 table determines status (recommended for "Parallel agents", optional otherwise) — and "optional" genuinely means optional. Don't dismiss the requirement on gc; don't force the recommendation on non-gc.
| +| "The banner discipline is enough; we don't need a DAG" | True for "Parallel session" execution and small plans. False for "Parallel agents" and gc. Phase 1's gate captures this. | + +## Checklist + +Before declaring the extraction complete, verify: + +- [ ] All four prerequisites verified: plan written by + `writing-plans-enhanced`, `plan-review-cycle` complete with zero + findings, execution strategy known, gc / non-gc determined. +- [ ] Phase 1 gate decision recorded (RUN or SKIP with reason). +- [ ] Every hard edge has a plan citation (line number or quoted + phrase). +- [ ] Every `Files:`-section overlap is captured as either a hard + edge or a soft conflict; no orphan pairs. +- [ ] Every task has per-node metadata recorded, including current + banner state. +- [ ] All superseded sections are excluded from the graph and listed + in the artifact's "excluded" section with reasons. +- [ ] Freeze events and external plan handoffs are flagged explicitly. +- [ ] The DAG artifact is at `<plan-name>-dag.md` and contains all + eight required sections (scope statement, edges, soft conflicts, + metadata, layers, freezes/handoffs, exclusions, LDC pointer). +- [ ] At least 4 adversarial review rounds complete (3 canonical + + Round 4 plan-specific; additional rounds run as judgment + suggested); the final full pass through every round produced + zero material findings. +- [ ] Round 4 (and any 5+ the runner elected to run) is documented by + name in the artifact with its findings count; perspective choice + is plan-specific, not a generic template. +- [ ] On gc projects: Phase 8 sync to Beads is performed; idempotency + is verified by running the sync a second time and confirming + zero changes. On non-gc: sync is performed if recommended by the + Phase 8 table, or skipped with a note in the artifact. +- [ ] Pattern-store log entry written (per Phase 10).
+- [ ] Artifact committed in the same commit (or commit chain) that + lands the plan, so plan and DAG stay paired in git history. + +## Social proof + +Observed across multi-agent coordination cycles in plans of +sufficient size and parallelism: DAG extraction reduces dispatch +prompts from "figure out which tasks are unblocked given the current +plan state" reads to short pointer sequences. With Beads as an +example tracker, that looks like: `bd ready` returns N unblocked +issues; mutex labels show two of them both touch +`internal/notify/worker.go`, so the coordinator dispatches one and +queues the other. The principle is tracker-agnostic — the same +shape of query against any structured tracker yields the same +short-prompt dispatch. + +DAGs that ship without adversarial review create the opposite: a +fabricated edge sends a builder onto work that isn't actually ready; +a missed soft conflict produces parallel branches that collide at +merge; a superseded section promoted as live pushes builders onto +work the plan author marked "do not execute." Every one of those +failure modes was observed in real plan executions before this +skill's discipline was codified. + +The cost asymmetry favors rigorous extraction by a wide margin and +compounds across every dispatch the plan receives. A plan with three +agents over three days takes ~9 builder-cycles from the DAG; a wrong +DAG poisons all of them. + +## Related conventions + +- **`writing-plans-enhanced`** is the upstream that produces the plan + this skill consumes. The Living Document Contract (its Step 5) + defines the banner format that this skill mirrors. If + `writing-plans-enhanced`'s contract changes, this skill's Phase 4 + and Phase 9 SHOULD be updated to match. + +- **`plan-review-cycle`** is the immediate prerequisite. This skill + refuses to run before `plan-review-cycle` produces a zero-finding + round. The two are designed to chain. 
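The ready-minus-mutex dispatch described above can be sketched tracker-agnostically. The task IDs, file paths, and label sets here are hypothetical; a real coordinator would read them from `bd ready` or the equivalent query:

```python
def dispatchable(ready, mutexes, in_flight):
    """Filter the tracker's ready set: skip any task whose mutex labels
    (shared file paths from the soft-conflicts table) collide with
    in-flight work or with an earlier pick from this same pass."""
    held = set()
    for task in in_flight:
        held |= mutexes.get(task, set())
    picks = []
    for task in ready:
        locks = mutexes.get(task, set())
        if locks & held:
            continue              # soft conflict: queue behind the holder
        picks.append(task)
        held |= locks             # greedy: first ready task wins the lock
    return picks
```

Note the asymmetry this preserves: hard edges never reach this function (the tracker already excluded blocked tasks from `ready`), while soft conflicts only serialize, never block.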
+ +- **gc / non-gc determination.** A project is gc if Gas City (or + another Beads-backed orchestrator) is load-bearing. The detection + mechanism (`.gc/`, Beads database, project setting) is project-local; + this skill assumes the convention is recorded somewhere the runner + can check. + +- **Banner format and stale-claim reclaim.** Banner conventions and + the reclaim protocol come from `writing-plans-enhanced` Step 5. This + skill does not redefine them; it consumes them as input. + +- **Strategy & rationale.** The decision framework for gc vs non-gc + handling, why banners and Beads divide LDC events the way they do, + and the wider context for this skill's design SHOULD be documented + in a project-local strategy doc (e.g. + `dev/research-findings/dag-extraction-and-orchestration.md` or + whatever the project uses for methodology research). + +## The bottom line + +A plan's inter-task dependency structure exists whether or not it's +written down. If it's not written down, every coordinator reconstructs +it from prose and gets it slightly wrong each time. Extract once, +adversarially review, sync to whatever queryable substrate the +orchestration needs, mirror banner state on revision. The cost is one +session; the saving compounds across every dispatch the plan receives. + +If a downstream coordinator dispatches work that turns out to be +blocked, the DAG failed. If `bd ready` (or the equivalent) returns +exactly the set of tasks a careful human would, it succeeded. From 16ee022fe77bfac8337041d1bc77ee1d19062310 Mon Sep 17 00:00:00 2001 From: Claude Date: Sun, 10 May 2026 09:40:38 +0000 Subject: [PATCH 6/7] docs(dag): retrofit phase 9 DAG with methodology pointers + review log MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The phase 9 DAG was produced before the methodology was codified. 
Adds a header pointer to the new extracting-plan-dag skill and the dag-extraction-and-orchestration strategy doc, and an Adversarial review section that documents the nine rounds the DAG actually went through during its initial production (recovered from the conversation arc). Round 4's plan-specific perspective is documented as "`<details>
-block / supersession audit" — chosen because the plan revised Stage 3 mid-flight and wrapped the original task list in a "Do not execute" details block; the audit caught 13 superseded tasks that had been promoted as live nodes. --- ...-10-phase9-health-review-remediation-dag.md | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/dev/plans/2026-03-10-phase9-health-review-remediation-dag.md b/dev/plans/2026-03-10-phase9-health-review-remediation-dag.md index 78e92827..3bd860e3 100644 --- a/dev/plans/2026-03-10-phase9-health-review-remediation-dag.md +++ b/dev/plans/2026-03-10-phase9-health-review-remediation-dag.md @@ -4,6 +4,8 @@ Inter-task dependency graph for `dev/plans/2026-03-10-phase9-health-review-remed **Scope:** ordering between named tasks (e.g. `1.11 → 2C.1`). Intra-task ordering — TDD steps inside a single task body, such as 6B's "scaffolding → stub → failing test → real impl → wire to readiness" — is not modeled here. Read the task body in the plan for those details. +**Methodology:** this DAG was extracted before the methodology was codified. The standardized procedure is now documented in `.claude/skills/extracting-plan-dag/SKILL.md` and the rationale for the gc / non-gc split it embeds is in `dev/research-findings/dag-extraction-and-orchestration.md`. Re-extractions (e.g. on plan revision) SHOULD follow the skill's process going forward. + Sources of edges (line refs into the plan): - Stage Overview "Dependency graph" block (lines 43–51) - Stage 1 prologue: 1.11 must run after all other Stage 1 tasks (line 83) @@ -206,3 +208,19 @@ The two chains share only the Phase 8 prerequisite, so they run concurrently aft - **Task 4C** — moved into Stage 6 as 6C (already a node). - **Task 6D (Finding 19)** — invalidated; NVD has no bulk download archives. Not a node. - **Original Tasks 3.0–3.12** — superseded; lives behind `
` in the plan with "Do not execute." Not nodes; replaced by the single `S3EXT` node pointing at the external implementation plan. + +## Adversarial review + +This DAG went through nine rounds of review during its initial production, summarized retrospectively against the standardized rounds in `.claude/skills/extracting-plan-dag/SKILL.md` Phase 7. Findings counts are approximate, recovered from the conversation arc rather than logged at the time. + +| Round | Lens | Findings applied | +|---|---|---| +| 1 | Citation auditor — every edge cites a plan line | 4 fabricated edges removed (incl. an unjustified `2C.1 → 2C.2`) | +| 2 | Coverage auditor — `Files:`-overlap pairs captured | 6 soft-conflict pairs added (4D↔6B, 1.12↔6E, 1.4↔2C.1, 5B↔2C.2, 1.11 mass-rename row, 6A↔8C-derived setters) | +| 3 | Inference-discipline auditor — no edges from numeric ordering | Numeric-order edge from `2C.1 → 2C.2` deleted (was inferred, not cited) | +| 4 | Plan-specific perspective: **`
<details>
` block / supersession audit** — chosen because the plan revised Stage 3 mid-flight and wrapped the original task list in a `
<details>
` block marked "Do not execute" | 13 superseded tasks (3.0–3.12 + cleanup) removed from the graph; replaced with a single external-plan node `S3EXT` and a re-routed `T6C` dependency | +| 5+ | Loop check — graph/text contradictions, dangling edges, scope-clarification gaps | DAG-scope statement added; `T6D` removed entirely (graph said "node," text said "excluded"); critical-path claim recomputed as two independent chains; Phase 8 split into 8B/8C/8D/8E with per-task dotted edges; 1.11→2B.1 demoted from a graph edge to a soft-conflict row | + +The Round 4 perspective was specifically motivated by this plan's mid-flight Stage 3 revision (the `
<details>
` block at lines 1280–1912 of the plan). On a plan without such revisions, a different Round 4 perspective would have applied — that's why the skill mandates plan-specific choice rather than a fixed canonical lens. + +A final loop pass produced zero material findings. From 06c16b12895c453967006384065b0ba1d3aefe38 Mon Sep 17 00:00:00 2001 From: Claude Date: Sun, 10 May 2026 09:40:49 +0000 Subject: [PATCH 7/7] docs(research): strategy + context for DAG extraction and gc/non-gc split MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Records the strategy converged on across the discussion that produced extracting-plan-dag: how the Living Document Contract, DAG extraction, and Beads-backed trackers (Gas City) layer; who has authority for what; and how the workflow stays the same on gc and non-gc projects with a small mechanical delta. Key claims: - Three tools, three layers, one direction of authority: plan -> DAG -> tracker. Tracker edits never propagate back. - The Living Document Contract stays even when Beads exists. Beads is a runtime tool; LDC banners are an archival record. They record different things. - The gc-specific reduction is small and concrete: skip the in-progress (claim) banner update because Beads has the claim atomically; force DAG extraction and Phase 8 sync. Everything else is the same workflow. - The DAG stays even when banners are sufficient — for solo or sequential plans Phase 1 of the skill skips it; for parallel agents or gc projects it runs unconditionally. Inspired in structure by the handoff skill: methodology focused, prescriptive voice, asymmetries called out, common failure modes mapped to preventive substrates. 
--- .../dag-extraction-and-orchestration.md | 434 ++++++++++++++++++ 1 file changed, 434 insertions(+) create mode 100644 dev/research-findings/dag-extraction-and-orchestration.md diff --git a/dev/research-findings/dag-extraction-and-orchestration.md b/dev/research-findings/dag-extraction-and-orchestration.md new file mode 100644 index 00000000..e55d90d1 --- /dev/null +++ b/dev/research-findings/dag-extraction-and-orchestration.md @@ -0,0 +1,434 @@ +# DAG Extraction and Multi-Agent Orchestration: Strategy & Context + +## Terminology + +The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", +"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this +document are to be interpreted as described in RFC 2119. + +A "**gc project**" is any project where Gas City (or another +Beads-backed orchestrator) is load-bearing — the orchestrator +dispatches work by reading from Beads and atomic-claims issues on +behalf of agents. Gas City is non-functional without a populated, +current Beads database. + +A "**non-gc project**" is everything else: the plan markdown plus +the Living Document Contract from `writing-plans-enhanced` Step 5 +(per-phase Execution Status banners + stale-claim reclaim protocol) +is sufficient runtime state. + +## Why this doc exists + +Plan execution coordination is a hard problem. Several attempts to +solve it have produced complementary tools that overlap in awkward +ways: + +- The **Living Document Contract** evolved through trial and error + with multi-agent coordination cycles. It keeps plan markdown + honest as execution progresses — banners flip ⬜ → 🚧 → ✅ or → ⏸, + deviations get inlined, discoveries get captured. It works well + for solo and small-team execution and produces excellent archival + records. + +- **Beads** (and orchestrators built on it like Gas City) provides + atomic claim, globally-visible runtime state, and structured + ready-queue queries. 
It solves the worktree-divergence problem + that LDC banners hit when 3+ builders concurrently update the + same plan markdown file. + +- The **DAG extraction skill** (`.claude/skills/extracting-plan-dag/`) + forces a plan's inter-task dependency structure to be made + explicit and queryable. It chains after `plan-review-cycle` and + produces a co-located DAG artifact. + +These three tools answer overlapping questions ("what's the runtime +state of this work?" "what's actionable now?" "what's the dependency +structure?") at different layers and with different durability +profiles. Without a clear strategy, agents on a project either: + +- Use Beads where LDC would suffice and accept the extra tooling + burden; +- Use LDC where Beads would prevent worktree-divergence pain and + accept the merge-conflict tax; +- Use both inconsistently and spend cognitive overhead reconciling + state across substrates. + +This doc records the strategy converged on across discussion: how +the three tools layer, who has authority for what, and how the +workflow stays the same on gc and non-gc projects with a small, +mechanical delta. + +## Core principles (two asymmetries) + +1. **Cheap to layer correctly, expensive to reconcile after the + fact.** Each tool has a defined role and direction of authority. + Setting that up at plan-writing time is cheap. Discovering mid- + execution that two substrates disagree about a phase's state is + expensive and erodes trust in both. + +2. **Same workflow, small mechanical delta.** A builder on a non-gc + project and a builder on a gc project SHOULD use nearly the same + skills, the same banner conventions, the same plan format. The + gc-specific behavior SHOULD be a small reduction (skip a banner + transition gc handles in Beads) plus an automatic Phase 8 sync + triggered by gc-project detection. Anything more grows the + maintenance burden of keeping two workflows aligned. 
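The first asymmetry can be made concrete with a tiny reconciliation sketch: when two substrates disagree about a phase's state, repair flows in one direction only. This is a hypothetical illustration (the data shapes and function names are not the project's); the rule it encodes is the plan-wins authority rule defined in the next section.

```python
# Hypothetical sketch: the plan's banner state repairs the tracker
# record on disagreement, never the reverse. Shapes are illustrative.

def reconcile(plan_state, tracker_state):
    """Return a tracker view repaired from the authoritative plan."""
    repaired = dict(tracker_state)
    for phase, banner in plan_state.items():
        if repaired.get(phase) != banner:
            repaired[phase] = banner  # tracker follows plan, one direction only
    return repaired

plan = {"phase-3": "shipped", "phase-4": "deferred"}
tracker = {"phase-3": "in_progress", "phase-4": "deferred"}
print(reconcile(plan, tracker))  # {'phase-3': 'shipped', 'phase-4': 'deferred'}
```

Tracker-only fields are left alone; the sketch only overwrites where the two views disagree.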
+ +## The three tools, their roles, their authority + +| Tool | Role | Authority over... | +|---|---|---| +| Plan markdown + Living Document Contract | Source of truth + archival record | Phase-granularity state, deviations, discoveries, narrative context, why work was done a certain way | +| DAG (derived view, co-located with plan) | Inter-task dependency structure | Edges, soft conflicts, per-node metadata, exclusion of superseded content | +| Beads / tracker (runtime cache, optional on non-gc) | Live coordination layer | Per-task state at finer granularity than banners, atomic claim, ready-queue queries | + +**Direction of authority flows plan → DAG → tracker. Never the other +way.** Tracker edits don't propagate back to the plan. Banner state +in the plan is authoritative on disagreement. The plan is what gets +read in archaeology a year from now. + +## What each tool is good at (and not) + +### Plan markdown + LDC banners + +**Good at:** +- Co-located narrative + state. A banner sits above its task body; + any reader sees state before reading the task. +- Archival record. A year later, the plan tells the story of what + shipped, what got deferred, what was discovered. +- Tool-independent. Just markdown in git. Survives the death of any + external tracking tool. +- Self-propagating discipline. The contract is in the plan; every + session that opens the plan inherits the rules. + +**Not good at:** +- Atomic claim across worktrees. Two builders updating banners in + different worktrees → eventual consistency at merge time, with + line-level conflicts on the same banner. +- Cross-plan ready queries. To know what's actionable across a + project, you read N markdown files. +- Structured queries. "What's blocked on Sam's review?" is a grep, + not a typed filter. +- Fine-grained per-task state. Banners are at phase granularity; + per-task state during a phase is invisible. 
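The banner lifecycle those bullets rely on (⬜ → 🚧 → ✅, or → ⏸) can be sketched as a transition table. Illustrative only: the authoritative contract is `writing-plans-enhanced` Step 5, and the states here are just the emoji the doc already uses.

```python
# Illustrative LDC banner state machine; the real contract lives in
# writing-plans-enhanced Step 5. gc projects ship straight from ⬜.

ALLOWED = {
    "⬜": {"🚧", "✅", "⏸"},  # claim, gc-style direct ship, or defer
    "🚧": {"✅", "⏸", "⬜"},  # ship, defer, or stale-claim reclaim
    "⏸": {"🚧", "✅"},        # unblock condition met
    "✅": set(),               # shipped is terminal
}

def transition(state, new_state):
    """Reject banner transitions the contract does not allow."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal banner transition {state} -> {new_state}")
    return new_state

state = "⬜"
state = transition(state, "🚧")  # non-gc claim
state = transition(state, "✅")  # ship
print(state)  # ✅
```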
+ +### DAG (the artifact this skill produces) + +**Good at:** +- Forcing implicit dependencies to become explicit. The act of + building it is the value, regardless of whether a tracker + consumes it. +- Catching plan defects early. Citation discipline surfaces + fabricated edges before they corrupt downstream dispatch. +- Cross-tool portability. Same DAG can be consumed by Gas City, + by another tracker, or by a coordinator reading the markdown + directly. + +**Not good at:** +- Live runtime state. The DAG is structural, not stateful. The + parent-phase banner state on each node is a snapshot, not a + live signal. +- Continuous synchronization. The skill prescribes re-extraction on + plan revision, but the DAG can drift between revisions if events + happen mid-stream without a re-run. + +### Beads / tracker (when present) + +**Good at:** +- Atomic claim. `bd claim` is race-free across worktrees in a way + banner-edit never can be. +- Ready-queue queries. `bd ready` returns exactly the unblocked + set, across all plans synced. +- Structured per-task state. Status, labels, blocker dependencies, + queryable filters. +- Mutex-via-labels. Soft conflicts encoded as `mutex:` + labels become first-class dispatch-time signals. + +**Not good at:** +- Narrative context. Beads issues record events; they don't tell + the story of why a phase was deferred or what was discovered + along the way. +- Long-term archival. A closed Beads issue is greppable but the + plan markdown is the durable artifact. +- Tool-portability. A workflow that depends on Beads commands + doesn't transfer to a project without Beads. 
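The ready-queue idea can be modeled in a few lines: an issue is ready when it is open and none of its blockers are still open. This is a conceptual sketch of what a query like `bd ready` returns, not Beads' implementation; the data shapes are hypothetical.

```python
# Conceptual model of a Beads-style ready-queue query. An issue is
# "ready" when it is open and all of its blockers are closed.

def ready_set(issues):
    """Return sorted IDs of open issues whose blockers are all closed."""
    open_ids = {i["id"] for i in issues if i["status"] == "open"}
    return sorted(
        i["id"]
        for i in issues
        if i["status"] == "open"
        and not any(b in open_ids for b in i.get("blockers", []))
    )

issues = [
    {"id": "T1", "status": "closed"},
    {"id": "T2", "status": "open", "blockers": ["T1"]},  # unblocked: T1 closed
    {"id": "T3", "status": "open", "blockers": ["T2"]},  # blocked on open T2
    {"id": "T4", "status": "open"},                      # no blockers
]
print(ready_set(issues))  # ['T2', 'T4']
```

Closing `T2` would put `T3` into the next ready set, which is exactly the dispatch loop an orchestrator runs.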
+ +## When to use which combination + +| Scenario | Plan + LDC | DAG | Tracker | +|---|---|---|---| +| Solo builder, sequential plan | Yes | Skip (gate) | Skip | +| Solo builder, large parallelizable plan | Yes | Yes | Optional | +| 2 builders, parallel work | Yes | Yes | Optional but useful | +| 3+ builders concurrent ("Parallel agents") | Yes | Yes | Recommended | +| gc project, any size | Yes | Yes (mandatory) | Yes (mandatory; this is what gc dispatches from) | + +The gate that decides DAG extraction lives in +`.claude/skills/extracting-plan-dag/` Phase 1. The gate that decides +tracker sync lives in the same skill's Phase 8. Both gates are +project-and-execution-model-dependent, not aesthetic. + +## The gc / non-gc split: how the tools divide LDC events + +The Living Document Contract specifies five event types. Each is +allocated to whichever substrate handles it best, and the allocation +differs slightly between gc and non-gc projects. + +| LDC event | Non-gc handling | gc handling | +|---|---|---| +| Phase claim (⬜ → 🚧) | Banner edit + stale-claim reclaim protocol | Beads claim only — **no 🚧 banner update** (banner stays ⬜ until ship) | +| Phase ship (→ ✅) | Banner edit in shipping commit | Banner edit AND `bd close` in shipping commit (atomic) | +| Phase defer (→ ⏸) | Banner edit with prose unblock condition + link | Banner edit AND `bd block` with the same prose + link | +| Deviation | Plan inline + top-of-plan summary | Same — plan inline + `bd comment` cross-link | +| Discovery | Plan "Discoveries" subsection | Same — plan inline + new `bd issue` if discovery becomes a task | + +The gc-specific reduction is concrete and small: **skip the 🚧 +banner update because Beads has the claim atomically and globally.** +Everything else stays the same. + +This gives gc projects: +- **Worktree-divergence on banners largely evaporates** because the + noisy mid-flight 🚧 updates are gone. 
Ship/defer/deviation banner + updates are infrequent and on different phases — minimal merge + friction. +- **Atomic claim** without any banner contention. +- **Live cross-plan visibility** via Beads. +- **Archival narrative preserved** — ship, defer, deviation, + discovery still hit the plan markdown in the same commit as the + work. + +And it gives non-gc projects: +- **Unchanged LDC discipline** — the contract is exactly the same + as it always was. +- **Optional tracker sync** when 3+ builders concurrent execution + benefits from cross-phase ready-queue queries. + +## Detection mechanism + +Skills SHOULD NOT ask the user "are you on gc?" each invocation. +Detect once via a project marker. Common markers: + +- A `.gc/` directory at the repo root. +- A Beads database file (typically `.beads/` or a SQLite file + referenced in project config). +- A `gas-city` or `bd` configuration block in the project's main + config. +- An explicit setting in `CLAUDE.md` or the project's equivalent. + +Detection happens in `writing-plans-enhanced` and propagates to the +chained skills (`plan-review-cycle`, `extracting-plan-dag`). When a +skill needs to know, it reads the project marker rather than asking. + +If detection is genuinely ambiguous, the skill asks the user once +and records the answer in the project marker for next time. + +## Worktree-divergence: what each tool does about it + +**The problem:** with 3+ builders in 3+ worktrees, each updating +banners in their own copy of the plan markdown, you get: + +- Eventual consistency at merge time. +- Two-line conflicts on adjacent banner edits. +- Race conditions on phase claims (two builders both flip ⬜ → 🚧 + in their own worktrees, second push gets rejected; reclaim + protocol then fires reactively). + +**Non-gc mitigation (LDC's reclaim protocol):** +- Detect stale claims by observable git signals (PR existence, + commit recency). +- Reactive cleanup. The race already happened; the protocol + resolves it. 
+- Works ~80% of the time at low overhead. + +**gc mitigation (Beads atomic claim):** +- Claim is a `bd` operation against a single global database. + Race-free by construction. +- Banner stays ⬜ during execution; only the shipping commit + updates it. No mid-flight banner edits → no merge conflicts on + banner edits. +- Coordination state lives outside the worktree. Worktree markdown + diverges, but the divergence doesn't matter for coordination. + +**Choice:** if you have 3+ concurrent builders regularly, gc is +genuinely better at this and the LDC's reclaim protocol is doing +work that should be unnecessary. If you have 1-2 builders, the LDC +reclaim protocol is sufficient and Beads is overhead. + +## Why the LDC stays — even when Beads exists + +It would be tempting on gc projects to drop the LDC banner +discipline entirely "because Beads has it." That would be a +regression. The LDC is not redundant with Beads; they record +different things: + +- **Beads is a runtime tool.** It tracks live state for orchestrator + dispatch. Issue history is queryable but verbose. +- **LDC banners are an archival record.** A year later, the plan + tells the story. + +The shipping-commit pattern (atomic banner update + bd close in the +same commit) keeps both views consistent without duplicating effort. +A builder shipping work updates one source — the plan banner — and +runs `bd close`. Both happen in the same commit. The plan's +narrative is preserved; Beads' runtime state stays current. + +If banner discipline ever lapses on a gc project, the symptom is +silent: Beads keeps working, the plan's archival quality erodes, +and a year later "what shipped here?" requires Beads archaeology +instead of a plan read. Don't let that happen. + +## Why the DAG stays — even when banners are sufficient + +On small / sequential / single-builder plans, the LDC banners are +genuinely sufficient runtime state. The DAG extraction skill's +Phase 1 gate skips extraction in those cases. 
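That gate can be pictured as a small decision function. A sketch under stated assumptions: the parameter names are hypothetical, the ~15-task threshold is the heuristic described below, and the skill's own Phase 1 text is authoritative.

```python
# Illustrative Phase 1 extraction gate. Parameter names are
# hypothetical; the extracting-plan-dag skill is authoritative.

def should_extract_dag(task_count, has_parallelism, builders, is_gc_project):
    if is_gc_project:
        return True   # orchestrator dispatches from Beads: unconditional
    if builders >= 3:
        return True   # "Parallel agents" execution model: unconditional
    if task_count >= 15 and has_parallelism:
        return True   # large subagent-driven plan with fan-out
    return False      # solo / sequential: banners suffice

print(should_extract_dag(12, False, 1, False))  # False: solo, sequential
print(should_extract_dag(25, True, 1, True))    # True: gc project
```

The heuristic nature of the threshold means the runner can still override in either direction, as the prose below notes.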
+ +But the gate is opinionated: + +- **Subagent-driven** with ≥15 tasks AND parallelism → extract. +- **Parallel agents** with 3+ concurrent builders → extract + unconditionally. +- **gc** projects → extract unconditionally regardless of plan + size, because the orchestrator dispatches from Beads which + requires populated tracker issues. + +The gate threshold (~15 tasks) is heuristic. A 12-task plan with +heavy fan-out warrants extraction; a 25-task strictly-sequential +plan does not. The runner exercises judgment. + +When the gate skips, the DAG extraction skill leaves a one-line +note in the plan ("DAG extraction skipped: ; revisit if +execution model changes"). That keeps the decision auditable +without forcing a stub artifact. + +## Common failure modes (and what prevents each) + +| Failure mode | Substrate that causes it | What prevents it | +|---|---|---| +| Banner-edit merge conflicts mid-execution | LDC at 3+ builders | gc adoption (Beads atomic claim) OR fewer concurrent builders | +| Stale plan after several phases ship | LDC discipline lapse | Mandatory banner update in the shipping commit | +| Beads and plan disagreeing about phase state | Builder shipping non-atomically | The LDC + tracker sync pattern: atomic banner-edit + `bd close` in the same commit | +| Wrong DAG corrupting downstream dispatch | DAG extraction without adversarial review | Phase 7 of the DAG skill — minimum 4 rounds, plan-specific Round 4, loop until zero | +| Superseded plan content promoted as live work | First-pass DAG extraction missing `
<details>
` blocks | Core discipline §3 of the DAG skill — explicit superseded-content scan | +| Cross-plan dependencies invisible | Banner-only state on multi-plan projects | Tracker sync (gc-mandatory, non-gc-recommended above 3 builders) | +| Plan revision drifts from DAG | Ad-hoc edits to either artifact | Phase 9 plan-revision protocol — re-run affected DAG phases on each LDC event class | +| Builders working concurrently on same file | Soft conflicts not enumerated | Phase 3 of the DAG skill — `Files:`-section overlap → soft-conflict table | + +## Workflow on a non-gc project (today) + +``` +1. /writing-plans-enhanced + → produces plan with LDC banners and `Files:` sections + → saves to docs/superpowers/plans/YYYY-MM-DD-.md +2. /plan-review-cycle + → minimum 3 rounds, until zero findings +3. /extracting-plan-dag (gate-conditional) + → Phase 1 gate: skip for solo/sequential, run for parallel/large + → if RUN: produces -dag.md with full process + → if SKIP: leaves a one-line note in the plan +4. Execute the plan + → builders update banners as they work (LDC discipline) + → reclaim protocol handles stale claims if any + → DAG re-extraction triggered by LDC events per Phase 9 +``` + +## Workflow on a gc project + +``` +1. /writing-plans-enhanced + → same plan format, same LDC contract + → gc detection happens here; skill records project type + → contract block omits the 🚧 row (gc-mode) +2. /plan-review-cycle + → same as non-gc +3. /extracting-plan-dag (mandatory) + → Phase 1 gate triggers RUN automatically (gc project) + → produces DAG artifact + → Phase 8 sync to Beads is mandatory + → idempotency verified by re-running sync (zero changes) +4. 
Gas City takes over for dispatch + → reads `bd ready` to find unblocked work + → atomic-claims on agent's behalf + → agents work in worktrees, ship via atomic commits + (banner update + bd close together) + → no mid-flight banner edits → no banner merge conflicts +``` + +The differences are mechanical, not philosophical: + +- gc detection is automatic; builders don't need to remember the + project type. +- 🚧 banner row is omitted from the contract on gc projects so + builders don't see the discipline they don't need. +- Phase 8 sync is automatic on gc; gc detection in the DAG skill + flips it to mandatory. + +Everything else — banner format, plan structure, deviation/discovery +discipline, DAG artifact format, adversarial review rounds — is +identical. + +## When this strategy might be wrong + +Three honest concerns to track over time: + +1. **The "skip 🚧 on gc" subtraction is easy to forget.** Builders + trained on non-gc projects will instinctively flip 🚧 banners. + The cost is benign (visual noise, no correctness issue) but it + dilutes the "Beads is authoritative for claim" rule. Mitigation: + the LDC contract block on gc projects omits the 🚧 row, so the + discipline isn't visible. Watch for builders adding 🚧 anyway; + if it keeps happening, the contract block needs a STOP-style + warning. + +2. **Two sources for ship-time state.** Both the banner and the + Beads issue carry "Phase 3 shipped at SHA." If they disagree, + who wins? The strategy says: Beads is authoritative for + runtime; the banner is archival; on disagreement, repair the + banner from Beads' record. The atomic shipping-commit pattern + prevents the gap, but a lapsed commit (only the banner, only + the bd close) creates one. + +3. **The fork in the skill tree.** Adding gc-mode to skills grows + maintenance burden with each new skill. If the divergence stays + at "skip 🚧 + force DAG extraction + force Phase 8 sync," it's + manageable. 
If it grows to 5+ deltas across multiple skills, a + single skill with a `mode: gc` parameter would be cleaner. Track + the delta count; reorganize if it crosses ~5. + +## What this strategy does NOT solve + +- **Builder competence.** No coordination tool catches semantically + bad work. Beads can dispatch the right next task; only test-on- + mainline catches whether a builder did the task well. +- **Reviewer bottleneck.** N concurrent builders produce N pending + PRs. Reviewer throughput is the practical ceiling on parallelism, + and no orchestration substrate raises it. +- **Cross-project coordination.** Each project has its own plans, + its own DAGs, its own Beads database. Coordinating work that + spans projects (e.g. a CVErt-Ops plan that depends on a + Gas-City plan) is out of scope here. + +## Related artifacts + +- `.claude/skills/writing-plans-enhanced/SKILL.md` — Living Document + Contract definition (Step 5). +- `.claude/skills/plan-review-cycle/SKILL.md` — Adversarial plan + review, prerequisite to DAG extraction. +- `.claude/skills/extracting-plan-dag/SKILL.md` — DAG extraction + methodology with gc / non-gc handling. +- `dev/plans/2026-03-10-phase9-health-review-remediation-dag.md` — + Worked example of a DAG produced retrospectively against the + methodology. + +## The bottom line + +Three tools, three layers, one direction of authority: plan → DAG → +tracker. Banners stay; Beads adds atomic claim and queryable runtime +state when the project's execution model warrants it. The gc / +non-gc split is small and mechanical: detect once, drop the 🚧 +banner row, force DAG extraction and Phase 8 sync. Everything else +is the same workflow on both. + +If a builder on a gc project has to think about Beads more than +once a session, the strategy failed. If they ship a phase by +updating one banner and running `bd close`, it succeeded.