diff --git a/AGENTS.md b/AGENTS.md index d7ae3b6..1d7f93a 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -7,6 +7,11 @@ This root `AGENTS.md` is the repo-local override for Codex work in this reposito - This repo exists to seed a supported macOS machine with packages, dotfiles, identity or access material, and Codex plugin assets that approximate `@pirog`. - The repo currently ships `boot.sh` as the bootstrap wrapper and `piroplugin` as the Codex plugin bundle. +## Identity Text + +- Use `pirog` for casual prose references to the persona this repo approximates. +- Preserve exact literal strings for connector checks, package metadata, URLs, IDs, and external account display names. + ## Source Of Truth - `boot.sh` is the shipped shell entrypoint and the main bootstrap surface to preserve. @@ -23,6 +28,28 @@ This root `AGENTS.md` is the repo-local override for Codex work in this reposito - Use `bun run codex:validate` for semantic plugin validation, including manifest paths, skill metadata, the MCP stub, and workflow script references. - Treat `dotfiles/ai` as a separate stow-owned surface. Use `bun run ai:sync` for home-directory restow work, not for Codex plugin cache refreshes. +## Me Readiness Maintenance + +- Treat `$piro-me-readiness` as a verification surface for this `me` checkout and macOS user profile. It should not become a token-management, environment-management, GitHub automation, monday automation, setup, release, Leia, or general machine-admin workflow. +- For protected resource access, prefer native Codex connectors such as GitHub and monday when they are available. When a script or skill needs protected resources without a native connector, wrap it with `op run --environment zsstdfqknicwfv5glv76gd6tue` instead of using committed `.env` files, persistent shell environment secrets, or local token fallback. +- Local readiness probes must prove desktop-app-backed 1Password access. 
Strip 1Password service-account, connect, session, and bootstrap token environment variables from `op` subprocesses so readiness does not pass through token fallback. +- `$piro-me-readiness` may run its bundled read-only local helper unsandboxed by default because it verifies 1Password and Tailscale desktop/daemon readiness. Do not extend that unsandboxed default to unrelated repo commands, setup, package installation, tests, release validation, or broad machine administration. +- Keep README readiness content limited to human bootstrap/manual setup steps and a brief pointer to run `$piro-me-readiness` after setup. Do not put detailed readiness bucket or maintenance policy in README. +- Keep readiness maintenance policy in this `AGENTS.md`. Keep `skills/me-readiness/SKILL.md` focused on how to run readiness, parse helper output, and perform read-only connector identity checks. +- Skill changes under `skills/**` do not automatically require readiness updates. Update readiness only when a skill adds or changes a stable machine prerequisite: a Brewfile dependency, a repo-owned dotfile, a manual app/auth/network step, a Codex plugin install/link surface, or a connector identity requirement. +- When a skill introduces a prerequisite, update the source of truth first, then readiness if the requirement is repo-owned, stable, read-only, and machine-verifiable: + - Brew package or cask requirements belong in `Brewfile`; then update the `packages` bucket only if the readiness helper should assert the new package or command. + - Repo-owned config belongs under the relevant `dotfiles/**` package; then update the `dotfiles` bucket only if readiness should assert the stowed/generated surface. + - Human app/auth/network setup belongs in the README manual setup checklist; then update the `manual_apps` bucket only if a local read-only probe can verify it. 
+ - Codex plugin install or link layout belongs in the plugin/dotfile source of truth; then update the `codex_plugins` bucket only if the local installed surface should be asserted. +- GitHub or monday connector identity changes belong in `SKILL.md` runtime connector guidance, not `check-machine.js`. +- Use these helper buckets only: + - `homebrew`: Homebrew command availability. + - `packages`: Brewfile declarations and required command availability. + - `dotfiles`: repo-owned stowed files and generated local config readiness. + - `manual_apps`: installed apps and local app/auth/network readiness that cannot be fully handled by Brewfile alone, including 1Password and Tailscale. + - `codex_plugins`: local Codex plugin links or plugin install surfaces owned by this repo. + ## Validation Policy - Never run Leia locally. Leia scenarios in this repo are CI-only unless the user explicitly asks for a local Leia run. diff --git a/Brewfile b/Brewfile index 3f9d439..f784cce 100644 --- a/Brewfile +++ b/Brewfile @@ -1,6 +1,8 @@ tap "oven-sh/bun" -cask "1password-cli" +cask "1password" +cask "1password-cli@beta" +cask "tailscale" brew "curl" brew "gh" @@ -10,6 +12,5 @@ brew "jq" brew "node@20" brew "python@3.14" brew "stow" -brew "tailscale" brew "zsh" brew "oven-sh/bun/bun" diff --git a/README.md b/README.md index a9afeb0..ca634ea 100644 --- a/README.md +++ b/README.md @@ -14,8 +14,8 @@ installs core tools and requested SSH keys, materializes `~/tanaab/me`, materializes `~/tanaab/canon` unless disabled, and then applies the `me` checkout's [`Brewfile`](./Brewfile) plus top-level [`dotfiles/`](./dotfiles/) packages onto `$HOME`. -After bootstrap, open Codex and install the `piroplugin` and `tanaab` plugins from the `Pirostore` -marketplace so their skills are available in the app. +After bootstrap, complete the manual setup checklist so the expected apps, plugins, and connector +auth are available. 
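The Brewfile cask swap above (stable `1password-cli` out, `1password-cli@beta` in) is exactly the kind of textual contract the readiness helper can assert. A minimal sketch of that matching, using the same regex approach as the shipped `check-machine-lib.js` but with a made-up sample Brewfile:

```javascript
// Check whether a Brewfile text declares a given cask, e.g. cask "1password-cli@beta".
// The `m` flag matches per line; special regex characters in the cask name are escaped.
function hasCask(brewfile, cask) {
  const escaped = cask.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  return new RegExp(`^\\s*cask\\s+["']${escaped}["']`, 'm').test(brewfile);
}

// Hypothetical Brewfile content mirroring the diff above.
const sampleBrewfile =
  'tap "oven-sh/bun"\ncask "1password"\ncask "1password-cli@beta"\ncask "tailscale"\n';

console.log(hasCask(sampleBrewfile, '1password-cli@beta')); // true
console.log(hasCask(sampleBrewfile, '1password-cli')); // false: the closing quote must follow immediately
```

The closing-quote anchor is what lets the helper require the beta cask while flagging the stable `1password-cli` cask as a conflict.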
## Quickstart @@ -33,8 +33,37 @@ This default flow: - clones `git@github.com:tanaabased/canon.git` into `~/tanaab/canon` - applies the `me` Brewfile and dotpkgs onto `$HOME` -When the script finishes, open Codex and install the `piroplugin` and `tanaab` plugins from -`Pirostore`. +When the script finishes, complete the manual setup checklist below. + +## Manual Setup Checklist + +### 1Password + +- Open 1Password. +- Sign in and unlock it. +- Enable Developer > Integrate with 1Password CLI. +- Enable Developer > Show 1Password Developer experience. +- Use the Brewfile-provided beta 1Password CLI; 1Password Environments require beta CLI support. +- Confirm `op` can access the signed-in account with a read-only check such as `op vault list`. + +### Tailscale + +- Open Tailscale. +- Sign in and connect this machine to the `tanaab.dev` tailnet. +- Confirm `tailscale status --json` reports the local node as running and online. + +### Codex + +- Plugins from `Pirostore`: + - `piroplugin` + - `tanaab` +- Codex app connectors: + - `GitHub`, connected as `pirog` + - `monday.com`, connected as `Michael Pirog` for this `me` environment + +After completing this checklist, ask Codex to run `$piro-me-readiness`. Readiness may trigger +macOS, Codex, or 1Password permission prompts while it verifies local desktop app access. Approve +those prompts only when you intentionally asked Codex to run readiness. ## What Gets Installed @@ -42,7 +71,7 @@ When the script finishes, open Codex and install the `piroplugin` and `tanaab` p [`Brewfile`](./Brewfile) is the single source of truth for base machine dependencies. It covers Homebrew tooling plus the core CLI and runtime stack used here, including Git and GitHub CLI, -Bun/Node/Python, Stow, 1Password CLI, Tailscale, ImageMagick, and Zsh. +Bun/Node/Python, Stow, the 1Password desktop app and CLI, Tailscale, ImageMagick, and Zsh. ### Dotpkgs @@ -59,6 +88,7 @@ Bun/Node/Python, Stow, 1Password CLI, Tailscale, ImageMagick, and Zsh. 
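The Tailscale confirmation in the manual setup checklist above reduces to a few fields of `tailscale status --json`. A simplified sketch of that decision as a pure function, run against a hypothetical status payload rather than a live daemon (field names follow the Tailscale CLI's JSON output as used by the helper later in this diff):

```javascript
// Decide local-node readiness from a parsed `tailscale status --json` object.
function tailscaleReady(status, expectedTailnet) {
  return (
    status.BackendState === 'Running' &&
    status.Self?.Online === true &&
    Array.isArray(status.TailscaleIPs) &&
    status.TailscaleIPs.length > 0 &&
    status.CurrentTailnet?.Name === expectedTailnet
  );
}

// Made-up example payload; a real one comes from the Tailscale CLI.
const sampleStatus = {
  BackendState: 'Running',
  Self: { Online: true },
  TailscaleIPs: ['100.64.0.7'],
  CurrentTailnet: { Name: 'tanaab.dev' },
};

console.log(tailscaleReady(sampleStatus, 'tanaab.dev')); // true
```

Any single mismatch (stopped backend, offline node, no assigned IPs, wrong tailnet) is enough to fail the gate.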
### Skills - [`piro-skill-author`](./skills/skill-author/): creates, standardizes, and validates Pirobased repo-local skills. +- [`piro-me-readiness`](./skills/me-readiness/): verifies this `me` repo and macOS user profile are ready for Codex work as `pirog`. This plugin surface is intentionally small. Broader shared canon skills come from the paired `tanaab` plugin. diff --git a/dotfiles/ai/.codex/AGENTS.md b/dotfiles/ai/.codex/AGENTS.md index 178dfb8..d4e4941 100644 --- a/dotfiles/ai/.codex/AGENTS.md +++ b/dotfiles/ai/.codex/AGENTS.md @@ -36,7 +36,7 @@ ## Git Defaults - When asked to commit, prefer a commit subject that begins with a known GitHub issue or PR number, such as `#123: update config sync`. -- Do not add `[codex]` or similar generated-tool prefixes to commit messages unless explicitly requested. +- Do not add `[codex]` or similar generated-tool prefixes to commit messages or pull request titles unless explicitly requested. ## Validation Discipline @@ -49,3 +49,12 @@ - Read only the parts of a skill, reference, template, or doc that are needed for the current task. - Prefer shipped scripts, references, and templates over re-deriving large blocks of guidance from memory. - Keep the active context small and relevant to the work at hand. + +## monday Connector + +- For monday.com board, item, update, workspace, or CRM work, prefer the monday app connector over browser or desktop automation unless the user explicitly asks for browser/computer use. +- For this `me` environment, the monday connector is expected to post as Michael Pirog. +- Before mutating monday data in a new session, confirm the connector is exposed and run a read-only identity probe such as `list_users_and_teams(getMe=true)`. +- Treat monday user ID `71211606` and name `Michael Pirog` as the readiness check for this machine. Do not require an email because the connector may omit it. 
+- If the connector is missing, unauthenticated, or authenticated as any other monday user, stop and report the setup or identity mismatch before making monday changes. +- Treat monday app authorization as Codex-managed connector state. Do not store monday tokens, connector auth, app installation state, or MCP credentials in tracked repo config. diff --git a/skills/me-readiness/SKILL.md b/skills/me-readiness/SKILL.md new file mode 100644 index 0000000..2067037 --- /dev/null +++ b/skills/me-readiness/SKILL.md @@ -0,0 +1,185 @@ +--- +name: piro-me-readiness +description: Pirobased workflow to verify that a bootstrapped me machine is ready for Codex work as pirog. +license: MIT +metadata: + type: workflow + owner: pirog + tags: + - pirog + - workflow + - validation +--- + +# Me Readiness + +## Overview + +Use this skill only to verify that the current `me` checkout and macOS user profile are ready for +Codex work as `pirog`. It checks the repo-owned machine setup, manually configured app +readiness, Codex plugin links, and read-only Codex connector identities. It does not configure +tokens, run setup, manage environments, perform GitHub or monday work, or validate releases. + +## When to Use + +- Run after `boot.sh` and the README manual setup checklist have completed. +- Run before relying on Codex plugin skills, GitHub connector actions, monday connector actions, + 1Password-backed local setup, or Tailscale network access. +- Run when moving this `me` environment to a new interactive macOS user profile. + +## When Not to Use + +- Do not use this skill for Agentbox or robot-user readiness. +- Do not use this skill to configure GitHub tokens, write runtime env files, configure 1Password + shell plugins, or provision secrets. +- Do not use this skill to mutate GitHub or monday data, post readiness updates, run setup, run + release validation, or perform general machine administration. + +## Preconditions + +- Work from the `me` checkout at `/Users/pirog/tanaab/me`. 
+- The user should have completed the README manual setup checklist first. +- The GitHub and monday app connectors must be available in the active Codex session for connector + validation. + +## Workflow + +1. Run the bundled local probe with unsandboxed/elevated local access by default: + + ```sh + bun ./skills/me-readiness/scripts/check-machine.js + ``` + + This helper is read-only, strips 1Password token fallback environment variables from `op` + subprocesses, and intentionally verifies local desktop/daemon services such as 1Password and + Tailscale. The unsandboxed default is specific to this bundled helper and should not be treated + as permission to run unrelated repo commands unsandboxed. + + If a local permission prompt appears during this helper run, tell the user it is expected for + 1Password or Tailscale desktop readiness. Denying the prompt may make readiness fail. + + If unsandboxed access is denied or unavailable, run the helper sandboxed. If that sandboxed run + fails only on retryable local desktop or daemon access checks, report those checks as unresolved + local access failures and explain that an unsandboxed read-only run is needed for an authoritative + local readiness result. + +2. Parse the JSON output and summarize each `fail` and `warn` check with its `remediation` text. + The helper emits checks in dependency order and every check includes one stable `bucket`: + `homebrew`, `packages`, `dotfiles`, `manual_apps`, then `codex_plugins`. + + Bucket meanings: + - `homebrew`: Homebrew command availability. + - `packages`: Brewfile declarations and required command availability. + - `dotfiles`: repo-owned stowed files and generated local config readiness. + - `manual_apps`: installed apps and local app/auth/network readiness. + - `codex_plugins`: local Codex plugin links or plugin install surfaces owned by this repo. 
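Summarizing the helper output as step 2 describes can be sketched as a pure function over the parsed JSON. The sample checks here are hypothetical; the authoritative check shape (`bucket`, `id`, `status`, `message`, and `remediation` on non-passing checks) is defined by `check-machine-lib.js`:

```javascript
// Report non-passing checks in helper order, pairing each with its remediation text.
function summarize(checks) {
  return checks
    .filter((check) => check.status !== 'pass')
    .map(
      (check) =>
        `${check.status === 'fail' ? '❌' : '⚠️'} [${check.bucket}] ${check.id}: ${check.remediation}`,
    );
}

// Hypothetical helper output with one passing and one failing check.
const sample = [
  { bucket: 'homebrew', id: 'command_brew', status: 'pass', message: 'Command "brew" is available.' },
  {
    bucket: 'manual_apps',
    id: 'tailscale_status',
    status: 'fail',
    message: 'Tailscale is not ready.',
    remediation: 'Open Tailscale and sign in.',
  },
];

console.log(summarize(sample)); // one line, for the failing tailscale_status check
```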
+ + The bucket order is intentional: package manager availability comes before package contracts, + package contracts come before dotfile checks, dotfiles come before manual app readiness, and + app readiness comes before Codex plugin and connector identity. + +3. Discover the GitHub connector tools. If unavailable, report that the user should enable the + GitHub app in Codex and confirm the GitHub app connection before rerunning readiness. + +4. Run a read-only GitHub identity probe with the authenticated user login/profile tool. Require + both: + - GitHub login `pirog` + - GitHub user ID `713424` + +5. Discover the monday connector tools. If unavailable, report that the user should enable the + monday.com app in Codex and confirm the monday app connection before rerunning readiness. + +6. Run a read-only monday identity probe with `list_users_and_teams(getMe=true)`. Require both: + - monday user ID `71211606` + - monday user name `Michael Pirog` + +7. If connector identity fails, report that the user should reauthorize the relevant Codex app + connector as `Michael Pirog` and confirm the app is connected to the correct account. + +8. Close with a concise readiness summary: + - ready: no local failures and both connector identities matched + - ready with warnings: no local failures, warnings present, and both connector identities matched + - not ready: any local failure or connector identity mismatch + + Use this report format: put the final status first as `🟢 Ready`, `🟡 Ready with warnings`, + or `🔴 Not ready`, then include a status list for these surfaces. 
Use `✅` for passing + surfaces, `⚠️` for warning surfaces, and `❌` for failing surfaces: + + ```markdown + 🟢 **Ready** + + - ✅ Homebrew + - ✅ Packages and Brewfile contract + - ✅ Dotfiles + - ✅ 1Password app and CLI vault access + - ✅ 1Password Environment + - ✅ Tailscale tailnet + - ✅ Codex plugin links + - ✅ GitHub connector: `pirog` / `713424` + - ✅ monday connector: `Michael Pirog` / `71211606` + + Local access note: sandboxed helper could not reach local desktop services, but the unsandboxed read-only retry passed. + ``` + + If any surface fails or warns, mark it with `❌` or `⚠️` and list only the failed or warning + checks below the status list with their remediation text. Include `Local access note` only when + the sandboxed and unsandboxed helper results differ. + +## Checkpoints + +- Do not mutate GitHub or monday data during readiness. No issues, PRs, update posts, item edits, + or browser/computer automation fallback. +- Do not print tokens, secret values, raw environment contents, or raw command stderr that may + contain sensitive data. +- Do not add readiness checks for general environment values, token provisioning, GitHub/monday task + automation, setup mutation, release builds, or Leia. The only Environment value this skill may + verify is the hashed readiness authorization sentinel. +- Do not add a new helper check id without assigning it to one of the five allowed local buckets. +- Treat `op vault list --format json` as the local 1Password readiness gate because it proves the + app is unlocked and integrated enough for authenticated CLI access. +- Treat `op environment read --help` and `op run --environment zsstdfqknicwfv5glv76gd6tue` as the + local protected-resource fallback readiness gate. The helper must verify only the hashed + readiness authorization sentinel and must not print or commit the sentinel value. +- Treat `tailscale status --json` as the local Tailscale readiness gate. 
Require the local node to + be running, online, present in the network map, assigned a Tailscale IP, and connected to + `tanaab.dev`. Peer pings are troubleshooting tools, not readiness gates. +- Treat the README as human setup guidance. Use the helper JSON and connector probes as the + machine-readable source of readiness truth. +- Run only the bundled readiness helper unsandboxed by default. Do not generalize that default to + tests, setup commands, package managers, release validation, or other repo commands. +- If the sandboxed helper only fails on retryable local desktop or daemon access and the + unsandboxed read-only retry passes, base the final readiness status on the unsandboxed result. + Report the sandboxed failures as a local access note, not as readiness failures. +- Follow the root `AGENTS.md` readiness maintenance policy when deciding whether future repo or + skill changes should update this readiness skill. + +## Completion Criteria + +- The helper JSON was parsed successfully. +- Every local `fail` or `warn` was reported with a remediation step. +- Every helper check included a known bucket and bucket order matched the dependency order. +- 1Password Environment readiness either proved `op run --environment` access to the readiness + Environment or reported the setup mismatch without printing the authorization sentinel. +- The GitHub connector either matched login `pirog` and ID `713424` or the setup mismatch was + reported. +- The monday connector either matched `Michael Pirog` ID `71211606` or the setup mismatch was + reported. + +## Bundled Resources + +- [`scripts/check-machine.js`](./scripts/check-machine.js): local read-only machine readiness probe + CLI wrapper that emits deterministic JSON. +- [`scripts/check-machine-lib.js`](./scripts/check-machine-lib.js): tested local helper library used + by the CLI wrapper. + +## Validation + +- Confirm the local helper output is parseable JSON. 
+- Confirm every `warn` and `fail` local check includes remediation. +- Confirm every helper check includes a known bucket. +- Confirm local helper checks stay within the five allowed buckets and do not print or commit + Environment values. +- Confirm 1Password Environment validation strips token fallback env vars from `op` subprocesses. +- Confirm GitHub validation used a read-only authenticated identity probe and performed no + mutations. +- Confirm monday validation used `list_users_and_teams(getMe=true)` and performed no mutations. diff --git a/skills/me-readiness/agents/openai.yaml b/skills/me-readiness/agents/openai.yaml new file mode 100644 index 0000000..e5643d1 --- /dev/null +++ b/skills/me-readiness/agents/openai.yaml @@ -0,0 +1,7 @@ +interface: + display_name: 'Me Readiness' + short_description: 'Pirobased readiness check for this me machine.' + icon_small: './assets/icon-small.svg' + icon_large: './assets/icon-large.png' + brand_color: '#db2777' + default_prompt: 'Use $piro-me-readiness after bootstrapping this me environment to verify local setup, 1Password app and Environment readiness, Tailscale readiness, Codex plugin links, and GitHub/monday connector identity.' 
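The "hashed readiness authorization sentinel" gate referenced in the validation list above amounts to comparing a SHA-256 digest instead of the secret itself, so the value is never printed or committed. A minimal sketch with a made-up sentinel value, assuming Node's `node:crypto`; the real expected digest is pinned in `check-machine-lib.js` and the value lives in the 1Password Environment:

```javascript
import { createHash } from 'node:crypto';

// Compare the SHA-256 hex digest of the sentinel against the pinned digest.
// Only the boolean result ever leaves this function, never the secret value.
function sentinelMatches(value, expectedSha256Hex) {
  if (!value) return false;
  const hash = createHash('sha256').update(value).digest('hex');
  return hash === expectedSha256Hex;
}

// Hypothetical sentinel for illustration only.
const demoValue = 'demo-sentinel';
const demoDigest = createHash('sha256').update(demoValue).digest('hex');

console.log(sentinelMatches(demoValue, demoDigest)); // true
console.log(sentinelMatches('wrong-value', demoDigest)); // false
```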
diff --git a/skills/me-readiness/assets/icon-large.png b/skills/me-readiness/assets/icon-large.png new file mode 100644 index 0000000..33fea7f Binary files /dev/null and b/skills/me-readiness/assets/icon-large.png differ diff --git a/skills/me-readiness/assets/icon-small.svg b/skills/me-readiness/assets/icon-small.svg new file mode 100644 index 0000000..c7a79bc --- /dev/null +++ b/skills/me-readiness/assets/icon-small.svg @@ -0,0 +1,66 @@ + + + + diff --git a/skills/me-readiness/scripts/check-machine-lib.js b/skills/me-readiness/scripts/check-machine-lib.js new file mode 100644 index 0000000..12d1c64 --- /dev/null +++ b/skills/me-readiness/scripts/check-machine-lib.js @@ -0,0 +1,674 @@ +import { execFile as execFileCallback } from 'node:child_process'; +import { + lstat as defaultLstat, + readFile as defaultReadFile, + stat as defaultStat, +} from 'node:fs/promises'; +import os from 'node:os'; +import path from 'node:path'; +import { promisify } from 'node:util'; +import { fileURLToPath } from 'node:url'; + +const execFileAsync = promisify(execFileCallback); + +const SCRIPT_DIR = path.dirname(fileURLToPath(import.meta.url)); +const DEFAULT_REPO_ROOT = path.resolve(SCRIPT_DIR, '..', '..', '..'); +const PRIVATE_CONFIG_MODE = 0o600; +const EXPECTED_TAILNET_NAME = 'tanaab.dev'; +export const EXPECTED_ONEPASSWORD_ENVIRONMENT_ID = 'zsstdfqknicwfv5glv76gd6tue'; +const MINIMUM_ONEPASSWORD_ENVIRONMENT_CLI_VERSION = '2.33.0-beta.02'; +const READINESS_AUTHORIZATION_CODE_KEY = 'READINESS_AUTHORIZATION_CODE'; +const EXPECTED_READINESS_AUTHORIZATION_CODE_SHA256 = + 'a924fd4b1d47841c36ae7663db374cf040b913ffa56541fe0f345435e3cce267'; +const ONEPASSWORD_ENVIRONMENT_VALIDATION_SCRIPT = `import { createHash } from "node:crypto";const value=process.env.${READINESS_AUTHORIZATION_CODE_KEY};const hash=value?createHash("sha256").update(value).digest("hex"):"";process.stdout.write(JSON.stringify({present:Boolean(value),matches:hash==="${EXPECTED_READINESS_AUTHORIZATION_CODE_SHA256}"}));`; 
+ +export const REQUIRED_BREWFILE_CASKS = ['1password', '1password-cli@beta', 'tailscale']; +export const FORBIDDEN_BREWFILE_CASKS = [ + { + cask: '1password-cli', + id: 'brewfile_cask_1password_cli_stable_absent', + }, +]; +export const REQUIRED_COMMANDS = ['brew', 'bun', 'git', 'gh', 'op', 'stow', 'tailscale']; +export const ONEPASSWORD_TOKEN_ENV_KEYS = [ + 'PIROME_OP_TOKEN', + 'TANAAB_OP_TOKEN', + 'OP_SERVICE_ACCOUNT_TOKEN', + 'OP_CONNECT_TOKEN', + 'OP_SESSION', +]; +export const CHECK_BUCKET_ORDER = Object.freeze([ + 'homebrew', + 'packages', + 'dotfiles', + 'manual_apps', + 'codex_plugins', +]); + +const DOTFILE_LINKS = [ + { + id: 'codex_agents_link', + relativePath: ['.codex', 'AGENTS.md'], + label: '~/.codex/AGENTS.md', + }, + { + id: 'codex_shared_config_link', + relativePath: ['.codex', 'config.shared.toml'], + label: '~/.codex/config.shared.toml', + }, +]; + +const CODEX_PLUGIN_LINKS = [ + { + id: 'codex_piroplugin_link', + relativePath: ['.codex', 'plugins', 'piroplugin'], + label: '~/.codex/plugins/piroplugin', + }, + { + id: 'codex_tanaab_link', + relativePath: ['.codex', 'plugins', 'tanaab'], + label: '~/.codex/plugins/tanaab', + }, +]; + +function checkIdSegment(value) { + return value.replace(/[^a-z0-9]+/g, '_'); +} + +const CHECK_BUCKET_BY_ID = new Map([ + ...REQUIRED_COMMANDS.map((command) => [ + `command_${command}`, + command === 'brew' ? 
'homebrew' : 'packages', + ]), + ['brewfile_readable', 'packages'], + ...REQUIRED_BREWFILE_CASKS.map((cask) => [`brewfile_cask_${checkIdSegment(cask)}`, 'packages']), + ...FORBIDDEN_BREWFILE_CASKS.map(({ id }) => [id, 'packages']), + ['codex_agents_link', 'dotfiles'], + ['codex_shared_config_link', 'dotfiles'], + ['codex_generated_config', 'dotfiles'], + ['onepassword_app', 'manual_apps'], + ['onepassword_cli_vault_access', 'manual_apps'], + ['onepassword_environment_cli', 'manual_apps'], + ['onepassword_environment_run', 'manual_apps'], + ['tailscale_app', 'manual_apps'], + ['tailscale_status', 'manual_apps'], + ['bootstrap_token_env', 'manual_apps'], + ['codex_piroplugin_link', 'codex_plugins'], + ['codex_tanaab_link', 'codex_plugins'], +]); +const CHECK_STATUSES = new Set(['pass', 'warn', 'fail']); + +function makeCheck({ id, message, remediation, status }) { + const bucket = CHECK_BUCKET_BY_ID.get(id); + + if (!bucket) { + throw new Error(`No readiness bucket assigned for check ${id}.`); + } + + if (!CHECK_STATUSES.has(status)) { + throw new Error(`Unsupported readiness status ${status}.`); + } + + const check = { bucket, id, status, message }; + + if (status !== 'pass') { + check.remediation = remediation; + } + + return check; +} + +function pass(id, message) { + return makeCheck({ id, status: 'pass', message }); +} + +function warn(id, message, remediation) { + return makeCheck({ id, status: 'warn', message, remediation }); +} + +function fail(id, message, remediation) { + return makeCheck({ id, status: 'fail', message, remediation }); +} + +function hasCask(brewfile, cask) { + return new RegExp( + `^\\s*cask\\s+["']${cask.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')}["']`, + 'm', + ).test(brewfile); +} + +function formatMode(mode) { + return `0${(mode & 0o777).toString(8)}`; +} + +async function defaultCommandExists(command) { + try { + await execFileAsync('which', [command], { timeout: 5000 }); + return true; + } catch { + return false; + } +} + +async function 
defaultExecFile(command, args, options = {}) { + const { stdout } = await execFileAsync(command, args, { + env: options.env ?? process.env, + maxBuffer: 1024 * 1024, + timeout: options.timeout ?? 10000, + }); + + return { stdout }; +} + +function formatErrorDetail(error) { + const stderr = typeof error?.stderr === 'string' ? error.stderr : ''; + const message = error instanceof Error ? error.message : String(error ?? ''); + return (stderr || message).replace(/\s+/g, ' ').trim(); +} + +function formatOnePasswordCommandError(error) { + const detail = formatErrorDetail(error); + + if (/couldn'?t connect to the 1Password desktop app/i.test(detail)) { + return { + message: '1Password CLI could not connect to the 1Password desktop app from this process.', + remediation: + 'If this was a sandboxed run and op vault list --format json works in your terminal, rerun the readiness helper with unsandboxed local access from Codex. Otherwise open 1Password, sign in, unlock it, and enable Developer > Integrate with 1Password CLI.', + }; + } + + return { + message: '1Password CLI vault access check failed.', + remediation: + 'Open 1Password, sign in, unlock it, enable Developer > Integrate with 1Password CLI, then rerun op vault list.', + }; +} + +function formatOnePasswordEnvironmentCommandError(error) { + const detail = formatErrorDetail(error); + + if (/couldn'?t connect to the 1Password desktop app/i.test(detail)) { + return { + message: + '1Password Environment access could not connect to the desktop app from this process.', + remediation: + 'If this was a sandboxed run and op run --environment works in your terminal, rerun the readiness helper with unsandboxed local access from Codex. 
Otherwise open 1Password, sign in, unlock it, enable Developer > Integrate with 1Password CLI, and enable Developer > Show 1Password Developer experience.', + }; + } + + return { + message: '1Password Environment access check failed.', + remediation: + 'Open 1Password, sign in, unlock it, enable Developer > Integrate with 1Password CLI, enable Developer > Show 1Password Developer experience, confirm the readiness Environment is accessible, then rerun the readiness helper.', + }; +} + +function formatTailscaleCommandError(error) { + const detail = formatErrorDetail(error); + + if (/failed to connect to local Tailscaled|local tailscaled|tailscaled process/i.test(detail)) { + return { + message: 'Tailscale CLI could not connect to the local Tailscale service from this process.', + remediation: + 'If this was a sandboxed run and tailscale status --json works in your terminal, rerun the readiness helper with unsandboxed local access from Codex. Otherwise open Tailscale, sign in, and connect this machine to the tanaab.dev tailnet.', + }; + } + + return { + message: 'Tailscale status check failed.', + remediation: + 'Open Tailscale, sign in, connect this machine to the tanaab.dev tailnet, then rerun tailscale status --json.', + }; +} + +async function pathInfo(targetPath, deps) { + try { + return await deps.lstat(targetPath); + } catch { + return null; + } +} + +function onePasswordTokenEnvKeys(env) { + return Object.keys(env).filter( + (key) => ONEPASSWORD_TOKEN_ENV_KEYS.includes(key) || key.startsWith('OP_SESSION_'), + ); +} + +function commandEnvWithoutOnePasswordTokenFallbacks(env) { + const commandEnv = { ...env }; + + for (const key of onePasswordTokenEnvKeys(env)) { + delete commandEnv[key]; + } + + return commandEnv; +} + +async function commandCheck(command, deps) { + return (await deps.commandExists(command)) + ? 
+    ? pass(`command_${command}`, `Command "${command}" is available.`)
+    : fail(
+        `command_${command}`,
+        `Command "${command}" is not available on PATH.`,
+        'Rerun https://boot.pirog.me/boot.sh or install the missing Brewfile dependency.',
+      );
+}
+
+function getCheck(checks, id) {
+  return checks.find((check) => check.id === id);
+}
+
+function checkOnePasswordEnvironmentRun(result) {
+  if (!result || typeof result !== 'object') {
+    return fail(
+      'onepassword_environment_run',
+      '1Password Environment readiness output was not a JSON object.',
+      'Rerun the readiness helper after confirming the readiness Environment is accessible through 1Password Developer.',
+    );
+  }
+
+  if (result.present !== true) {
+    return fail(
+      'onepassword_environment_run',
+      `${READINESS_AUTHORIZATION_CODE_KEY} was not provided by the 1Password Environment.`,
+      'Open 1Password, enable Developer > Show 1Password Developer experience, and confirm the readiness Environment includes the authorization sentinel.',
+    );
+  }
+
+  if (result.matches !== true) {
+    return fail(
+      'onepassword_environment_run',
+      `${READINESS_AUTHORIZATION_CODE_KEY} did not match the expected readiness sentinel.`,
+      'Update the readiness Environment authorization sentinel in 1Password, then rerun the readiness helper.',
+    );
+  }
+
+  return pass(
+    'onepassword_environment_run',
+    '1Password Environment provided the expected readiness authorization sentinel.',
+  );
+}
+
+function checkTailscaleStatus(status) {
+  if (!status || typeof status !== 'object') {
+    return fail(
+      'tailscale_status',
+      'Tailscale status output was not a JSON object.',
+      'Open Tailscale, sign in, connect this machine to the tanaab.dev tailnet, then rerun tailscale status --json.',
+    );
+  }
+
+  const issues = [];
+  const tailscaleIps = Array.isArray(status.TailscaleIPs) ? status.TailscaleIPs : [];
+  const tailnetName = status.CurrentTailnet?.Name;
+
+  if (status.BackendState !== 'Running') {
+    issues.push(`BackendState is "${String(status.BackendState ?? 'missing')}"`);
+  }
+
+  if (status.Self?.Online !== true) {
+    issues.push('local node is not online');
+  }
+
+  if (status.Self?.InNetworkMap !== true) {
+    issues.push('local node is not in the network map');
+  }
+
+  if (tailscaleIps.length === 0) {
+    issues.push('no Tailscale IPs are assigned');
+  }
+
+  if (tailnetName !== EXPECTED_TAILNET_NAME) {
+    issues.push(`CurrentTailnet.Name is "${String(tailnetName ?? 'missing')}"`);
+  }
+
+  return issues.length === 0
+    ? pass('tailscale_status', `Tailscale is running on the ${EXPECTED_TAILNET_NAME} tailnet.`)
+    : fail(
+        'tailscale_status',
+        `Tailscale is not ready: ${issues.join('; ')}.`,
+        'Open Tailscale, sign in, connect this machine to the tanaab.dev tailnet, then rerun tailscale status --json.',
+      );
+}
+
+async function appendStowedLinkChecks(checks, links, homeDir, deps) {
+  for (const link of links) {
+    const targetPath = path.join(homeDir, ...link.relativePath);
+    const info = await pathInfo(targetPath, deps);
+
+    if (!info) {
+      checks.push(
+        fail(
+          link.id,
+          `${link.label} is missing.`,
+          'Run bun run ai:sync from /Users/pirog/tanaab/me to restow the Codex dotfiles.',
+        ),
+      );
+      continue;
+    }
+
+    checks.push(
+      info.isSymbolicLink()
+        ? pass(link.id, `${link.label} exists as a stowed link.`)
+        : warn(
+            link.id,
+            `${link.label} exists but is not a symbolic link.`,
+            'Run bun run ai:sync from /Users/pirog/tanaab/me to restow the Codex dotfiles.',
+          ),
+    );
+  }
+}
+
+async function appendGeneratedConfigCheck(checks, homeDir, deps) {
+  const generatedConfigPath = path.join(homeDir, '.codex', 'config.toml');
+
+  try {
+    const configStat = await deps.stat(generatedConfigPath);
+    const mode = configStat.mode & 0o777;
+
+    checks.push(
+      mode === PRIVATE_CONFIG_MODE
+        ? pass('codex_generated_config', '~/.codex/config.toml exists with private permissions.')
+        : warn(
+            'codex_generated_config',
+            `~/.codex/config.toml exists with mode ${formatMode(configStat.mode)}, expected 0600.`,
+            'Run bun run ai:sync from /Users/pirog/tanaab/me to regenerate Codex config with private permissions.',
+          ),
+    );
+  } catch {
+    checks.push(
+      fail(
+        'codex_generated_config',
+        '~/.codex/config.toml is missing.',
+        'Run bun run ai:sync from /Users/pirog/tanaab/me to generate Codex config.',
+      ),
+    );
+  }
+}
+
+async function appendBrewfileChecks(checks, repoRoot, deps) {
+  const brewfilePath = path.join(repoRoot, 'Brewfile');
+  let brewfile = '';
+
+  try {
+    brewfile = await deps.readFile(brewfilePath, 'utf8');
+  } catch {
+    checks.push(
+      fail(
+        'brewfile_readable',
+        `Brewfile was not readable at ${brewfilePath}.`,
+        'Run this probe from the me checkout or rerun https://boot.pirog.me/boot.sh to materialize the repo.',
+      ),
+    );
+  }
+
+  if (!brewfile) {
+    return;
+  }
+
+  checks.push(pass('brewfile_readable', 'Brewfile is readable.'));
+
+  for (const cask of REQUIRED_BREWFILE_CASKS) {
+    checks.push(
+      hasCask(brewfile, cask)
+        ? pass(`brewfile_cask_${checkIdSegment(cask)}`, `Brewfile includes cask "${cask}".`)
+        : fail(
+            `brewfile_cask_${checkIdSegment(cask)}`,
+            `Brewfile does not include cask "${cask}".`,
+            'Update the Brewfile and rerun https://boot.pirog.me/boot.sh or install the missing Brewfile dependency.',
+          ),
+    );
+  }
+
+  for (const { cask, id } of FORBIDDEN_BREWFILE_CASKS) {
+    checks.push(
+      hasCask(brewfile, cask)
+        ? fail(
+            id,
+            `Brewfile still includes conflicting cask "${cask}".`,
+            'Replace cask "1password-cli" with cask "1password-cli@beta" so op supports 1Password Environments.',
+          )
+        : pass(id, `Brewfile does not include conflicting cask "${cask}".`),
+    );
+  }
+}
+
+async function appendRequiredCommandChecks(checks, deps) {
+  for (const command of REQUIRED_COMMANDS.filter((requiredCommand) => requiredCommand !== 'brew')) {
+    checks.push(await commandCheck(command, deps));
+  }
+}
+
+async function appendAppPresenceChecks(checks, deps) {
+  const onePasswordAppPath = '/Applications/1Password.app';
+  checks.push(
+    (await pathInfo(onePasswordAppPath, deps))
+      ? pass('onepassword_app', '1Password.app was found.')
+      : fail(
+          'onepassword_app',
+          '1Password.app was not found.',
+          'Rerun https://boot.pirog.me/boot.sh or install the 1Password desktop app, then open it and sign in.',
+        ),
+  );
+
+  const tailscaleAppPath = '/Applications/Tailscale.app';
+  checks.push(
+    (await pathInfo(tailscaleAppPath, deps))
+      ? pass('tailscale_app', 'Tailscale.app was found.')
+      : fail(
+          'tailscale_app',
+          'Tailscale.app was not found.',
+          'Rerun https://boot.pirog.me/boot.sh or install the Tailscale desktop app, then open it and sign in.',
+        ),
+  );
+}
+
+async function appendOnePasswordVaultAccessCheck(checks, env, deps) {
+  if (getCheck(checks, 'command_op')?.status === 'fail') {
+    checks.push(
+      fail(
+        'onepassword_cli_vault_access',
+        '1Password CLI vault access could not be checked because op is missing.',
+        'Install 1Password CLI, open 1Password, sign in, unlock it, and enable Developer > Integrate with 1Password CLI.',
+      ),
+    );
+    return;
+  }
+
+  try {
+    const { stdout } = await deps.execFile('op', ['vault', 'list', '--format', 'json'], {
+      env: commandEnvWithoutOnePasswordTokenFallbacks(env),
+    });
+    const vaults = JSON.parse(stdout);
+    checks.push(
+      Array.isArray(vaults) && vaults.length > 0
+        ? pass('onepassword_cli_vault_access', `1Password CLI can list ${vaults.length} vault(s).`)
+        : fail(
+            'onepassword_cli_vault_access',
+            '1Password CLI cannot list vaults for a signed-in account.',
+            'Open 1Password, sign in, unlock it, enable Developer > Integrate with 1Password CLI, then rerun op vault list.',
+          ),
+    );
+  } catch (error) {
+    const formattedError = formatOnePasswordCommandError(error);
+    checks.push(
+      fail('onepassword_cli_vault_access', formattedError.message, formattedError.remediation),
+    );
+  }
+}
+
+async function appendOnePasswordEnvironmentChecks(checks, env, deps) {
+  let environmentCliSupported = false;
+
+  if (getCheck(checks, 'command_op')?.status === 'fail') {
+    checks.push(
+      fail(
+        'onepassword_environment_cli',
+        '1Password Environment CLI support could not be checked because op is missing.',
+        'Install the 1Password CLI beta cask from the Brewfile, then rerun op environment read --help.',
+      ),
+    );
+  } else {
+    try {
+      await deps.execFile('op', ['environment', 'read', '--help'], {
+        env: commandEnvWithoutOnePasswordTokenFallbacks(env),
+      });
+      environmentCliSupported = true;
+      checks.push(
+        pass(
+          'onepassword_environment_cli',
+          '1Password CLI supports reading values from 1Password Environments.',
+        ),
+      );
+    } catch {
+      checks.push(
+        fail(
+          'onepassword_environment_cli',
+          '1Password Environment CLI support check failed.',
+          `Install or update to 1Password CLI beta ${MINIMUM_ONEPASSWORD_ENVIRONMENT_CLI_VERSION} or newer through cask "1password-cli@beta".`,
+        ),
+      );
+    }
+  }
+
+  if (!environmentCliSupported) {
+    checks.push(
+      fail(
+        'onepassword_environment_run',
+        '1Password Environment readiness could not be checked because Environment CLI support is missing.',
+        `Install or update to 1Password CLI beta ${MINIMUM_ONEPASSWORD_ENVIRONMENT_CLI_VERSION} or newer through cask "1password-cli@beta", then rerun the readiness helper.`,
+      ),
+    );
+    return;
+  }
+
+  try {
+    const { stdout } = await deps.execFile(
+      'op',
+      [
+        'run',
+        '--environment',
+        EXPECTED_ONEPASSWORD_ENVIRONMENT_ID,
+        '--',
+        'bun',
+        '-e',
+        ONEPASSWORD_ENVIRONMENT_VALIDATION_SCRIPT,
+      ],
+      {
+        env: commandEnvWithoutOnePasswordTokenFallbacks(env),
+      },
+    );
+    checks.push(checkOnePasswordEnvironmentRun(JSON.parse(stdout)));
+  } catch (error) {
+    if (error instanceof SyntaxError) {
+      checks.push(
+        fail(
+          'onepassword_environment_run',
+          '1Password Environment readiness output was not parseable JSON.',
+          'Rerun the readiness helper after confirming the readiness Environment is accessible through 1Password Developer.',
+        ),
+      );
+      return;
+    }
+
+    const formattedError = formatOnePasswordEnvironmentCommandError(error);
+    checks.push(
+      fail('onepassword_environment_run', formattedError.message, formattedError.remediation),
+    );
+  }
+}
+
+async function appendOnePasswordChecks(checks, env, deps) {
+  await appendOnePasswordVaultAccessCheck(checks, env, deps);
+  await appendOnePasswordEnvironmentChecks(checks, env, deps);
+}
+
+async function appendTailscaleStatusCheck(checks, deps) {
+  if (getCheck(checks, 'command_tailscale')?.status === 'fail') {
+    checks.push(
+      fail(
+        'tailscale_status',
+        'Tailscale status could not be checked because tailscale is missing.',
+        'Install the Tailscale desktop app, sign in, connect this machine to the tanaab.dev tailnet, then rerun tailscale status --json.',
+      ),
+    );
+    return;
+  }
+
+  try {
+    const { stdout } = await deps.execFile('tailscale', ['status', '--json']);
+    checks.push(checkTailscaleStatus(JSON.parse(stdout)));
+  } catch (error) {
+    if (error instanceof SyntaxError) {
+      checks.push(
+        fail(
+          'tailscale_status',
+          'Tailscale status output was not parseable JSON.',
+          'Open Tailscale, sign in, connect this machine to the tanaab.dev tailnet, then rerun tailscale status --json.',
+        ),
+      );
+      return;
+    }
+
+    const formattedError = formatTailscaleCommandError(error);
+    checks.push(fail('tailscale_status', formattedError.message, formattedError.remediation));
+  }
+}
+
+function appendTokenFallbackCheck(checks, env) {
+  const presentTokenKeys = onePasswordTokenEnvKeys(env);
+  checks.push(
+    presentTokenKeys.length === 0
+      ? pass(
+          'bootstrap_token_env',
+          'No 1Password token fallback environment variables are present.',
+        )
+      : warn(
+          'bootstrap_token_env',
+          `1Password token fallback environment variable(s) are still set: ${presentTokenKeys.join(', ')}.`,
+          'Unset 1Password token fallback environment variables so readiness proves desktop app and Environment access without persistent token material.',
+        ),
+  );
+}
+
+/**
+ * Runs the read-only local readiness checks and returns the stable helper report.
+ *
+ * @param {object} [options] Runtime overrides and test seams for filesystem, command, and env access.
+ * @returns {Promise<{ok: boolean, checks: Array}>} Readiness report shaped as `{ ok, checks }`.
+ */
+export async function checkMachine(options = {}) {
+  const deps = {
+    commandExists: defaultCommandExists,
+    execFile: defaultExecFile,
+    lstat: defaultLstat,
+    readFile: defaultReadFile,
+    stat: defaultStat,
+    ...(options.deps ?? {}),
+  };
+  const env = options.env ?? process.env;
+  const homeDir = options.homeDir ?? os.homedir();
+  const repoRoot = options.repoRoot ?? DEFAULT_REPO_ROOT;
+  const checks = [];
+
+  checks.push(await commandCheck('brew', deps));
+  await appendBrewfileChecks(checks, repoRoot, deps);
+  await appendRequiredCommandChecks(checks, deps);
+  await appendStowedLinkChecks(checks, DOTFILE_LINKS, homeDir, deps);
+  await appendGeneratedConfigCheck(checks, homeDir, deps);
+  await appendAppPresenceChecks(checks, deps);
+  await appendOnePasswordChecks(checks, env, deps);
+  await appendTailscaleStatusCheck(checks, deps);
+  appendTokenFallbackCheck(checks, env);
+  await appendStowedLinkChecks(checks, CODEX_PLUGIN_LINKS, homeDir, deps);
+
+  return {
+    ok: !checks.some((check) => check.status === 'fail'),
+    checks,
+  };
+}
+
+export function formatReport(report) {
+  return `${JSON.stringify(report, null, 2)}\n`;
+}
diff --git a/skills/me-readiness/scripts/check-machine.js b/skills/me-readiness/scripts/check-machine.js
new file mode 100755
index 0000000..8f44a4e
--- /dev/null
+++ b/skills/me-readiness/scripts/check-machine.js
@@ -0,0 +1,11 @@
+#!/usr/bin/env bun
+
+import { fileURLToPath } from 'node:url';
+
+import { checkMachine, formatReport } from './check-machine-lib.js';
+
+if (process.argv[1] === fileURLToPath(import.meta.url)) {
+  const report = await checkMachine();
+  process.stdout.write(formatReport(report));
+  process.exitCode = report.ok ? 0 : 1;
+}
diff --git a/test/me-readiness-check-machine.spec.js b/test/me-readiness-check-machine.spec.js
new file mode 100644
index 0000000..17d86cb
--- /dev/null
+++ b/test/me-readiness-check-machine.spec.js
@@ -0,0 +1,550 @@
+import assert from 'node:assert/strict';
+import path from 'node:path';
+
+import {
+  CHECK_BUCKET_ORDER,
+  EXPECTED_ONEPASSWORD_ENVIRONMENT_ID,
+  ONEPASSWORD_TOKEN_ENV_KEYS,
+  REQUIRED_COMMANDS,
+  checkMachine,
+  formatReport,
+} from '../skills/me-readiness/scripts/check-machine-lib.js';
+
+const HOME_DIR = '/Users/tester';
+const REPO_ROOT = '/repo/me';
+
+function makePath(...segments) {
+  return path.join(HOME_DIR, ...segments);
+}
+
+function makeFileInfo({ symbolicLink = false } = {}) {
+  return {
+    isSymbolicLink() {
+      return symbolicLink;
+    },
+  };
+}
+
+function makeHealthyTailscaleStatus(overrides = {}) {
+  return {
+    BackendState: 'Running',
+    TailscaleIPs: ['100.64.0.1'],
+    CurrentTailnet: {
+      Name: 'tanaab.dev',
+    },
+    Self: {
+      InNetworkMap: true,
+      Online: true,
+    },
+    ...overrides,
+  };
+}
+
+function healthyExistingPaths(...missingPaths) {
+  const missing = new Set(missingPaths);
+
+  return [
+    '/Applications/1Password.app',
+    '/Applications/Tailscale.app',
+    makePath('.codex', 'AGENTS.md'),
+    makePath('.codex', 'config.shared.toml'),
+    makePath('.codex', 'plugins', 'piroplugin'),
+    makePath('.codex', 'plugins', 'tanaab'),
+    makePath('.codex', 'config.toml'),
+  ].filter((targetPath) => !missing.has(targetPath));
+}
+
+function makeDeps({
+  brewfile = ['cask "1password"', 'cask "1password-cli@beta"', 'cask "tailscale"'].join('\n'),
+  commands = REQUIRED_COMMANDS,
+  configMode = 0o100600,
+  environmentCliHelp = true,
+  environmentExecError = false,
+  environmentStdout,
+  environmentValues = {
+    matches: true,
+    present: true,
+  },
+  execCalls,
+  existingPaths,
+  opExecError = false,
+  opEnvironmentHelpError = false,
+  symbolicLinks,
+  tailscaleExecError = false,
+  tailscaleStatus = makeHealthyTailscaleStatus(),
+  tailscaleStdout,
+  vaults = [{ id: 'vault' }],
+} = {}) {
+  const existing = new Set(existingPaths ?? healthyExistingPaths());
+  const symlinks = new Set(
+    symbolicLinks ?? [
+      makePath('.codex', 'AGENTS.md'),
+      makePath('.codex', 'config.shared.toml'),
+      makePath('.codex', 'plugins', 'piroplugin'),
+      makePath('.codex', 'plugins', 'tanaab'),
+    ],
+  );
+  const commandSet = new Set(commands);
+
+  return {
+    commandExists(command) {
+      return commandSet.has(command);
+    },
+    execFile(command, args, options = {}) {
+      execCalls?.push({ args, command, options });
+
+      if (command === 'op') {
+        if (args[0] === 'vault') {
+          assert.deepEqual(args, ['vault', 'list', '--format', 'json']);
+
+          if (opExecError) {
+            throw opExecError instanceof Error ? opExecError : new Error('op failed');
+          }
+
+          return { stdout: JSON.stringify(vaults) };
+        }
+
+        if (args[0] === 'environment') {
+          assert.deepEqual(args, ['environment', 'read', '--help']);
+
+          if (opEnvironmentHelpError || !environmentCliHelp) {
+            throw opEnvironmentHelpError instanceof Error
+              ? opEnvironmentHelpError
+              : new Error('unknown command "environment"');
+          }
+
+          return {
+            stdout: 'Read environment variables from a 1Password Environment.\n',
+          };
+        }
+
+        if (args[0] === 'run' && args[1] === '--environment') {
+          assert.equal(args[2], EXPECTED_ONEPASSWORD_ENVIRONMENT_ID);
+          assert.equal(args[3], '--');
+          assert.equal(args[4], 'bun');
+          assert.equal(args[5], '-e');
+          assert.equal(typeof args[6], 'string');
+
+          if (environmentExecError) {
+            throw environmentExecError instanceof Error
+              ? environmentExecError
+              : new Error('op environment failed');
+          }
+
+          return { stdout: environmentStdout ?? JSON.stringify(environmentValues) };
+        }
+
+        throw new Error(`unexpected op args ${args.join(' ')}`);
+      }
+
+      if (command === 'tailscale') {
+        assert.deepEqual(args, ['status', '--json']);
+
+        if (tailscaleExecError) {
+          throw tailscaleExecError instanceof Error
+            ? tailscaleExecError
+            : new Error('tailscale failed');
+        }
+
+        return { stdout: tailscaleStdout ?? JSON.stringify(tailscaleStatus) };
+      }
+
+      throw new Error(`unexpected command ${command}`);
+    },
+    lstat(targetPath) {
+      if (!existing.has(targetPath)) {
+        throw new Error(`missing ${targetPath}`);
+      }
+
+      return makeFileInfo({ symbolicLink: symlinks.has(targetPath) });
+    },
+    readFile(targetPath) {
+      if (targetPath === path.join(REPO_ROOT, 'Brewfile')) {
+        return brewfile;
+      }
+
+      throw new Error(`unexpected readFile ${targetPath}`);
+    },
+    stat(targetPath) {
+      assert.equal(targetPath, makePath('.codex', 'config.toml'));
+
+      if (!existing.has(targetPath)) {
+        throw new Error(`missing ${targetPath}`);
+      }
+
+      return { mode: configMode };
+    },
+  };
+}
+
+async function runCheck(options = {}) {
+  return checkMachine({
+    env: options.env ?? {},
+    homeDir: HOME_DIR,
+    repoRoot: REPO_ROOT,
+    deps: makeDeps(options),
+  });
+}
+
+describe('skills/me-readiness/scripts/check-machine-lib', () => {
+  it('should report readiness when every local check passes', async () => {
+    const report = await runCheck({
+      existingPaths: healthyExistingPaths(),
+    });
+
+    assert.equal(report.ok, true);
+    assert.ok(report.checks.length > 0);
+    assert.deepEqual([...new Set(report.checks.map((check) => check.status))], ['pass']);
+  });
+
+  it('should emit the Homebrew command check first', async () => {
+    const report = await runCheck({
+      existingPaths: healthyExistingPaths(),
+    });
+
+    assert.equal(report.checks[0].id, 'command_brew');
+    assert.equal(report.checks[0].bucket, 'homebrew');
+  });
+
+  it('should assign stable buckets in readiness order', async () => {
+    const report = await runCheck({
+      existingPaths: healthyExistingPaths(),
+    });
+    const allowedBuckets = new Set(CHECK_BUCKET_ORDER);
+    const bucketIndexes = report.checks.map((check) => {
+      assert.ok(allowedBuckets.has(check.bucket), check.id);
+      return CHECK_BUCKET_ORDER.indexOf(check.bucket);
+    });
+
+    assert.deepEqual([...new Set(report.checks.map((check) => check.bucket))], CHECK_BUCKET_ORDER);
+
+    for (let index = 1; index < bucketIndexes.length; index += 1) {
+      assert.ok(
+        bucketIndexes[index] >= bucketIndexes[index - 1],
+        `${report.checks[index - 1].id} should not run after ${report.checks[index].id}`,
+      );
+    }
+  });
+
+  it('should include remediation for every warning and failure', async () => {
+    const report = await runCheck({
+      brewfile: 'cask "1password-cli"\n',
+      commands: REQUIRED_COMMANDS.filter((command) => !['gh', 'tailscale'].includes(command)),
+      configMode: 0o100644,
+      env: {
+        PIROME_OP_TOKEN: 'super-secret-token',
+      },
+      existingPaths: [
+        makePath('.codex', 'AGENTS.md'),
+        makePath('.codex', 'config.shared.toml'),
+        makePath('.codex', 'plugins', 'piroplugin'),
+        makePath('.codex', 'plugins', 'tanaab'),
+        makePath('.codex', 'config.toml'),
+      ],
+      symbolicLinks: [
+        makePath('.codex', 'config.shared.toml'),
+        makePath('.codex', 'plugins', 'piroplugin'),
+        makePath('.codex', 'plugins', 'tanaab'),
+      ],
+      vaults: [],
+    });
+
+    assert.equal(report.ok, false);
+
+    for (const check of report.checks) {
+      if (check.status !== 'pass') {
+        assert.equal(typeof check.remediation, 'string', check.id);
+        assert.notEqual(check.remediation.trim(), '', check.id);
+      }
+    }
+
+    assert.ok(report.checks.some((check) => check.id === 'brewfile_cask_1password'));
+    assert.ok(report.checks.some((check) => check.id === 'brewfile_cask_1password_cli_beta'));
+    assert.ok(
+      report.checks.some((check) => check.id === 'brewfile_cask_1password_cli_stable_absent'),
+    );
+    assert.ok(report.checks.some((check) => check.id === 'brewfile_cask_tailscale'));
+    assert.ok(report.checks.some((check) => check.id === 'command_gh'));
+    assert.ok(report.checks.some((check) => check.id === 'command_tailscale'));
+    assert.ok(report.checks.some((check) => check.id === 'onepassword_app'));
+    assert.ok(report.checks.some((check) => check.id === 'onepassword_environment_cli'));
+    assert.ok(report.checks.some((check) => check.id === 'onepassword_environment_run'));
+    assert.ok(report.checks.some((check) => check.id === 'tailscale_app'));
+    assert.ok(report.checks.some((check) => check.id === 'tailscale_status'));
+    assert.ok(report.checks.some((check) => check.id === 'bootstrap_token_env'));
+  });
+
+  it('should not leak token fallback values in formatted JSON', async () => {
+    const report = await runCheck({
+      env: {
+        OP_SERVICE_ACCOUNT_TOKEN: 'do-not-print-this-token',
+      },
+      existingPaths: healthyExistingPaths(),
+    });
+
+    const output = formatReport(report);
+
+    assert.doesNotMatch(output, /do-not-print-this-token/);
+    assert.match(output, /OP_SERVICE_ACCOUNT_TOKEN/);
+  });
+
+  it('should call 1Password vault list without token fallback environment variables', async () => {
+    const execCalls = [];
+
+    await runCheck({
+      env: {
+        KEEP_ME: 'yes',
+        OP_CONNECT_TOKEN: 'do-not-pass',
+        OP_SERVICE_ACCOUNT_TOKEN: 'do-not-pass',
+        OP_SESSION: 'do-not-pass',
+        OP_SESSION_tanaab: 'do-not-pass',
+        PIROME_OP_TOKEN: 'do-not-pass',
+        TANAAB_OP_TOKEN: 'do-not-pass',
+      },
+      execCalls,
+      existingPaths: healthyExistingPaths(),
+    });
+
+    const opCall = execCalls.find((call) => call.command === 'op');
+
+    assert.deepEqual(opCall.args, ['vault', 'list', '--format', 'json']);
+    assert.equal(opCall.options.env.KEEP_ME, 'yes');
+
+    for (const key of [...ONEPASSWORD_TOKEN_ENV_KEYS, 'OP_SESSION_tanaab']) {
+      assert.equal(Object.hasOwn(opCall.options.env, key), false, key);
+    }
+  });
+
+  it('should require the beta 1Password CLI cask and reject the stable cask', async () => {
+    const report = await runCheck({
+      brewfile: ['cask "1password"', 'cask "1password-cli"', 'cask "tailscale"'].join('\n'),
+      existingPaths: healthyExistingPaths(),
+    });
+    const betaCheck = report.checks.find(
+      (check) => check.id === 'brewfile_cask_1password_cli_beta',
+    );
+    const stableCheck = report.checks.find(
+      (check) => check.id === 'brewfile_cask_1password_cli_stable_absent',
+    );
+
+    assert.equal(betaCheck.status, 'fail');
+    assert.equal(stableCheck.status, 'fail');
+    assert.match(stableCheck.remediation, /1password-cli@beta/);
+  });
+
+  it('should fail 1Password Environment readiness when the beta CLI surface is missing', async () => {
+    const report = await runCheck({
+      environmentCliHelp: false,
+      existingPaths: healthyExistingPaths(),
+    });
+    const cliCheck = report.checks.find((check) => check.id === 'onepassword_environment_cli');
+    const runCheckResult = report.checks.find(
+      (check) => check.id === 'onepassword_environment_run',
+    );
+
+    assert.equal(cliCheck.status, 'fail');
+    assert.match(cliCheck.remediation, /1password-cli@beta/);
+    assert.equal(runCheckResult.status, 'fail');
+    assert.match(runCheckResult.message, /Environment CLI support is missing/);
+  });
+
+  it('should call op run environment without printing the authorization code', async () => {
+    const execCalls = [];
+
+    await runCheck({
+      env: {
+        KEEP_ME: 'yes',
+        OP_CONNECT_TOKEN: 'do-not-pass',
+        OP_SERVICE_ACCOUNT_TOKEN: 'do-not-pass',
+        OP_SESSION: 'do-not-pass',
+        OP_SESSION_tanaab: 'do-not-pass',
+        PIROME_OP_TOKEN: 'do-not-pass',
+        TANAAB_OP_TOKEN: 'do-not-pass',
+      },
+      execCalls,
+      existingPaths: healthyExistingPaths(),
+    });
+
+    const environmentCall = execCalls.find(
+      (call) => call.command === 'op' && call.args[0] === 'run' && call.args[1] === '--environment',
+    );
+
+    assert.deepEqual(environmentCall.args.slice(0, 6), [
+      'run',
+      '--environment',
+      EXPECTED_ONEPASSWORD_ENVIRONMENT_ID,
+      '--',
+      'bun',
+      '-e',
+    ]);
+    assert.doesNotMatch(environmentCall.args[6], /console\.log|printenv/);
+    assert.doesNotMatch(environmentCall.args[6], /READINESS_AUTHORIZATION_CODE=/);
+    assert.match(
+      environmentCall.args[6],
+      /a924fd4b1d47841c36ae7663db374cf040b913ffa56541fe0f345435e3cce267/,
+    );
+    assert.equal(environmentCall.options.env.KEEP_ME, 'yes');
+
+    for (const key of [...ONEPASSWORD_TOKEN_ENV_KEYS, 'OP_SESSION_tanaab']) {
+      assert.equal(Object.hasOwn(environmentCall.options.env, key), false, key);
+    }
+  });
+
+  it('should fail when the 1Password Environment authorization code is missing or wrong', async () => {
+    const missingReport = await runCheck({
+      environmentValues: {
+        matches: false,
+        present: false,
+      },
+      existingPaths: healthyExistingPaths(),
+    });
+    const wrongReport = await runCheck({
+      environmentValues: {
+        matches: false,
+        present: true,
+      },
+      existingPaths: healthyExistingPaths(),
+    });
+    const missingCheck = missingReport.checks.find(
+      (check) => check.id === 'onepassword_environment_run',
+    );
+    const wrongCheck = wrongReport.checks.find(
+      (check) => check.id === 'onepassword_environment_run',
+    );
+    const output = `${formatReport(missingReport)}${formatReport(wrongReport)}`;
+
+    assert.equal(missingCheck.status, 'fail');
+    assert.match(missingCheck.message, /was not provided/);
+    assert.equal(wrongCheck.status, 'fail');
+    assert.match(wrongCheck.message, /did not match/);
+    assert.doesNotMatch(output, /a924fd4b1d47841c36ae7663db374cf040b913ffa56541fe0f345435e3cce267/);
+  });
+
+  it('should identify 1Password Environment desktop app connection failures as local access issues', async () => {
+    const report = await runCheck({
+      environmentExecError: Object.assign(new Error('op environment failed'), {
+        stderr:
+          "1Password CLI couldn't connect to the 1Password desktop app. To fix this, update the 1Password app.",
+      }),
+      existingPaths: healthyExistingPaths(),
+    });
+    const environmentRunCheck = report.checks.find(
+      (check) => check.id === 'onepassword_environment_run',
+    );
+
+    assert.equal(environmentRunCheck.status, 'fail');
+    assert.match(environmentRunCheck.message, /could not connect/);
+    assert.match(environmentRunCheck.remediation, /unsandboxed local access/);
+  });
+
+  it('should emit parseable JSON with only supported statuses', async () => {
+    const report = await runCheck({
+      opExecError: true,
+      existingPaths: healthyExistingPaths(),
+    });
+
+    const parsed = JSON.parse(formatReport(report));
+    const statuses = new Set(parsed.checks.map((check) => check.status));
+
+    assert.deepEqual([...statuses].sort(), ['fail', 'pass']);
+  });
+
+  it('should identify 1Password desktop app connection failures as local access issues', async () => {
+    const report = await runCheck({
+      opExecError: Object.assign(new Error('op failed'), {
+        stderr:
+          "1Password CLI couldn't connect to the 1Password desktop app. To fix this, update the 1Password app.",
+      }),
+      existingPaths: healthyExistingPaths(),
+    });
+    const accountCheck = report.checks.find((check) => check.id === 'onepassword_cli_vault_access');
+
+    assert.equal(accountCheck.status, 'fail');
+    assert.match(accountCheck.message, /could not connect/);
+    assert.match(accountCheck.remediation, /unsandboxed local access/);
+  });
+
+  it('should fail when the Tailscale app is missing', async () => {
+    const report = await runCheck({
+      existingPaths: healthyExistingPaths('/Applications/Tailscale.app'),
+    });
+    const tailscaleAppCheck = report.checks.find((check) => check.id === 'tailscale_app');
+
+    assert.equal(report.ok, false);
+    assert.equal(tailscaleAppCheck.status, 'fail');
+  });
+
+  it('should fail Tailscale status when the command is missing', async () => {
+    const report = await runCheck({
+      commands: REQUIRED_COMMANDS.filter((command) => command !== 'tailscale'),
+      existingPaths: healthyExistingPaths(),
+    });
+    const commandCheck = report.checks.find((check) => check.id === 'command_tailscale');
+    const statusCheck = report.checks.find((check) => check.id === 'tailscale_status');
+
+    assert.equal(commandCheck.status, 'fail');
+    assert.equal(statusCheck.status, 'fail');
+    assert.match(statusCheck.message, /tailscale is missing/);
+  });
+
+  it('should fail Tailscale status when connected to the wrong tailnet', async () => {
+    const report = await runCheck({
+      existingPaths: healthyExistingPaths(),
+      tailscaleStatus: makeHealthyTailscaleStatus({
+        CurrentTailnet: {
+          Name: 'other.example',
+        },
+      }),
+    });
+    const statusCheck = report.checks.find((check) => check.id === 'tailscale_status');
+
+    assert.equal(statusCheck.status, 'fail');
+    assert.match(statusCheck.message, /other\.example/);
+  });
+
+  it('should fail Tailscale status when the local node is offline or not running', async () => {
+    const report = await runCheck({
+      existingPaths: healthyExistingPaths(),
+      tailscaleStatus: makeHealthyTailscaleStatus({
+        BackendState: 'Stopped',
+        Self: {
+          InNetworkMap: false,
+          Online: false,
+        },
+        TailscaleIPs: [],
+      }),
+    });
+    const statusCheck = report.checks.find((check) => check.id === 'tailscale_status');
+
+    assert.equal(statusCheck.status, 'fail');
+    assert.match(statusCheck.message, /BackendState/);
+    assert.match(statusCheck.message, /not online/);
+    assert.match(statusCheck.message, /no Tailscale IPs/);
+  });
+
+  it('should fail Tailscale status when JSON output is invalid', async () => {
+    const report = await runCheck({
+      existingPaths: healthyExistingPaths(),
+      tailscaleStdout: '{not json',
+    });
+    const statusCheck = report.checks.find((check) => check.id === 'tailscale_status');
+
+    assert.equal(statusCheck.status, 'fail');
+    assert.match(statusCheck.message, /not parseable JSON/);
+  });
+
+  it('should identify Tailscale daemon connection failures as local access issues', async () => {
+    const report = await runCheck({
+      existingPaths: healthyExistingPaths(),
+      tailscaleExecError: Object.assign(new Error('tailscale failed'), {
+        stderr:
+          'failed to connect to local Tailscaled process and failed to enumerate processes while looking for it',
+      }),
+    });
+    const statusCheck = report.checks.find((check) => check.id === 'tailscale_status');
+
+    assert.equal(statusCheck.status, 'fail');
+    assert.match(statusCheck.message, /local Tailscale service/);
+    assert.match(statusCheck.remediation, /unsandboxed local access/);
+  });
+});