Full user flow:
1. User adds a pipelines array to night-watch.config.json
2. Runs nw pipeline list to confirm detection
3. Runs nw pipeline run full-cycle — terminal shows live step-by-step progress with ✓ / ✗ / – icons
4. nw pipeline status shows history with per-step outcomes
5. nw install registers cron entries; nw doctor warns if missing
1. Context
Problem: Night Watch jobs (executor, reviewer, QA, audit) run independently with no ability to chain outcomes, so multi-stage automation workflows require manual re-triggering between every stage.
Phase 1: Data Layer — Types compile and DB schema migrates
Files (max 5):
packages/core/src/types.ts — add IPipelineStep, IPipelineConfig, IPipelineRun, IPipelineStepResult, PipelineRunStatus, PipelineStepStatus; add pipelines?: IPipelineConfig[] to INightWatchConfig
packages/core/src/storage/sqlite/migrations.ts — append pipeline_runs + pipeline_step_results DDL inside the existing db.exec() template string
packages/core/src/constants.ts — add DEFAULT_PIPELINE_POLL_INTERVAL_MS = 10_000 and DEFAULT_PIPELINE_STEP_TIMEOUT_MS = 7_200_000
Implementation:
Add to packages/core/src/types.ts:
export type PipelineStepStatus = 'pending' | 'running' | 'success' | 'failure' | 'skipped';
export type PipelineRunStatus = 'running' | 'success' | 'failure' | 'cancelled';

export interface IPipelineStep {
  /** Unique identifier for this step within the pipeline */
  id: string;
  /** Job type to execute */
  job: JobType;
  /** Human-readable label (defaults to id if omitted) */
  label?: string;
  /**
   * When to advance to the next step.
   * 'success' (default): proceed only if this step status is 'success'
   * 'failure': proceed only if this step status is 'failure'
   * 'always': always advance regardless of outcome
   */
  continueOn?: 'success' | 'failure' | 'always';
  /** Override max polling wait for this step in ms */
  timeoutMs?: number;
}

export interface IPipelineConfig {
  /** Unique identifier used in CLI (e.g. 'full-cycle') */
  id: string;
  /** Human-readable name */
  name: string;
  /** Ordered steps executed sequentially */
  steps: IPipelineStep[];
  /** Optional cron schedule (e.g. '0 20 * * 1') */
  schedule?: string;
  /**
   * Which pipeline outcomes trigger a webhook notification.
   * Defaults to ['complete'] when omitted.
   */
  notifyOn?: Array<'success' | 'failure' | 'complete'>;
}

export interface IPipelineStepResult {
  stepId: string;
  job: JobType;
  label: string;
  status: PipelineStepStatus;
  queueEntryId?: number;
  startedAt?: number;
  completedAt?: number;
  exitCode?: number;
}

export interface IPipelineRun {
  id: string;
  pipelineId: string;
  status: PipelineRunStatus;
  startedAt: number;
  completedAt?: number;
  steps: IPipelineStepResult[];
}
Add pipelines?: IPipelineConfig[]; to INightWatchConfig.
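For orientation, a config shape this enables might look like the following sketch (the full-cycle id and slicer/executor jobs mirror the smoke-test example later in this document; the schedule and timeoutMs values are illustrative, not prescribed):

```json
{
  "pipelines": [
    {
      "id": "full-cycle",
      "name": "Full Cycle",
      "schedule": "0 20 * * 1",
      "notifyOn": ["complete"],
      "steps": [
        { "id": "s1", "job": "slicer" },
        { "id": "s2", "job": "executor", "continueOn": "success", "timeoutMs": 3600000 }
      ]
    }
  ]
}
```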
Append inside the existing db.exec(`...`) template literal in migrations.ts:
CREATE TABLE IF NOT EXISTS pipeline_runs (
  id TEXT PRIMARY KEY,
  pipeline_id TEXT NOT NULL,
  status TEXT NOT NULL,
  started_at INTEGER NOT NULL,
  completed_at INTEGER
);

CREATE INDEX IF NOT EXISTS idx_pipeline_runs_lookup
  ON pipeline_runs(pipeline_id, started_at DESC);

CREATE TABLE IF NOT EXISTS pipeline_step_results (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  pipeline_run_id TEXT NOT NULL REFERENCES pipeline_runs(id),
  step_id TEXT NOT NULL,
  job TEXT NOT NULL,
  label TEXT NOT NULL,
  status TEXT NOT NULL DEFAULT 'pending',
  queue_entry_id INTEGER,
  started_at INTEGER,
  completed_at INTEGER,
  exit_code INTEGER
);

CREATE INDEX IF NOT EXISTS idx_pipeline_steps_run
  ON pipeline_step_results(pipeline_run_id, id ASC);
Tests Required (packages/core/src/__tests__/storage/migrations.test.ts):
should create pipeline_runs table — PRAGMA table_info(pipeline_runs) returns rows including id, pipeline_id, status
should create pipeline_step_results table — PRAGMA table_info(pipeline_step_results) includes pipeline_run_id column
should be idempotent when runMigrations called twice — second call to runMigrations(db) throws no error
User Verification:
Action: yarn verify
Expected: Compiles cleanly; IPipelineConfig, IPipelineStep, IPipelineRun are importable from @night-watch/core
Phase 2: Pipeline Engine — PipelineRunner.run() executes steps sequentially and persists state
Files (max 5):
packages/core/src/jobs/pipeline-runner.ts (new) — PipelineRunner class with run(), _executeStep(), _pollUntilDone(), and SQLite helpers
packages/core/src/jobs/pipeline-registry.ts (new) — PipelineRegistry with list() and get(id)
packages/core/src/index.ts — export PipelineRunner, PipelineRegistry, and pipeline types
Implementation:
packages/core/src/jobs/pipeline-registry.ts:
import type { INightWatchConfig, IPipelineConfig } from '../types.js';

export class PipelineRegistry {
  constructor(private readonly config: INightWatchConfig) {}

  list(): IPipelineConfig[] {
    return this.config.pipelines ?? [];
  }

  get(id: string): IPipelineConfig {
    const pipeline = this.list().find((p) => p.id === id);
    if (!pipeline) throw new Error(`Pipeline "${id}" not found in config`);
    if (pipeline.steps.length === 0) throw new Error(`Pipeline "${id}" has no steps`);
    return pipeline;
  }
}
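The full pipeline-runner.ts listing ("key structure") is not reproduced in this document. As a simplified, self-contained sketch of the sequential loop it is expected to implement — execute each step, evaluate continueOn, mark downstream steps skipped on abort — with queue dispatch and SQLite persistence replaced by an injected execute callback (runPipeline, StepSpec, and execute are illustrative names, not the real class API):

```typescript
type StepStatus = 'pending' | 'success' | 'failure' | 'skipped';
type RunStatus = 'success' | 'failure';

interface StepSpec {
  id: string;
  continueOn?: 'success' | 'failure' | 'always';
}

// Sketch of the control flow only. In the real runner, `execute` stands in for
// inserting a job_queue row and polling job_runs every
// DEFAULT_PIPELINE_POLL_INTERVAL_MS until a terminal status is reached.
async function runPipeline(
  steps: StepSpec[],
  execute: (step: StepSpec) => Promise<'success' | 'failure'>,
): Promise<{ status: RunStatus; steps: { stepId: string; status: StepStatus }[] }> {
  const results = steps.map((s) => ({ stepId: s.id, status: 'pending' as StepStatus }));
  let runStatus: RunStatus = 'success';
  for (let i = 0; i < steps.length; i++) {
    results[i].status = await execute(steps[i]);
    // Assumption: any failed step marks the overall run as failure,
    // even when continueOn lets later steps execute.
    if (results[i].status === 'failure') runStatus = 'failure';
    const policy = steps[i].continueOn ?? 'success';
    const advance = policy === 'always' || policy === results[i].status;
    if (!advance) {
      // Abort: mark all downstream steps skipped, per the verification checklist.
      for (let j = i + 1; j < steps.length; j++) results[j].status = 'skipped';
      break;
    }
  }
  return { status: runStatus, steps: results };
}
```

The real class additionally persists pipeline_runs / pipeline_step_results rows at each transition and records queue_entry_id, per the schema in Phase 1.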
Tests Required:
pipeline-registry.test.ts — should return pipeline by id: registry.get('p1') returns config with id === 'p1'
pipeline-registry.test.ts — should throw when pipeline id not found: registry.get('missing') throws Error: Pipeline "missing" not found
pipeline-registry.test.ts — should throw when pipeline has no steps: steps: [] throws on registry.get('empty')
pipeline-runner.test.ts — should mark run as success when all steps succeed: job_runs returns status='success'; run.status === 'success'
pipeline-runner.test.ts — should skip remaining steps when continueOn success and step fails: step 3 has status === 'skipped'
pipeline-runner.test.ts — should advance when continueOn is always and step fails: step 3 executes
pipeline-runner.test.ts — should mark pipeline as failure when required step fails: run.status === 'failure'
Note: Use in-memory SQLite (new Database(':memory:')). Stub _enqueueJob with vi.spyOn to insert a synthetic job_runs row and return its id without a real queue process running.
User Verification:
Action: yarn test packages/core/src/__tests__/jobs/
Expected: All 7 pipeline tests pass
Phase 3: CLI Command — nw pipeline subcommands wired into the program
Files (max 5):
packages/cli/src/commands/pipeline.ts (new) — Commander subcommand: list, run, status, cancel
packages/cli/src/cli.ts — import and register pipelineCommand(program)
packages/cli/src/__tests__/commands/pipeline.test.ts (new) — command unit tests
Implementation:
packages/cli/src/commands/pipeline.ts:
import { Command } from 'commander';
import { loadConfig } from '@night-watch/core/config.js';
import { PipelineRegistry } from '@night-watch/core/jobs/pipeline-registry.js';
import { PipelineRunner } from '@night-watch/core/jobs/pipeline-runner.js';
import Database from 'better-sqlite3';
import * as path from 'path';
import * as os from 'os';
import { GLOBAL_CONFIG_DIR, STATE_DB_FILE_NAME } from '@night-watch/core/constants.js';
import { runMigrations } from '@night-watch/core/storage/sqlite/migrations.js';

function openDb(): Database.Database {
  const dbPath = path.join(
    process.env.NIGHT_WATCH_HOME ?? path.join(os.homedir(), GLOBAL_CONFIG_DIR),
    STATE_DB_FILE_NAME,
  );
  const db = new Database(dbPath);
  db.pragma('journal_mode = WAL');
  db.pragma('busy_timeout = 5000');
  runMigrations(db);
  return db;
}

export function pipelineCommand(program: Command): void {
  const cmd = program.command('pipeline').description('Manage and run sequential job pipelines');

  cmd
    .command('list')
    .description('List all configured pipelines')
    .action(async () => {
      const config = await loadConfig();
      const registry = new PipelineRegistry(config);
      const pipelines = registry.list();
      if (pipelines.length === 0) {
        console.log('No pipelines configured. Add a "pipelines" array to night-watch.config.json.');
        return;
      }
      for (const p of pipelines) {
        const chain = p.steps.map((s) => s.job).join(' → ');
        console.log(`  ${p.id}  ${p.name}`);
        console.log(`    Steps: ${chain}`);
        if (p.schedule) console.log(`    Schedule: ${p.schedule}`);
      }
    });

  cmd
    .command('run <id>')
    .description('Run a pipeline by id (blocks until completion)')
    .option('--project-path <path>', 'Project root to run jobs against', process.cwd())
    .action(async (id: string, opts: { projectPath: string }) => {
      const config = await loadConfig();
      const registry = new PipelineRegistry(config);
      const pipelineConfig = registry.get(id);
      const db = openDb();
      const runner = new PipelineRunner(db, opts.projectPath, config);
      console.log(`Starting pipeline "${pipelineConfig.name}" (${pipelineConfig.steps.length} steps)\n`);
      const run = await runner.run(pipelineConfig);
      for (const step of run.steps) {
        const icon = step.status === 'success' ? '✓' : step.status === 'skipped' ? '–' : '✗';
        const dur =
          step.startedAt && step.completedAt
            ? ` (${Math.round((step.completedAt - step.startedAt) / 1000)}s)`
            : '';
        console.log(`  ${icon} [${step.job}] ${step.label}  ${step.status}${dur}`);
      }
      console.log(`\nPipeline ${run.status === 'success' ? 'completed successfully' : 'failed'}.`);
      process.exit(run.status === 'success' ? 0 : 1);
    });

  cmd
    .command('status [run-id]')
    .description('Show recent pipeline runs or step details for a specific run')
    .action(async (runId?: string) => {
      const db = openDb();
      if (runId) {
        const run = db
          .prepare<[string], { id: string; pipeline_id: string; status: string; started_at: number }>(
            `SELECT * FROM pipeline_runs WHERE id = ?`,
          )
          .get(runId);
        if (!run) {
          console.error(`No run found: ${runId}`);
          process.exit(1);
        }
        const steps = db
          .prepare<[string], { step_id: string; job: string; label: string; status: string }>(
            `SELECT step_id, job, label, status FROM pipeline_step_results WHERE pipeline_run_id = ? ORDER BY id ASC`,
          )
          .all(runId);
        console.log(`Pipeline: ${run.pipeline_id}  Status: ${run.status}`);
        for (const s of steps) {
          const icon = s.status === 'success' ? '✓' : s.status === 'skipped' ? '–' : '✗';
          console.log(`  ${icon} [${s.job}] ${s.label}  ${s.status}`);
        }
      } else {
        const runs = db
          .prepare<[], { id: string; pipeline_id: string; status: string; started_at: number }>(
            `SELECT id, pipeline_id, status, started_at FROM pipeline_runs ORDER BY started_at DESC LIMIT 10`,
          )
          .all();
        if (runs.length === 0) {
          console.log('No pipeline runs recorded yet.');
          return;
        }
        for (const r of runs) {
          console.log(
            `  ${r.id.slice(0, 8)}  ${r.pipeline_id}  ${r.status}  ${new Date(r.started_at).toISOString()}`,
          );
        }
      }
    });

  cmd
    .command('cancel <run-id>')
    .description('Cancel a running pipeline')
    .action(async (runId: string) => {
      const db = openDb();
      const result = db
        .prepare<[number, string]>(
          `UPDATE pipeline_runs SET status = 'cancelled', completed_at = ? WHERE id = ? AND status = 'running'`,
        )
        .run(Date.now(), runId);
      if (result.changes === 0) {
        console.error(`No running pipeline found: ${runId}`);
        process.exit(1);
      }
      console.log(`Pipeline run ${runId} cancelled.`);
    });
}
In packages/cli/src/cli.ts:
import { pipelineCommand } from './commands/pipeline.js';
// ... after existing registrations:
pipelineCommand(program);
Tests Required (packages/cli/src/__tests__/commands/pipeline.test.ts):
pipeline list should print pipeline id and step chain — output contains full-cycle and slicer → executor
pipeline list should show no-config message when empty — output contains No pipelines configured
pipeline run should invoke runner.run with matching config — PipelineRunner.prototype.run spy called with config.id === 'full-cycle'
pipeline run should exit 0 when status is success — process.exit spy called with 0
pipeline run should exit 1 when status is failure — process.exit spy called with 1
pipeline status without run-id should list recent runs
pipeline cancel should set status to cancelled in DB — status = 'cancelled' after command executes
Note: Mock loadConfig via vi.mock, stub the PipelineRunner constructor, use in-memory SQLite.
User Verification:
Action: nw pipeline list in a configured project
Expected: pipelines listed with their step chains (e.g. slicer → executor → reviewer → qa)
Action: nw pipeline run full-cycle (staging env or mocked steps)
Expected: live step output such as ✓ [executor] implement success; command exits 0
Phase 4: Scheduling + Notifications — pipelines run on cron and deliver webhooks
Files (max 5):
packages/cli/src/commands/install.ts — detect pipelines[].schedule, register nw pipeline run <id> cron entries
packages/cli/src/commands/doctor.ts — warn when a scheduled pipeline is not installed in crontab
packages/core/src/jobs/pipeline-runner.ts — call sendWebhooks() at end of run() using existing notify.ts
Implementation:
In packages/cli/src/commands/install.ts, after the existing per-job cron loop:
const pipelines = config.pipelines ?? [];
for (const p of pipelines.filter((pipe) => !!pipe.schedule)) {
  const logFile = path.join(logDir, `pipeline-${p.id}.log`);
  const cronLine = `${p.schedule} cd ${projectPath} && nw pipeline run ${p.id} >> ${logFile} 2>&1`;
  addCronEntry(cronLine, `night-watch-pipeline-${p.id}`);
}
In packages/cli/src/commands/doctor.ts, after existing job checks:
const scheduledPipelines = (config.pipelines ?? []).filter((p) => !!p.schedule);
for (const p of scheduledPipelines) {
  if (!crontabContains(`night-watch-pipeline-${p.id}`)) {
    issues.push({
      level: 'warn',
      message: `Pipeline "${p.id}" has a schedule but is not installed in crontab. Run: nw install`,
    });
  }
}
In packages/core/src/jobs/pipeline-runner.ts, call sendWebhooks() at the end of run(), before returning.
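The exact snippet is not reproduced in this document. A minimal sketch, assuming sendWebhooks and INotificationContext behave as described in the Files Analyzed section — shouldNotify is a hypothetical helper, and the sendWebhooks call signature is an assumption, not the confirmed API:

```typescript
type NotifyOn = 'success' | 'failure' | 'complete';
type RunStatus = 'success' | 'failure' | 'cancelled';

// Hypothetical helper: 'complete' fires on any terminal outcome,
// 'success' / 'failure' only on the matching one. Defaults to ['complete'],
// matching the notifyOn doc comment in IPipelineConfig.
function shouldNotify(status: RunStatus, notifyOn: NotifyOn[] = ['complete']): boolean {
  if (status === 'cancelled') return false; // assumption: cancelled runs do not notify
  return notifyOn.includes('complete') || notifyOn.includes(status);
}

// At the end of run(), before returning (call shape assumed):
// if (shouldNotify(run.status, pipelineConfig.notifyOn)) {
//   await sendWebhooks(this.config, {
//     event: 'pipeline_complete',
//     projectName: pipelineConfig.name,
//     exitCode: run.status === 'success' ? 0 : 1,
//   });
// }
```

Keeping the decision in a pure helper makes the "should not call sendWebhooks when notifyOn is empty" test trivial to cover without spying on the webhook layer.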
Add 'pipeline_complete' to the NotificationEvent union type in packages/core/src/types.ts.
Tests Required:
install.test.ts — should register cron entry for pipeline with schedule: crontab line contains nw pipeline run full-cycle
install.test.ts — should not register cron entry for pipeline without schedule: no nw pipeline run entry when schedule is undefined
pipeline-runner.test.ts — should call sendWebhooks on complete when notifyOn includes complete: sendWebhooks spy called once after successful run
pipeline-runner.test.ts — should not call sendWebhooks when notifyOn is empty: sendWebhooks spy not called when notifyOn: []
doctor.test.ts — should warn when scheduled pipeline not in crontab: doctor output includes Pipeline "full-cycle" has a schedule but is not installed
User Verification:
Action: Add "schedule": "0 20 * * 1" to a pipeline, run nw install
Expected: crontab -l shows nw pipeline run <id> entry
Action: nw doctor
Expected: No pipeline warnings after a successful nw install
5. Checkpoint Protocol
After completing each phase, spawn the prd-work-reviewer agent:
Task tool:
subagent_type: "prd-work-reviewer"
prompt: "Review checkpoint for phase [N] of PRD at docs/PRDs/night-watch/conditional-job-pipelines.md"
Continue to the next phase only when the agent reports PASS.
Phases 2 and 4 require additional manual verification (real job dispatch into job_queue, webhook delivery) alongside the automated checkpoint.
6. Verification Strategy
Full Test Suite
yarn verify
yarn test packages/core/src/__tests__/storage/migrations.test.ts
yarn test packages/core/src/__tests__/jobs/pipeline-registry.test.ts
yarn test packages/core/src/__tests__/jobs/pipeline-runner.test.ts
yarn test packages/cli/src/__tests__/commands/pipeline.test.ts
yarn test packages/cli/src/__tests__/commands/install.test.ts
yarn test packages/cli/src/__tests__/commands/doctor.test.ts
CLI Smoke Test (after Phase 3)
# Project config contains:
# { "pipelines": [{ "id": "full-cycle", "name": "Full Cycle",
#     "steps": [{ "id": "s1", "job": "slicer" }, { "id": "s2", "job": "executor" }] }] }
nw pipeline list
# Expected: prints "full-cycle Full Cycle" and "slicer → executor"
nw pipeline status
# Expected: "No pipeline runs recorded yet."
End-to-End Verification Checklist
yarn verify passes with zero TypeScript errors and zero lint errors
All 21 new test cases pass
nw pipeline list reads and displays pipelines from config
nw pipeline run <id> dispatches jobs into job_queue, polls job_runs, prints step results
A step failing with continueOn: 'success' marks all downstream steps as skipped
A step failing with continueOn: 'always' does not block the next step
nw pipeline status shows run history with per-step outcome icons
nw pipeline cancel sets status = 'cancelled' in pipeline_runs
nw install registers cron entries only for pipelines with a schedule field
nw doctor warns when a scheduled pipeline is not in crontab
Webhook notification fires on pipeline completion when notifyOn includes 'complete'
7. Acceptance Criteria
All 4 phases complete with all tests passing
yarn verify passes (zero TypeScript errors, zero lint errors)
All automated prd-work-reviewer checkpoints report PASS
nw pipeline command accessible via npx @jonit-dev/night-watch-cli pipeline
Sequential step execution with continueOn logic verified end-to-end
Webhook notification delivered on pipeline completion
Scheduled pipelines appear in crontab after nw install
nw doctor detects uninstalled scheduled pipelines
Feature is not orphaned — pipelineCommand(program) is registered in cli.ts
PRD file: docs/PRDs/conditional-job-pipelines.md
PRD: Conditional Job Pipelines
Complexity Assessment
Score: 9 → HIGH
Integration Points Checklist
How will this feature be reached?
Entry points: nw pipeline run <id> (manual), cron entry (scheduled)
New code: packages/cli/src/commands/pipeline.ts (new), wired via packages/cli/src/cli.ts
Wiring: pipelineCommand(program) added to cli.ts; scheduled pipelines registered in packages/cli/src/commands/install.ts
Is this user-facing?
Yes — nw pipeline list, nw pipeline run, nw pipeline status, nw pipeline cancel
Full user flow:
1. User adds a pipelines array to night-watch.config.json
2. Runs nw pipeline list to confirm detection
3. Runs nw pipeline run full-cycle — terminal shows live step-by-step progress with ✓ / ✗ / – icons
4. nw pipeline status shows history with per-step outcomes
5. nw install registers cron entries; nw doctor warns if missing
1. Context
Problem: Night Watch jobs (executor, reviewer, QA, audit) run independently with no ability to chain outcomes, so multi-stage automation workflows require manual re-triggering between every stage.
Files Analyzed:
packages/core/src/types.ts — INightWatchConfig, JobType
packages/core/src/storage/sqlite/migrations.ts — CREATE TABLE IF NOT EXISTS / ALTER TABLE pattern
packages/core/src/utils/job-queue.ts — job_queue + job_runs tables, openDb(), polling pattern
packages/core/src/utils/notify.ts — INotificationContext, sendWebhooks()
packages/core/src/jobs/job-registry.ts — JOB_REGISTRY, IJobDefinition
packages/core/src/constants.ts — DEFAULT_* constant pattern
packages/cli/src/cli.ts — Commander command registration pattern
packages/cli/src/commands/install.ts — cron entry registration
packages/cli/src/commands/doctor.ts — health check pattern
packages/cli/src/commands/queue.ts — subcommand pattern reference
Current Behavior:
job_queue dispatches jobs; job_runs tracks per-job status (queued/running/success/failure/timeout)
runMigrations() runs db.exec() idempotently — safe to extend with new tables
INotificationContext with event, projectName, exitCode
2. Solution
Approach:
Add IPipelineConfig[] to INightWatchConfig — each pipeline is an ordered list of JobType steps with a continueOn policy
New tables (pipeline_runs, pipeline_step_results) store run state, linking steps to job_runs via queue_entry_id
PipelineRunner orchestrates steps sequentially: insert into job_queue → poll job_runs every 10 s → evaluate continueOn → advance or abort
New nw pipeline CLI command exposes list, run, status, cancel subcommands
install.ts registers cron entries for pipelines with a schedule field; doctor.ts warns when missing
Architecture Diagram:
flowchart LR
  Config[night-watch.config.json] --> Registry[PipelineRegistry]
  Registry --> Runner[PipelineRunner]
  Runner --> Queue[(job_queue table)]
  Runner --> Poller[Step Poller\n10 s interval]
  Poller --> JobRuns[(job_runs table)]
  Poller --> Runner
  Runner --> PipelineDB[(pipeline_runs\npipeline_step_results)]
  Runner --> Notify[notify.ts webhooks]
  CLI[nw pipeline run] --> Runner
  Cron[cron entry] --> CLI
Key Decisions:
Reuse existing job_queue dispatch and job_runs polling — no new execution mechanism needed
continueOn: 'success' | 'failure' | 'always' — declarative policy, no eval()
Poll locally at DEFAULT_PIPELINE_POLL_INTERVAL_MS = 10_000 — zero network cost, matches existing patterns
Generate run ids with randomUUID() from Node crypto — no extra dependency
Data Changes: new pipeline_runs and pipeline_step_results tables, linked to job_runs via queue_entry_id
3. Sequence Flow
sequenceDiagram
  participant CLI as nw pipeline run
  participant Runner as PipelineRunner
  participant Queue as job_queue table
  participant Runs as job_runs table
  participant DB as pipeline_* tables
  CLI->>Runner: run(pipelineConfig, projectPath)
  Runner->>DB: INSERT pipeline_runs (id, status=running)
  loop For each step
    Runner->>DB: INSERT pipeline_step_results (status=pending)
    Runner->>Queue: INSERT job_queue row (job_type, project_path)
    Queue-->>Runner: queueEntryId
    Runner->>DB: UPDATE step status=running, queue_entry_id
    loop Poll every 10 s
      Runner->>Runs: SELECT status WHERE queue_entry_id=?
      Runs-->>Runner: queued | running | success | failure | timeout
    end
    Runner->>DB: UPDATE step status=success|failure, completed_at
    alt continueOn mismatch
      Runner->>DB: UPDATE remaining steps status=skipped
      Runner->>DB: UPDATE pipeline_runs status=failure
      Runner-->>CLI: exit 1
    else continueOn satisfied
      note over Runner: advance to next step
    end
  end
  Runner->>DB: UPDATE pipeline_runs status=success
  Runner->>Notify: sendWebhooks(context)
  Runner-->>CLI: exit 0
4. Execution Phases
Created via night-watch prd create.