
Commit 104751f

jonit-dev, Test User, and claude authored
feat: add e2e-validated label for automated acceptance proof on PRs (#91)
* chore: organize PRDs and create architecture docs

Move completed PRDs to done/ folder:

- job-registry.md (Job Registry implementation)
- analytics-job.md (Analytics job implementation)
- fix-executor-streaming-output.md (Terminal streaming fix)
- fix-prd-execution-failures.md (Double rate-limit detection)
- open-source-readiness.md (OSS community files)
- remove-legacy-personas-and-filesystem-prd.md (Dead code cleanup)
- prd-provider-schedule-overrides.md (Provider schedule overrides)
- provider-agnostic-instructions.md (Instructions directory refactoring)
- night-watch/provider-aware-queue.md (Per-bucket concurrency)

Delete obsolete PRD:

- refactor-interaction-listener.md (packages/slack/ removed in commit 46637a0)

Add architecture docs:

- docs/architecture/job-registry.md (Job registry architecture)
- docs/architecture/analytics-job.md (Analytics job architecture)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add e2e-validated label for automated acceptance proof on PRs

Implements the E2E Validated Label PRD across 4 phases:

Phase 1 — Label Definition & Config:

- Add 'e2e-validated' (green #0e8a16) to NIGHT_WATCH_LABELS in labels.ts
- Add validatedLabel: string to IQaConfig in types.ts
- Add DEFAULT_QA_VALIDATED_LABEL constant and update DEFAULT_QA in constants.ts
- Add validatedLabel extra field to QA job definition in job-registry.ts
- Config normalization picks it up automatically via normalizeJobConfig

Phase 2 — QA Script Integration:

- Pass NW_QA_VALIDATED_LABEL env var from qa.ts buildEnvVars()
- Show Validated Label in qa --dry-run config table
- Read VALIDATED_LABEL in QA bash script with e2e-validated default
- Add ensure_validated_label() helper (idempotent gh label create --force)
- Apply label on 'passing' outcome, remove on 'issues_found'/'no_tests_needed'
- Track VALIDATED_PRS_CSV and include in emit_result and Telegram messages

Phase 3 — Init Label Sync:

- Add step 11 to night-watch init: sync all NIGHT_WATCH_LABELS to GitHub
- Skips gracefully when no GitHub remote or gh not authenticated
- Updates totalSteps from 13 to 14 and adds Label Sync to summary table

Phase 4 — Dry-Run & Summary Integration:

- Validated Label shown in qa --dry-run config table (merged into Phase 2)
- emit_result includes validated= field for all success/warning outcomes

Tests:

- packages/core/src/__tests__/board/labels.test.ts (new): e2e-validated presence
- packages/core/src/__tests__/jobs/job-registry.test.ts: validatedLabel extra field
- packages/cli/src/__tests__/commands/qa.test.ts: NW_QA_VALIDATED_LABEL env var
- packages/cli/src/__tests__/commands/init.test.ts: label sync preconditions

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Test User <test@test.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
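Phase 2's VALIDATED_PRS_CSV tracking follows a common bash accumulation pattern. A minimal standalone sketch (the helper name and PR numbers are illustrative, not the actual script code):

```shell
#!/usr/bin/env bash
set -euo pipefail

VALIDATED_PRS_CSV=""

# Hypothetical helper: append a PR number to the comma-separated list,
# avoiding a leading comma on the first entry.
track_validated_pr() {
  local pr="$1"
  if [ -z "${VALIDATED_PRS_CSV}" ]; then
    VALIDATED_PRS_CSV="${pr}"
  else
    VALIDATED_PRS_CSV="${VALIDATED_PRS_CSV},${pr}"
  fi
}

track_validated_pr 101
track_validated_pr 104
echo "validated=${VALIDATED_PRS_CSV}"
```

The resulting CSV is what the commit describes feeding into `emit_result` and Telegram messages.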
1 parent a3f214a commit 104751f

25 files changed

Lines changed: 1458 additions & 314 deletions
File renamed without changes.

docs/PRDs/fix-executor-streaming-output.md renamed to docs/PRDs/done/fix-executor-streaming-output.md

Lines changed: 18 additions & 7 deletions
**Problem:** When the executor launches `claude -p`, it logs "output will stream below" but no output actually streams to the terminal — all output is silently redirected to the log file via `>> "${LOG_FILE}" 2>&1`.

**Files Analyzed:**

- `scripts/night-watch-helpers.sh` — `log()` function (writes ONLY to file)
- `scripts/night-watch-cron.sh` — provider dispatch (lines 545-577, 624-637)
- `scripts/night-watch-audit-cron.sh` — provider dispatch (lines 163-189)
- `packages/core/src/utils/shell.ts` — `executeScriptWithOutput()` (already streams child stdout/stderr to terminal)

**Current Behavior:**

- `log()` writes ONLY to `LOG_FILE` (`echo ... >> "${log_file}"`) — not to stdout or stderr
- Provider commands redirect ALL output to file: `claude -p ... >> "${LOG_FILE}" 2>&1`
- Node's `executeScriptWithOutput()` listens on the bash child's stdout/stderr pipes but receives nothing because the bash script sends everything to the file
## 2. Solution

**Approach:**

- Replace `>> "${LOG_FILE}" 2>&1` with `2>&1 | tee -a "${LOG_FILE}"` for provider dispatch — output goes to both the log file AND stdout (which propagates through Node's pipe to the terminal)
- Modify `log()` to also write to stderr so diagnostic messages are visible in the terminal during interactive `night-watch run`
- All scripts already use `set -euo pipefail`, so pipe exit codes propagate correctly (if `claude` fails with code 1 and `tee` succeeds with 0, pipefail returns 1)

**Key Decisions:**

- `tee -a` (append mode) preserves the existing log file behavior
- Provider output goes to stdout via tee; diagnostic messages go to stderr via log — keeps them on separate channels
- No changes to `executeScriptWithOutput()` needed — it already streams both pipes to the terminal
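The redirection change above can be exercised in isolation. A minimal sketch with the provider command stubbed out (`provider_cmd` is a stand-in, not real project code):

```shell
#!/usr/bin/env bash
set -euo pipefail

LOG_FILE=$(mktemp)

# Stand-in for the real dispatch, which runs `claude -p ...`.
provider_cmd() { echo "streaming output"; }

# Old form: provider_cmd >> "${LOG_FILE}" 2>&1   (terminal sees nothing)
# New form: output reaches BOTH the log file and stdout.
provider_cmd 2>&1 | tee -a "${LOG_FILE}"
```

Running this prints the stub's output to the terminal while the same line lands in the log file.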
### Phase 1: Fix `log()` to also write to stderr + use `tee` for provider output

**Files (5):**

- `scripts/night-watch-helpers.sh` — make `log()` also write to stderr
- `scripts/night-watch-cron.sh` — replace `>> "${LOG_FILE}" 2>&1` with `2>&1 | tee -a "${LOG_FILE}"` (3 occurrences: main dispatch, codex dispatch, fallback)
- `scripts/night-watch-audit-cron.sh` — same replacement (2 occurrences)

**Implementation:**

- [ ] In `night-watch-helpers.sh`, modify `log()` to also echo to stderr:

```bash
log() {
  local log_file="${LOG_FILE:?LOG_FILE not set}"
  ...
}
```
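The `log()` snippet above is cut off at the diff hunk boundary. A minimal self-contained sketch of the intended dual-target behavior (the timestamp format is an assumption; the real helper lives in `scripts/night-watch-helpers.sh`):

```shell
#!/usr/bin/env bash
set -euo pipefail

LOG_FILE=$(mktemp)

# Sketch: log() appends to the file (existing behavior) AND mirrors the
# same line to stderr (the fix), so interactive runs see diagnostics live.
log() {
  local log_file="${LOG_FILE:?LOG_FILE not set}"
  local msg
  msg="[$(date '+%Y-%m-%d %H:%M:%S')] $*"
  echo "${msg}" >> "${log_file}"
  echo "${msg}" >&2
}

log "hello from the executor"
```

Because the mirror goes to stderr, provider output on stdout stays on its own channel, as the Key Decisions note.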
**Pattern for each replacement:**

Before:

```bash
if (
  cd "${WORKTREE_DIR}" && timeout "${SESSION_MAX_RUNTIME}" \
  ...
```

After:

```bash
if (
  cd "${WORKTREE_DIR}" && timeout "${SESSION_MAX_RUNTIME}" \
  ...
```
**Exit code behavior with `pipefail`:**

- All scripts use `set -euo pipefail` (line 2)
- If `timeout ... claude` exits 124 (timeout) and `tee` exits 0 → pipe returns 124 ✓
- If `timeout ... claude` exits 1 (failure) and `tee` exits 0 → pipe returns 1 ✓
- If `timeout ... claude` exits 0 (success) and `tee` exits 0 → pipe returns 0 ✓
- The `if (...); then` construct disables `set -e` for the condition, so non-zero exits are captured correctly

**Rate-limit detection still works:**

- `check_rate_limited` greps the LOG_FILE — `tee -a` still writes everything to the file, so this is unchanged
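The exit-code table can be verified directly in a few lines; the failing provider is simulated with a subshell:

```shell
#!/usr/bin/env bash
# -e is deliberately omitted so the script survives the failing pipeline
# and can inspect its exit status.
set -uo pipefail

LOG_FILE=$(mktemp)

# claude fails (exit 1), tee succeeds (exit 0) → pipefail reports 1
( exit 1 ) 2>&1 | tee -a "${LOG_FILE}" > /dev/null
fail_status=$?

# claude succeeds, tee succeeds → 0
echo "done" 2>&1 | tee -a "${LOG_FILE}" > /dev/null
ok_status=$?

echo "fail=${fail_status} ok=${ok_status}"
```

Without `pipefail`, the first pipeline would report `tee`'s 0 and mask the provider failure.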
**Tests Required:**

| Manual | Smoke test: `bash -n scripts/night-watch-pr-reviewer-cron.sh` | No syntax errors |

**User Verification:**

- Action: Run `night-watch run` (or trigger executor)
- Expected: Diagnostic log messages AND claude's streaming output visible in the terminal in real time
## Files to Modify

| File                                      | Change                                             |
| ----------------------------------------- | -------------------------------------------------- |
| `scripts/night-watch-helpers.sh`          | `log()` also writes to stderr                      |
| `scripts/night-watch-cron.sh`             | 3× replace `>> LOG 2>&1` with `2>&1 \| tee -a LOG` |
| `scripts/night-watch-audit-cron.sh`       | 2× same replacement                                |
| `scripts/night-watch-qa-cron.sh`          | 2× same replacement                                |
| `scripts/night-watch-pr-reviewer-cron.sh` | 2× same replacement                                |

docs/PRDs/fix-prd-execution-failures.md renamed to docs/PRDs/done/fix-prd-execution-failures.md

Lines changed: 12 additions & 2 deletions
PRD execution is failing consistently across projects (night-watch-cli, autopilo…

When the proxy returns 429, the system correctly triggers a native Claude fallback. **However, if native Claude is also rate-limited**, the fallback exits with code 1 and the system records `provider_exit` instead of `rate_limited`.

**Evidence from `logs/executor.log`:**

```
API Error: 429 {"error":{"code":"1308","message":"Usage limit reached for 5 hour..."}}
RATE-LIMITED: Proxy quota exhausted — triggering native Claude fallback
...
FAIL: Night watch exited with code 1 while processing 69-ux-revamp...
```

**Impact:** The system records a `failure` with `reason=provider_exit` instead of `rate_limited`, which:

- Triggers a long cooldown (max_runtime-based) instead of a rate-limit-appropriate retry
- Sends misleading failure notifications
- Prevents the PRD from being retried once the rate limit resets
The function scans `tail -50` of the shared `executor.log` file, but log entries from **previous runs** can bleed into the current run's error detail.

**Evidence:** Issue #70's failure detail contains issue #69's error message:

```
detail=[2026-03-07 00:40:59] [PID:75449] FAIL: Night watch exited with code 1 while processing 69-ux-revamp...
```

This happens because the log is append-only and `latest_failure_detail()` doesn't…

In filesystem mode, `code-cleanup-q1-2026.md` was selected and executed despite the work already being merged to master. Claude correctly identified the work was done but didn't create a PR. The cron script then recorded `failure_no_pr_after_success`.

**Evidence:**

```
OUTCOME: exit_code=0 total_elapsed=363s prd=code-cleanup-q1-2026.md
WARN: claude exited 0 but no open/merged PR found on night-watch/code-cleanup-q1-2026
```
This is a pre-existing filesystem mode issue (stale PRDs not moved to `done/`).

After the native Claude fallback runs (line ~626), check if the fallback also hit a rate limit before falling through to the generic failure handler.

**Implementation:**

1. After `RATE_LIMIT_FALLBACK_TRIGGERED` block (lines 603-632), if `EXIT_CODE != 0`, scan fallback output for rate-limit indicators (`"hit your limit"`, `429`, `"Usage limit"`)
2. If detected, set a new flag `DOUBLE_RATE_LIMITED=1`
3. In the outcome handler (lines 711-726), when `DOUBLE_RATE_LIMITED=1`:

**Specific changes in `night-watch-cron.sh`:**

After line 632 (`fi` closing the fallback block), add:

```bash
# Detect double rate-limit: both proxy AND native Claude exhausted
DOUBLE_RATE_LIMITED=0
...
fi
```
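Step 1's indicator scan can be sketched with a plain `grep` over the fallback output (the helper name and sample string are illustrative, not the actual cron-script code):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical helper: exit 0 when the output matches any rate-limit
# indicator named in the PRD ("hit your limit", 429, "Usage limit").
is_rate_limited() {
  grep -qE 'hit your limit|429|Usage limit' <<< "$1"
}

fallback_output='API Error: 429 {"error":{"code":"1308","message":"Usage limit reached"}}'

DOUBLE_RATE_LIMITED=0
if is_rate_limited "${fallback_output}"; then
  DOUBLE_RATE_LIMITED=1
fi
echo "DOUBLE_RATE_LIMITED=${DOUBLE_RATE_LIMITED}"
```

Using the check inside `if` keeps `set -e` from aborting the script on a non-match.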
In the outcome handler, add a new branch before the generic `else` on line 711:

```bash
elif [ "${DOUBLE_RATE_LIMITED}" = "1" ]; then
  if [ -n "${ISSUE_NUMBER}" ]; then
  ...
```

Modify `latest_failure_detail()` to accept an optional `since_line` parameter that filters to only lines written during the current run.

**Implementation:**

1. Change `latest_failure_detail()` (lines 79-92) to accept a second parameter `since_line`
2. Use `tail -n +${since_line}` instead of `tail -50` when `since_line` is provided
3. At the call site (line 712), pass the `LOG_LINE_BEFORE` captured at the start of the current attempt

**Specific changes:**

Replace `latest_failure_detail()`:

```bash
latest_failure_detail() {
  local log_file="${1:?log_file required}"
  ...
```

Update call site at line 712:

```bash
PROVIDER_ERROR_DETAIL=$(latest_failure_detail "${LOG_FILE}" "${LOG_LINE_BEFORE}")
```
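The `since_line` scoping in step 2 can be demonstrated standalone (log contents and line positions are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

log_file=$(mktemp)
printf '%s\n' \
  'FAIL: Night watch exited with code 1 while processing 69-ux-revamp' \
  'INFO: starting attempt for issue 70' \
  'FAIL: Night watch exited with code 1 while processing 70-widget' > "${log_file}"

# Assumption: one line already existed before the current attempt,
# so the current run's entries begin at line 2.
LOG_LINE_BEFORE=2

# tail -n +N prints from line N onward; earlier runs cannot bleed in.
detail=$(tail -n "+${LOG_LINE_BEFORE}" "${log_file}" | grep 'FAIL:' | tail -1)
echo "${detail}"
```

With the old `tail -50` approach, the issue-69 FAIL line would have been eligible to surface as issue 70's failure detail.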
This is already handled — the `code-cleanup-q1-2026.md` issue was a one-time s…

## Files to Modify

| File                          | Change                                                               |
| ----------------------------- | -------------------------------------------------------------------- |
| `scripts/night-watch-cron.sh` | Add double-rate-limit detection, scope failure detail to current run |
Lines changed: 27 additions & 10 deletions
**Problem:** Adding a new job type (e.g., analytics) requires touching 15+ files across 4 packages — types, constants, config normalization, env parsing, CLI command, server routes, API client, Scheduling UI, Settings UI, schedule templates, and more. Each job's state shape is inconsistent (executor/reviewer use top-level flat fields, qa/audit/analytics use nested config objects, slicer lives inside `roadmapScanner`).

**Files Analyzed:**

- `packages/core/src/types.ts` — `JobType`, `IJobProviders`, `INightWatchConfig`, `IQaConfig`, `IAuditConfig`, `IAnalyticsConfig`
- `packages/core/src/shared/types.ts` — duplicated type definitions for web contract
- `packages/core/src/constants.ts` — `DEFAULT_*` per job, `VALID_JOB_TYPES`, `DEFAULT_QUEUE_PRIORITY`, `LOG_FILE_NAMES`
- `web/store/useStore.ts` — minimal Zustand, no job-specific state

**Current Behavior:**

- 6 job types exist: `executor`, `reviewer`, `qa`, `audit`, `slicer`, `analytics`
- Executor/reviewer use flat top-level config fields (`cronSchedule`, `reviewerSchedule`, `executorEnabled`, `reviewerEnabled`)
- QA/audit/analytics use nested config objects (`config.qa`, `config.audit`, `config.analytics`) with common shape: `{ enabled, schedule, maxRuntime, ...extras }`
## 2. Solution

**Approach:**

1. Create a **Job Registry** in `packages/core/src/jobs/` that defines each job's metadata, defaults, config access patterns, and env parsing rules in a single object
2. Extract a **`IBaseJobConfig`** interface (`{ enabled, schedule, maxRuntime }`) that all job configs extend
3. Replace per-job boilerplate in `config-normalize.ts` and `config-env.ts` with generic registry-driven loops
4. Create a **web-side Job Registry** (`web/utils/jobs.ts`) with UI metadata (icons, labels, trigger functions, config accessors) that Scheduling/Settings pages iterate over
5. Add a **Zustand `jobs` slice** that provides computed job state derived from `status.config` so components don't need to know each job's config shape

**Architecture Diagram:**

```mermaid
flowchart TB
  subgraph Core["@night-watch/core"]
  ...
```

**Key Decisions:**

- **Migrate executor/reviewer to nested config**: All jobs will use `config.jobs.{id}: { enabled, schedule, maxRuntime, ...extras }`. Auto-detect legacy flat format and migrate on load. This is a breaking config change but gives uniform access patterns.
- **Registry is a const array, not DI**: Simple, testable, no runtime overhead
- **Web job registry stores React components directly** for icons (type-safe, tree-shakeable)
- **Generic `triggerJob(jobId)`** replaces per-job `triggerRun()`, `triggerReview()` etc. (keep old functions as thin wrappers for backward compat)

**Data Changes:**

- `INightWatchConfig` gains `jobs: Record<JobType, IBaseJobConfig & extras>` — replaces flat executor/reviewer fields and nested qa/audit/analytics objects
- Legacy flat fields (`cronSchedule`, `reviewerSchedule`, `executorEnabled`, `reviewerEnabled`) and nested objects (`qa`, `audit`, `analytics`, `roadmapScanner.slicerSchedule`) auto-detected and migrated on config load
- Config file rewritten in new format on first save after migration
**User-visible outcome:** Job registry exists and is the single source of truth for job metadata. All constants derived from it. Tests prove registry drives normalization.

**Files (5):**

- `packages/core/src/jobs/job-registry.ts` — **NEW** — `IJobDefinition` interface + `JOB_REGISTRY` array + accessor utilities
- `packages/core/src/jobs/index.ts` — **NEW** — barrel exports
- `packages/core/src/types.ts` — add `IBaseJobConfig` interface

- [ ] Define `IBaseJobConfig` interface: `{ enabled: boolean; schedule: string; maxRuntime: number }`
- [ ] Define `IJobDefinition<TConfig extends IBaseJobConfig = IBaseJobConfig>` interface with:

```typescript
interface IJobDefinition<TConfig extends IBaseJobConfig = IBaseJobConfig> {
  id: JobType;
  name: string;           // "Executor", "QA", "Auditor"
  description: string;    // "Creates implementation PRs from PRDs"
  cliCommand: string;     // "run", "review", "qa", "audit", "planner", "analytics"
  logName: string;        // "executor", "reviewer", "night-watch-qa", etc.
  lockSuffix: string;     // ".lock", "-r.lock", "-qa.lock", etc.
  queuePriority: number;  // 50, 40, 30, 20, 10

  // Env var prefix for NW_* overrides
  envPrefix: string;      // "NW_EXECUTOR", "NW_QA", "NW_AUDIT", etc.

  // Extra config field normalizers (beyond enabled/schedule/maxRuntime)
  extraFields?: IExtraFieldDef[]; // e.g., QA's branchPatterns, artifacts, etc.

  // Defaults
  defaultConfig: TConfig;

  migrateLegacy?: (raw: Record<string, unknown>) => Partial<TConfig> | undefined;
}
```

- [ ] Create `JOB_REGISTRY` const array with entries for all 6 job types
- [ ] Create utility functions: `getJobDef(id)`, `getAllJobDefs()`, `getJobDefByCommand(cmd)`
- [ ] Derive `VALID_JOB_TYPES`, `DEFAULT_QUEUE_PRIORITY`, `LOG_FILE_NAMES` from registry (keep exports stable)
**User-visible outcome:** `config-normalize.ts` and `config-env.ts` use generic loops. Legacy flat config auto-migrated. Adding a job config section no longer requires per-job blocks.

**Files (5):**

- `packages/core/src/jobs/job-registry.ts` — add `normalizeJobConfig()` and `buildJobEnvOverrides()` generic helpers
- `packages/core/src/config-normalize.ts` — replace per-job normalization blocks with registry loop + legacy migration
- `packages/core/src/config-env.ts` — replace per-job env blocks with registry loop
- `packages/core/src/config.ts` — `loadConfig()` detects legacy format and migrates in-memory (optionally rewrites file)
- `packages/core/src/__tests__/config-normalize.test.ts` — verify normalization + migration

**Implementation:**

- [ ] Add `normalizeJobConfig(rawConfig, jobDef)` that reads raw object, applies defaults, validates fields
- [ ] Each `IJobDefinition` declares `extraFields` for job-specific fields beyond `{ enabled, schedule, maxRuntime }`
- [ ] Add `migrateLegacyConfig(raw)` that detects old format (e.g., `cronSchedule` exists at top level) and transforms to new `jobs: { executor: { ... }, ... }` shape
**User-visible outcome:** Scheduling and Settings pages read job definitions from a registry instead of hardcoded arrays. Zustand provides computed job state.

**Files (4):**

- `web/utils/jobs.ts` — **NEW** — Web-side job registry with UI metadata
- `web/store/useStore.ts` — add `jobs` computed slice derived from `status.config`
- `web/api.ts` — add generic `triggerJob(jobId)` function
- `web/utils/cron.ts` — derive schedule template keys from registry

**Implementation:**

- [ ] Create `IWebJobDefinition` extending core `IJobDefinition` with UI fields:

```typescript
interface IWebJobDefinition extends IJobDefinition {
  icon: string;                // lucide icon component name
  triggerEndpoint: string;     // '/api/actions/qa'
  scheduleTemplateKey: string; // key in IScheduleTemplate.schedules
  settingsSection?: 'general' | 'advanced'; // where in Settings to show
}
```
**User-visible outcome:** Scheduling page renders job cards from the registry. Adding a new job automatically shows it in Scheduling.

**Files (3):**

- `web/pages/Scheduling.tsx` — replace hardcoded `agents` array with registry-driven rendering
- `web/components/scheduling/ScheduleConfig.tsx` — use registry for form fields
- `web/utils/cron.ts` — update `IScheduleTemplate` to be extensible

**Implementation:**

- [ ] Replace the hardcoded `agents: IAgentInfo[]` array with `WEB_JOB_REGISTRY.map(job => ...)`
- [ ] Replace `handleJobToggle` if/else chain with generic `job.buildEnabledPatch(enabled, config)`
- [ ] Replace `handleTriggerJob` map with generic `triggerJob(job.id)`

**User-visible outcome:** Settings page job config sections rendered from registry. Adding a new job auto-shows its settings.

**Files (3):**

- `web/pages/Settings.tsx` — replace per-job settings JSX blocks with registry loop
- `web/components/dashboard/AgentStatusBar.tsx` — use registry for process status
- `packages/server/src/routes/action.routes.ts` — generate routes from registry

**Implementation:**

- [ ] Settings: iterate `WEB_JOB_REGISTRY` to render job config sections
- [ ] Each `IWebJobDefinition` can declare its settings fields: `settingsFields: ISettingsField[]`
- [ ] `AgentStatusBar`: derive process list from registry instead of hardcoded