Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 0c06fac939
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```ts
if (stalePolicy === "discard-and-retry")
  return this.compute(granularPatch as ComputeInput<Root>);
```
Avoid replaying stale patch in discard-and-retry
When stale work is detected, this branch retries by calling compute(granularPatch) from the stale run. That patch already contains old input/computed values, so with stalePolicy: "discard-and-retry" and overlapping computes (for example, first call with older inputs and a second call with newer inputs), the retry can re-enqueue stale data and overwrite newer state. The retry should be driven from the latest pending/base state, not the stale run’s patch.
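One possible shape for the fix, sketched with hypothetical names (`Recomputer`, `submit`, `evaluate` — none of these are the actual API): track a version counter and always re-read the newest input when a run turns out to be stale, instead of replaying the stale run's own patch.

```ts
type Patch = Record<string, unknown>;

// Sketch: stale runs retry from the latest submitted input, never from
// their own (now outdated) patch.
class Recomputer {
  private version = 0;
  private latest: Patch = {};

  async compute(patch: Patch): Promise<Patch> {
    this.version++;
    this.latest = patch;
    return this.run();
  }

  private async run(): Promise<Patch> {
    const seen = this.version;
    const input = this.latest; // always read the newest input
    const result = await this.evaluate(input);
    if (seen !== this.version) {
      // Stale work detected: discard `result` and retry from the
      // latest input rather than re-enqueueing stale data.
      return this.run();
    }
    return result;
  }

  private async evaluate(patch: Patch): Promise<Patch> {
    return { ...patch, computed: true };
  }
}
```

With two overlapping calls, both promises settle with state derived from the newer input, so the older call can no longer overwrite newer state.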
```ts
for (const outcome of outcomes) {
  const pending = receivers.get(outcome.id);
  if (!pending) continue;
```
Settle batch entries missing from query outcomes
This code only resolves/rejects entries whose IDs appear in outcomes; any queued entry omitted by config.query is left unresolved forever. In that partial-response scenario, the corresponding submit() promise never settles, which can cause graph.compute() to hang indefinitely. After iterating outcomes, remaining unresolved receivers should be rejected so each queued request always completes.
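A minimal sketch of the suggested shape (the `Outcome`/`Receiver` types here are illustrative, not the real ones): settle everything the query answered, then explicitly reject whatever it omitted.

```ts
type Outcome = { id: string; value?: unknown; error?: unknown };
type Receiver = { resolve: (v: unknown) => void; reject: (e: unknown) => void };

// Sketch: every queued request settles, even on a partial response.
function settleBatch(outcomes: Outcome[], receivers: Map<string, Receiver>) {
  for (const outcome of outcomes) {
    const pending = receivers.get(outcome.id);
    if (!pending) continue;
    receivers.delete(outcome.id);
    if (outcome.error !== undefined) pending.reject(outcome.error);
    else pending.resolve(outcome.value);
  }
  // Anything still in the map was omitted from the response: fail it
  // explicitly so no submit() promise hangs forever.
  for (const [id, pending] of receivers) {
    pending.reject(new Error(`batch entry "${id}" missing from query outcomes`));
  }
  receivers.clear();
}
```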
```ts
  );
}

this.order = order;
```
Execute cyclic nodes when cyclic mode is freeze
With cyclic: "freeze", the constructor still assigns this.order directly from topoSort. For cyclic graphs, topoSort returns a truncated order (it sets hasCycle when not all nodes are ordered), so nodes in the cycle are dropped from this.order and never run in compute(), remaining perpetually pending instead of freezing to prior values. Freeze mode needs dedicated execution handling for cyclic nodes rather than using the truncated DAG order as-is.
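One way the dedicated handling could look — purely a sketch, with hypothetical names (`resolveExecutionPlan`, `previous`): nodes dropped from the truncated topological order are detected and frozen to their prior values instead of being left pending.

```ts
// Sketch: in freeze mode, cyclic nodes (those missing from the truncated
// topo order) carry forward their previous values rather than staying pending.
function resolveExecutionPlan(
  allNodes: string[],
  order: string[],
  hasCycle: boolean,
  previous: Record<string, unknown>,
  state: Record<string, unknown>,
): { runnable: string[]; frozen: string[] } {
  if (!hasCycle) return { runnable: order, frozen: [] };
  const ordered = new Set(order);
  const frozen = allNodes.filter((n) => !ordered.has(n));
  for (const n of frozen) {
    state[n] = previous[n]; // freeze to the prior computed value
  }
  return { runnable: order, frozen };
}
```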
14 issues found across 45 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/tests/dag/topoSort.spec.ts">
<violation number="1" location="src/tests/dag/topoSort.spec.ts:123">
P2: This test asserts that duplicate input nodes produce `hasCycle: true`, but the graph has no cycle — only duplicated entries. The implementation reports a false positive because `order.length` (deduplicated) is compared against `nodes.length` (with duplicates). Encoding this as expected behavior makes the false positive look intentional and will block a future fix from landing cleanly. Consider either deduplicating `nodes` at the start of `topoSort` or adjusting the test expectation to reflect the absence of a real cycle.</violation>
</file>
<file name=".husky/pre-commit">
<violation number="1" location=".husky/pre-commit:2">
P1: Using `--write` with `--staged` in a pre-commit hook will auto-fix files on disk but won't re-stage them, so the commit still contains the unfixed code. Either drop `--write` so the hook just fails on violations (letting the developer fix and re-stage), or pipe the fixed file list back into `git add`.</violation>
</file>
<file name="src/tests/graph/introspection.spec.ts">
<violation number="1" location="src/tests/graph/introspection.spec.ts:76">
P2: This assertion is vacuously true: `" a"` is a substring of `" a --> b"`, so `toContain(" a")` passes even without the standalone line being emitted. Use a line-aware match such as `expect(mermaid).toMatch(/^ a$/m)` to verify the standalone node actually appears on its own line.</violation>
</file>
<file name="package.json">
<violation number="1" location="package.json:8">
P1: Missing `exports` field — ESM consumers in Node.js won't receive the native ESM build. Node.js ignores the `module` field; it only uses `exports` (or falls back to `main`). Without conditional exports, `import "data-compute"` in Node.js resolves to the CJS entry via `main`, not the ESM `./dist/index.mjs`. Add an `exports` map with `import`/`require` conditions and `types` entries for correct dual-package resolution.</violation>
</file>
<file name="src/graph/compute-graph.ts">
<violation number="1" location="src/graph/compute-graph.ts:169">
P2: `JSON.parse(JSON.stringify(baseState))` is a lossy deep clone — it silently corrupts `Date`, `Set`, `Map`, `undefined`, `NaN`, and `Infinity` values. Since `isPlainObject` already handles these types, consider using `structuredClone(baseState)` or a clone that respects the same types `deepMerge` preserves.</violation>
<violation number="2" location="src/graph/compute-graph.ts:241">
P1: Race condition: `setNodeReady`/`setNodeError` are called per-element inside concurrent `Promise.all` promises, so the final node status for the template path is non-deterministic. If one element errors and another succeeds, the resulting status depends on resolution order. Consider aggregating all element results after `Promise.all` and setting the template-path status once (e.g., "error" if any element failed, "ready" otherwise).</violation>
<violation number="3" location="src/graph/compute-graph.ts:368">
P2: `depValues` is always `{}` — the trace never records which dependency values each node read. Consider populating it from the node's tracked dependencies (e.g., `for (const dep of this.depsMap.get(path) ?? []) depValues[dep] = getByPath(state, dep);`).</violation>
</file>
<file name="ARCHITECTURE.md">
<violation number="1" location="ARCHITECTURE.md:46">
P2: `dryRunProxy` yields a 2-element iterator (used to capture array-element dependencies during static analysis), not an empty one. Saying "empty iterator" will mislead anyone relying on this doc to understand why dry-run array iteration works.</violation>
</file>
<file name=".github/dependabot.yml">
<violation number="1" location=".github/dependabot.yml:8">
P3: The `day` option is only effective when `interval` is `"weekly"`. With `"monthly"`, Dependabot ignores this setting, so it's misleading. Either remove `day` or change the interval to `"weekly"`.</violation>
</file>
<file name="CONTRIBUTING.md">
<violation number="1" location="CONTRIBUTING.md:27">
P2: Incorrect project description: this rebuilds the **library**, not an "MCP server". Likely a copy-paste from another project's CONTRIBUTING.md.</violation>
</file>
<file name=".github/workflows/publish.yml">
<violation number="1" location=".github/workflows/publish.yml:38">
P1: `NPM_TOKEN` is missing from the semantic-release env block. The default semantic-release pipeline includes `@semantic-release/npm`, which requires this secret to authenticate with the npm registry. Publishing will fail without it.</violation>
</file>
<file name="README.md">
<violation number="1" location="README.md:305">
P1: Async side effect inside a `setState` updater violates React's purity requirement. React may call updater functions more than once (e.g. StrictMode), which would trigger duplicate `graph.compute()` calls. Move the side effect out of the updater — for example, use a ref to access the latest state:
```ts
function useComputedState<T>(graph: Graph<T>, initial: T) {
const [state, setState] = useState<T | null>(null);
const stateRef = useRef(state);
stateRef.current = state;
useEffect(() => {
graph.compute(initial).then(setState);
}, []);
const update = useCallback((patch: Partial<T>) => {
const prev = stateRef.current;
if (!prev) return;
void graph.compute({ ...prev, ...patch }).then(setState);
}, [graph]);
return [state, update] as const;
}
```</violation>
</file>
<file name="src/batch/coordinator.ts">
<violation number="1" location="src/batch/coordinator.ts:109">
P2: `AbortController` is added to `this.controllers` but never removed after the query settles. Each flush leaks a completed controller. Add a `finally` block to clean it up.</violation>
<violation number="2" location="src/batch/coordinator.ts:116">
P1: If `config.query` returns outcomes that omit some batch entry IDs, the corresponding promises will hang forever. After the outcomes loop, reject any remaining unresolved entries to avoid silent promise leaks.</violation>
</file>
Since this is your first cubic review, here's how it works:
- cubic automatically reviews your code and comments on bugs and improvements
- Teach cubic by replying to its comments. cubic learns from your replies and gets better over time
- Add one-off context when rerunning by tagging @cubic-dev-ai with guidance or docs links (including llms.txt)
- Ask questions if you need clarification on any suggestion
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```sh
npm test
npx @biomejs/biome check --write --staged --no-errors-on-unmatched
```
P1: Using --write with --staged in a pre-commit hook will auto-fix files on disk but won't re-stage them, so the commit still contains the unfixed code. Either drop --write so the hook just fails on violations (letting the developer fix and re-stage), or pipe the fixed file list back into git add.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At .husky/pre-commit, line 2:
<comment>Using `--write` with `--staged` in a pre-commit hook will auto-fix files on disk but won't re-stage them, so the commit still contains the unfixed code. Either drop `--write` so the hook just fails on violations (letting the developer fix and re-stage), or pipe the fixed file list back into `git add`.</comment>
<file context>
@@ -0,0 +1,2 @@
+npm test
+npx @biomejs/biome check --write --staged --no-errors-on-unmatched
\ No newline at end of file
</file context>
```diff
- npx @biomejs/biome check --write --staged --no-errors-on-unmatched
+ npx @biomejs/biome check --staged --no-errors-on-unmatched
```
```json
"license": "ISC",
"type": "commonjs",
"main": "./dist/index.js",
"module": "./dist/index.mjs",
```
P1: Missing exports field — ESM consumers in Node.js won't receive the native ESM build. Node.js ignores the module field; it only uses exports (or falls back to main). Without conditional exports, import "data-compute" in Node.js resolves to the CJS entry via main, not the ESM ./dist/index.mjs. Add an exports map with import/require conditions and types entries for correct dual-package resolution.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At package.json, line 8:
<comment>Missing `exports` field — ESM consumers in Node.js won't receive the native ESM build. Node.js ignores the `module` field; it only uses `exports` (or falls back to `main`). Without conditional exports, `import "data-compute"` in Node.js resolves to the CJS entry via `main`, not the ESM `./dist/index.mjs`. Add an `exports` map with `import`/`require` conditions and `types` entries for correct dual-package resolution.</comment>
<file context>
@@ -0,0 +1,74 @@
+ "license": "ISC",
+ "type": "commonjs",
+ "main": "./dist/index.js",
+ "module": "./dist/index.mjs",
+ "types": "./dist/index.d.ts",
+ "files": [
</file context>
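One possible shape for the `exports` map, assuming the build emits the paths shown in the diff context above (`./dist/index.js`, `./dist/index.mjs`, `./dist/index.d.ts`):

```json
{
  "main": "./dist/index.js",
  "module": "./dist/index.mjs",
  "types": "./dist/index.d.ts",
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.mjs",
      "require": "./dist/index.js"
    }
  }
}
```

Node.js resolves `import`/`require` through this map, so ESM consumers get the `.mjs` build while CJS consumers keep the `.js` entry.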
```ts
setByPath(state, runtimePath, result);
setByPath(granularPatch, runtimePath, result);
this.setNodeReady(templatePath, result);
```
P1: Race condition: setNodeReady/setNodeError are called per-element inside concurrent Promise.all promises, so the final node status for the template path is non-deterministic. If one element errors and another succeeds, the resulting status depends on resolution order. Consider aggregating all element results after Promise.all and setting the template-path status once (e.g., "error" if any element failed, "ready" otherwise).
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/graph/compute-graph.ts, line 241:
<comment>Race condition: `setNodeReady`/`setNodeError` are called per-element inside concurrent `Promise.all` promises, so the final node status for the template path is non-deterministic. If one element errors and another succeeds, the resulting status depends on resolution order. Consider aggregating all element results after `Promise.all` and setting the template-path status once (e.g., "error" if any element failed, "ready" otherwise).</comment>
<file context>
@@ -0,0 +1,428 @@
+
+ setByPath(state, runtimePath, result);
+ setByPath(granularPatch, runtimePath, result);
+ this.setNodeReady(templatePath, result);
+ } catch (cause) {
+ this.setNodeError(templatePath, cause);
</file context>
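A sketch of the aggregation approach (names like `runTemplateElements` and the two callbacks are stand-ins, not the real methods): settle all element promises first, then set the template-path status exactly once, deterministically.

```ts
// Sketch: aggregate per-element results after Promise.allSettled, then
// report a single deterministic status for the template path.
async function runTemplateElements<T>(
  tasks: Array<() => Promise<T>>,
  setNodeReady: (results: T[]) => void,
  setNodeError: (cause: unknown) => void,
): Promise<void> {
  const settled = await Promise.allSettled(tasks.map((t) => t()));
  const firstError = settled.find((s) => s.status === "rejected");
  if (firstError && firstError.status === "rejected") {
    setNodeError(firstError.reason); // any element failure -> "error", always
  } else {
    setNodeReady(settled.map((s) => (s as PromiseFulfilledResult<T>).value));
  }
}
```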
```yaml
- name: Semantic Release
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
P1: NPM_TOKEN is missing from the semantic-release env block. The default semantic-release pipeline includes @semantic-release/npm, which requires this secret to authenticate with the npm registry. Publishing will fail without it.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At .github/workflows/publish.yml, line 38:
<comment>`NPM_TOKEN` is missing from the semantic-release env block. The default semantic-release pipeline includes `@semantic-release/npm`, which requires this secret to authenticate with the npm registry. Publishing will fail without it.</comment>
<file context>
@@ -0,0 +1,40 @@
+
+ - name: Semantic Release
+ env:
+ GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+ NPM_CONFIG_PROVENANCE: true
+ run: npx semantic-release
</file context>
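One possible shape of the fixed step, assuming the repo has an `NPM_TOKEN` secret configured:

```yaml
- name: Semantic Release
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
    NPM_CONFIG_PROVENANCE: true
  run: npx semantic-release
```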
```ts
const update = useCallback((patch: Partial<T>) => {
  setState((prev) => {
    if (!prev) return prev;
    void graph.compute({ ...prev, ...patch }).then(setState);
```
P1: Async side effect inside a setState updater violates React's purity requirement. React may call updater functions more than once (e.g. StrictMode), which would trigger duplicate graph.compute() calls. Move the side effect out of the updater — for example, use a ref to access the latest state:
```ts
function useComputedState<T>(graph: Graph<T>, initial: T) {
  const [state, setState] = useState<T | null>(null);
  const stateRef = useRef(state);
  stateRef.current = state;
  useEffect(() => {
    graph.compute(initial).then(setState);
  }, []);
  const update = useCallback((patch: Partial<T>) => {
    const prev = stateRef.current;
    if (!prev) return;
    void graph.compute({ ...prev, ...patch }).then(setState);
  }, [graph]);
  return [state, update] as const;
}
```

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At README.md, line 305:
<comment>Async side effect inside a `setState` updater violates React's purity requirement. React may call updater functions more than once (e.g. StrictMode), which would trigger duplicate `graph.compute()` calls. Move the side effect out of the updater — for example, use a ref to access the latest state:
```ts
function useComputedState<T>(graph: Graph<T>, initial: T) {
const [state, setState] = useState<T | null>(null);
const stateRef = useRef(state);
stateRef.current = state;
useEffect(() => {
graph.compute(initial).then(setState);
}, []);
const update = useCallback((patch: Partial<T>) => {
const prev = stateRef.current;
if (!prev) return;
void graph.compute({ ...prev, ...patch }).then(setState);
}, [graph]);
return [state, update] as const;
}
```</comment>
<file context>
@@ -0,0 +1,317 @@
+ const update = useCallback((patch: Partial<T>) => {
+ setState((prev) => {
+ if (!prev) return prev;
+ void graph.compute({ ...prev, ...patch }).then(setState);
+ return prev;
+ });
</file context>
```ts
const baseState = this.options.getState?.() ?? granularPatch;

// Create the current full state to use for formulas
const state = deepMerge<Record<string, unknown>>(
```
P2: JSON.parse(JSON.stringify(baseState)) is a lossy deep clone — it silently corrupts Date, Set, Map, undefined, NaN, and Infinity values. Since isPlainObject already handles these types, consider using structuredClone(baseState) or a clone that respects the same types deepMerge preserves.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/graph/compute-graph.ts, line 169:
<comment>`JSON.parse(JSON.stringify(baseState))` is a lossy deep clone — it silently corrupts `Date`, `Set`, `Map`, `undefined`, `NaN`, and `Infinity` values. Since `isPlainObject` already handles these types, consider using `structuredClone(baseState)` or a clone that respects the same types `deepMerge` preserves.</comment>
<file context>
@@ -0,0 +1,428 @@
+ const baseState = this.options.getState?.() ?? granularPatch;
+
+ // Create the current full state to use for formulas
+ const state = deepMerge<Record<string, unknown>>(
+ JSON.parse(JSON.stringify(baseState)),
+ granularPatch,
</file context>
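A small illustration of why the JSON round-trip is lossy while `structuredClone` (available in Node.js 17+) is not:

```ts
const base = {
  when: new Date("2024-01-01T00:00:00Z"),
  ratio: Number.NaN,
  tags: new Set(["a", "b"]),
};

// JSON round-trip: Date becomes an ISO string, NaN becomes null, Set becomes {}.
const lossy = JSON.parse(JSON.stringify(base));

// structuredClone: Date, Set, Map, NaN, etc. survive intact.
const safe = structuredClone(base);

console.log(lossy.when instanceof Date); // false
console.log(safe.when instanceof Date);  // true
console.log(safe.tags.has("a"));         // true
```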
> **File:** [src/tracking/proxy.ts](src/tracking/proxy.ts)
>
> - **`trackingProxy(target, deps)`** — Wraps real runtime data in a recursive Proxy. Every property access is recorded into `deps` as dotted paths (e.g. `pricing.taxRate`). Used by `trace()` and for runtime dependency tracking.
> - **`dryRunProxy(deps)`** — Phantom proxy for the static dependency extraction pass. No real data is needed; returns safe defaults (`0`, `""`, `true`, empty iterator) so formula bodies can execute without throwing. Records all accessed paths. Async formulas throw when awaited, but synchronous accesses before that point are captured.
P2: dryRunProxy yields a 2-element iterator (used to capture array-element dependencies during static analysis), not an empty one. Saying "empty iterator" will mislead anyone relying on this doc to understand why dry-run array iteration works.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At ARCHITECTURE.md, line 46:
<comment>`dryRunProxy` yields a 2-element iterator (used to capture array-element dependencies during static analysis), not an empty one. Saying "empty iterator" will mislead anyone relying on this doc to understand why dry-run array iteration works.</comment>
<file context>
@@ -0,0 +1,152 @@
+**File:** [src/tracking/proxy.ts](src/tracking/proxy.ts)
+
+- **`trackingProxy(target, deps)`** — Wraps real runtime data in a recursive Proxy. Every property access is recorded into `deps` as dotted paths (e.g. `pricing.taxRate`). Used by `trace()` and for runtime dependency tracking.
+- **`dryRunProxy(deps)`** — Phantom proxy for the static dependency extraction pass. No real data is needed; returns safe defaults (`0`, `""`, `true`, empty iterator) so formula bodies can execute without throwing. Records all accessed paths. Async formulas throw when awaited, but synchronous accesses before that point are captured.
+- **Short-circuit array methods** — `find`, `findIndex`, `findLast`, `findLastIndex`, `some`, `every` are wrapped to call `touchArrayElements` first (records every element), then re-run the native method on a proxied copy. Ensures accurate dependency tracking even when the native method would stop early.
+
</file context>
> ```bash
> npm run dev
> ```
> *This uses `tsup` to instantly rebuild the MCP server whenever you save a file.*
P2: Incorrect project description: this rebuilds the library, not an "MCP server". Likely a copy-paste from another project's CONTRIBUTING.md.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At CONTRIBUTING.md, line 27:
<comment>Incorrect project description: this rebuilds the **library**, not an "MCP server". Likely a copy-paste from another project's CONTRIBUTING.md.</comment>
<file context>
@@ -0,0 +1,51 @@
+ ```bash
+ npm run dev
+ ```
+ *This uses `tsup` to instantly rebuild the MCP server whenever you save a file.*
+
+## 🎨 Code Style (Zero Config!)
</file context>
```ts
  }
}

const controller = new AbortController();
```
P2: AbortController is added to this.controllers but never removed after the query settles. Each flush leaks a completed controller. Add a finally block to clean it up.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/batch/coordinator.ts, line 109:
<comment>`AbortController` is added to `this.controllers` but never removed after the query settles. Each flush leaks a completed controller. Add a `finally` block to clean it up.</comment>
<file context>
@@ -0,0 +1,130 @@
+ }
+ }
+
+ const controller = new AbortController();
+ this.controllers.add(controller);
+
</file context>
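A sketch of the `finally` cleanup (the `Coordinator`/`flush` names and the `query` parameter are stand-ins for the real coordinator API):

```ts
// Sketch: the controller is removed whether the query resolves or rejects,
// so completed controllers never accumulate in the set.
class Coordinator {
  private controllers = new Set<AbortController>();

  async flush<T>(query: (signal: AbortSignal) => Promise<T>): Promise<T> {
    const controller = new AbortController();
    this.controllers.add(controller);
    try {
      return await query(controller.signal);
    } finally {
      this.controllers.delete(controller); // no leak on success or failure
    }
  }

  get pendingCount(): number {
    return this.controllers.size;
  }
}
```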
```yaml
directory: "/"
schedule:
  interval: "monthly"
  day: "monday"
```
P3: The day option is only effective when interval is "weekly". With "monthly", Dependabot ignores this setting, so it's misleading. Either remove day or change the interval to "weekly".
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At .github/dependabot.yml, line 8:
<comment>The `day` option is only effective when `interval` is `"weekly"`. With `"monthly"`, Dependabot ignores this setting, so it's misleading. Either remove `day` or change the interval to `"weekly"`.</comment>
<file context>
@@ -0,0 +1,21 @@
+ directory: "/"
+ schedule:
+ interval: "monthly"
+ day: "monday"
+ time: "06:00"
+ timezone: "UTC"
</file context>
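If a weekly cadence is acceptable, one consistent shape would be:

```yaml
schedule:
  interval: "weekly"
  day: "monday"
  time: "06:00"
  timezone: "UTC"
```

Otherwise, keep `interval: "monthly"` and drop the `day` key, since Dependabot ignores it for monthly schedules.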
No description provided.