diff --git a/.agent/contracts/node-bridge.md b/.agent/contracts/node-bridge.md index 73772617..469924fa 100644 --- a/.agent/contracts/node-bridge.md +++ b/.agent/contracts/node-bridge.md @@ -94,6 +94,44 @@ Bridge-provided randomness for global `crypto` APIs MUST delegate to host `node: - **WHEN** host `node:crypto` randomness primitives are unavailable or fail - **THEN** the bridge MUST throw a deterministic error matching the unsupported API format (`". is not supported in sandbox"`) for the invoked randomness API and MUST NOT fall back to non-cryptographic randomness +### Requirement: Global WebCrypto Surface Matches The `crypto.webcrypto` Bridge +The bridge SHALL expose a single WebCrypto surface so global `crypto` APIs and `require('crypto').webcrypto` share the same object graph and constructor semantics. + +#### Scenario: Sandboxed code compares global and module WebCrypto objects +- **WHEN** sandboxed code reads both `globalThis.crypto` and `require('crypto').webcrypto` +- **THEN** those references MUST point at the same WebCrypto object +- **AND** `crypto.subtle` MUST expose the same `SubtleCrypto` instance through both paths + +#### Scenario: WebCrypto constructors stay non-user-constructible +- **WHEN** sandboxed code calls `new Crypto()`, `new SubtleCrypto()`, or `new CryptoKey()` +- **THEN** the bridge MUST throw a Node-compatible illegal-constructor `TypeError` +- **AND** prototype method receiver validation MUST reject detached calls with `ERR_INVALID_THIS` + +### Requirement: Diffie-Hellman And ECDH Bridge Uses Host Node Crypto Objects +Bridge-provided `crypto` Diffie-Hellman and ECDH APIs SHALL delegate to host `node:crypto` objects so constructor validation, session state, encodings, and shared-secret derivation match Node.js semantics. + +#### Scenario: Sandbox creates a Diffie-Hellman session +- **WHEN** sandboxed code calls `crypto.createDiffieHellman(...)`, `crypto.getDiffieHellman(...)`, or `crypto.createECDH(...)` +- **THEN** the bridge MUST construct the corresponding host `node:crypto` object +- **AND** subsequent method calls such as `generateKeys()`, `computeSecret()`, `getPublicKey()`, and `setPrivateKey()` MUST execute against that host object rather than an isolate-local reimplementation + +#### Scenario: Sandbox uses stateless crypto.diffieHellman +- **WHEN** sandboxed code calls `crypto.diffieHellman({ privateKey, publicKey })` +- **THEN** the bridge MUST delegate to host `node:crypto.diffieHellman` +- **AND** the returned shared secret and thrown validation errors MUST preserve Node-compatible behavior + +### Requirement: Crypto Stream Wrappers Preserve Transform Semantics And Validation Errors +Bridge-backed `crypto` hash and cipher wrappers SHALL remain compatible with Node stream semantics and MUST preserve Node-style validation error codes for callback-driven APIs. + +#### Scenario: Sandbox hashes or encrypts data through stream piping +- **WHEN** sandboxed code uses `crypto.Hash`, `crypto.Cipheriv`, or `crypto.Decipheriv` as stream destinations or sources +- **THEN** those objects MUST be `stream.Transform` instances +- **AND** piping data through them MUST emit the same digest or ciphertext/plaintext bytes that the corresponding direct `update()`/`final()` calls would produce + +#### Scenario: Sandbox calls pbkdf2 with invalid arguments +- **WHEN** sandboxed code calls `crypto.pbkdf2()` or `crypto.pbkdf2Sync()` with invalid callback, digest, password, salt, iteration, or key length arguments +- **THEN** the bridge MUST throw or surface Node-compatible `ERR_INVALID_ARG_TYPE` / `ERR_OUT_OF_RANGE` errors instead of plain untyped exceptions + ### Requirement: Bridge FS Open Flag Translation Uses Named Constants The bridge `fs` implementation MUST express string-flag translation using named open-flag constants (for example `O_WRONLY | O_CREAT | O_TRUNC`) aligned with Node `fs.constants` semantics, and MUST NOT rely on undocumented numeric literals. @@ -160,3 +198,29 @@ The bridge global key registry consumed by host runtime setup, bridge modules, a #### Scenario: Native V8 bridge registries stay aligned with async and sync lifecycle hooks - **WHEN** bridge modules depend on a host bridge global via async `.apply(..., { result: { promise: true } })` or sync `.applySync(...)` semantics - **THEN** the native V8 bridge function registries MUST expose a matching callable shape for that global (or an equivalent tested shim), and automated verification MUST cover the registry alignment + +### Requirement: Dispatch-Multiplexed Bridge Errors Preserve Structured Metadata +Bridge globals routed through the `_loadPolyfill` dispatch multiplexer SHALL preserve host error metadata needed for Node-compatible assertions. + +#### Scenario: Host bridge throws typed crypto validation error +- **WHEN** a dispatch-multiplexed bridge handler throws a host error with `name` and `code` (for example `TypeError` + `ERR_INVALID_ARG_VALUE`) +- **THEN** the sandbox-visible error MUST preserve that `name` and `code` +- **AND** the bridge MUST NOT collapse the error to a plain `Error` with only a message + +### Requirement: HTTP Agent Bridge Preserves Node Pooling Semantics +Bridge-provided `http.Agent` behavior SHALL preserve the observable pooling state that Node.js userland and conformance tests inspect. + +#### Scenario: Sandboxed code inspects agent bookkeeping +- **WHEN** sandboxed code uses `http.Agent` or `require('_http_agent').Agent` +- **THEN** the bridge MUST expose matching `Agent` constructors through both module paths +- **AND** `getName()`, `requests`, `sockets`, `freeSockets`, and `totalSocketCount` MUST reflect request queueing and socket reuse state with Node-compatible key shapes + +#### Scenario: Keepalive sockets are reused or discarded +- **WHEN** sandboxed code enables `keepAlive` and reuses pooled HTTP connections +- **THEN** the bridge MUST mark reused requests via `request.reusedSocket` +- **AND** destroyed or remotely closed sockets MUST be removed from the pool instead of being reassigned to queued requests + +#### Scenario: Total socket limits are configured +- **WHEN** sandboxed code constructs an `http.Agent` with `maxSockets`, `maxFreeSockets`, or `maxTotalSockets` +- **THEN** invalid argument types and ranges MUST throw Node-compatible `ERR_INVALID_ARG_TYPE` / `ERR_OUT_OF_RANGE` errors +- **AND** queued requests across origins MUST respect both per-origin and total socket limits diff --git a/CLAUDE.md b/CLAUDE.md index 700de46f..19c9d0fc 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -50,6 +50,7 @@ - use pnpm, vitest, and tsc for type checks - use turbo for builds +- after changing `packages/core/isolate-runtime/src/inject/require-setup.ts` or Node bridge code that regenerates the isolate bundle, rebuild in this order: `pnpm --filter @secure-exec/nodejs build` then `pnpm --filter @secure-exec/core build`; the conformance runner executes built `dist` output, not just source files - keep timeouts under 1 minute and avoid running full test suites unless necessary - use one-line Conventional Commit messages; never add any co-authors (including agents) - never mark work complete until typechecks pass and all tests pass in the current turn; if they fail, report the failing command and first concrete error @@ -198,6 +199,11 @@ Follow the style in `packages/secure-exec/src/index.ts`. ## Documentation +- all public-facing docs (quickstart, guides, API reference, landing page, README) must focus on the **Node.js runtime** as the primary and default experience — do not lead with WasmVM, kernel internals, or multi-runtime concepts +- code examples in docs should use the `NodeRuntime` API (`runtime.run()`, `runtime.exec()`) as the default path; the kernel API (`createKernel`, `kernel.spawn()`) is for advanced multi-process use cases and should be presented as secondary +- keep documentation pages and their runnable example sources in sync: `docs/quickstart.mdx` must match `examples/kitchen-sink/src/`, and `docs/features/*.mdx` must match `examples/features/src/` +- when updating a doc snippet, update the corresponding example file and the docs/example verification scripts in the same change +- when converting runnable example code into documentation snippets, use public package imports like `from "secure-exec"` and `from "@secure-exec/typescript"` instead of repo-local source paths - WasmVM and Python docs are experimental docs and must stay grouped under the `Experimental` section in `docs/docs.json` - docs pages that must stay current with API changes: - `docs/quickstart.mdx` — update when core setup flow changes diff --git a/docs-internal/arch/overview.md b/docs-internal/arch/overview.md index 265afaea..c0353cf2 100644 --- a/docs-internal/arch/overview.md +++ b/docs-internal/arch/overview.md @@ -1,5 +1,133 @@ # Architecture Overview +## Architectural Model: Inverted VM + +Traditional virtual machines (Firecracker, QEMU) place the OS **inside** the VM — a hypervisor +virtualizes hardware, and a guest kernel (Linux) runs on top: + +``` +Traditional: VM contains OS + + ┌────────────────────┐ + │ Hypervisor / VM │ (Firecracker, QEMU, KVM) + │ │ + │ ┌──────────────┐ │ + │ │ Guest OS │ │ (Linux kernel) + │ │ │ │ + │ │ ┌────────┐ │ │ + │ │ │ Apps │ │ │ (ELF binaries) + │ │ └────────┘ │ │ + │ └──────────────┘ │ + └────────────────────┘ +``` + +Secure Exec inverts this: the OS is the **outer** layer, and execution engines (V8, WASM) are +plugged **into** it. The kernel runs in the host process and mediates all I/O. There is no +hypervisor — the isolation boundary is the V8 isolate and WASM sandbox, not hardware virtualization: + +``` +Secure Exec: OS contains VMs + + ┌──────────────────────────────────────────────┐ + │ Virtual OS (packages/core/kernel/) │ + │ VFS, process table, FD table, │ + │ sockets, pipes, signals, permissions │ + │ │ + │ ┌─────────────────┐ ┌───────────────────┐ │ + │ │ V8 Isolate │ │ WASM Runtime │ │ + │ │ (Node.js) │ │ (V8 WebAssembly)│ │ + │ │ │ │ │ │ + │ │ JS scripts │ │ POSIX binaries │ │ + │ └─────────────────┘ └───────────────────┘ │ + └──────────────────────────────────────────────┘ +``` + +### Comparison: Containers vs MicroVMs vs Secure Exec + +``` +Container (Docker) + + ┌───────────────────────────────────────────┐ + │ Host Linux Kernel (shared) │ + │ │ + │ ┌─────────────────┐ ┌────────────────┐ │ + │ │ Namespace + │ │ Namespace + │ │ + │ │ cgroup jail │ │ cgroup jail │ │ + │ │ │ │ │ │ + │ │ ┌───────────┐ │ │ ┌──────────┐ │ │ + │ │ │ App 1 │ │ │ │ App 2 │ │ │ + │ │ │ ELF bins │ │ │ │ ELF bins │ │ │ + │ │ └───────────┘ │ │ └──────────┘ │ │ + │ └─────────────────┘ └────────────────┘ │ + └───────────────────────────────────────────┘ + Kernel is shared. Kernel vuln = all containers escape. + + +MicroVM (Firecracker) + + ┌───────────────────────────────────────────┐ + │ Host Linux Kernel │ + │ │ + │ ┌─────────────────┐ ┌────────────────┐ │ + │ │ KVM / VT-x │ │ KVM / VT-x │ │ + │ │ (hypervisor) │ │ (hypervisor) │ │ + │ │ │ │ │ │ + │ │ ┌────────────┐ │ │ ┌──────────┐ │ │ + │ │ │ Guest │ │ │ │ Guest │ │ │ + │ │ │ Linux │ │ │ │ Linux │ │ │ + │ │ │ Kernel │ │ │ │ Kernel │ │ │ + │ │ │ │ │ │ │ │ │ │ + │ │ │ ┌──────┐ │ │ │ │ ┌──────┐ │ │ │ + │ │ │ │ App │ │ │ │ │ │ App │ │ │ │ + │ │ │ └──────┘ │ │ │ │ └──────┘ │ │ │ + │ │ └────────────┘ │ │ └──────────┘ │ │ + │ └─────────────────┘ └────────────────┘ │ + └───────────────────────────────────────────┘ + Each VM has its own kernel. Hypervisor vuln = escape. + + +Secure Exec + + ┌───────────────────────────────────────────┐ + │ Host Process (Node.js / Browser) │ + │ │ + │ ┌─────────────────┐ ┌────────────────┐ │ + │ │ Virtual OS │ │ Virtual OS │ │ + │ │ (SEOS kernel) │ │ (SEOS kernel) │ │ + │ │ │ │ │ │ + │ │ ┌────────────┐ │ │ ┌──────────┐ │ │ + │ │ │ V8 / WASM │ │ │ │ V8 / WASM│ │ │ + │ │ │ │ │ │ │ │ │ │ + │ │ │ JS / WASM │ │ │ │ JS / WASM│ │ │ + │ │ │ programs │ │ │ │ programs │ │ │ + │ │ └────────────┘ │ │ └──────────┘ │ │ + │ └─────────────────┘ └────────────────┘ │ + └───────────────────────────────────────────┘ + Each instance has its own kernel. V8/WASM vuln = escape. +``` + +| | Container | MicroVM | Secure Exec | +|--------------------|----------------------|-----------------------|-----------------------| +| **Isolation** | Namespaces + cgroups | Hardware (VT-x/KVM) | V8 isolate + WASM | +| **Kernel** | Shared host kernel | Dedicated guest kernel| Virtual POSIX kernel | +| **Attack surface** | Host kernel syscalls | Hypervisor interface | JS/WASM sandbox | +| **Boot time** | ~100ms | ~125ms | <5ms | +| **Overhead** | Near-native | ~3-5% CPU/memory | V8/WASM overhead | +| **Runs in browser**| No | No | Yes | +| **Guest format** | ELF binaries | ELF binaries | JS scripts + WASM | +| **Escape risk** | Kernel vuln = escape | Hypervisor vuln = escape | V8 vuln = escape | + +Key architectural differences: +- **Containers** share the host kernel — a kernel vulnerability lets every container escape. The kernel is the trust boundary and the attack surface simultaneously. +- **MicroVMs** run a dedicated guest kernel inside hardware virtualization. Stronger isolation (hypervisor boundary), but 100ms+ boot time and no browser support. +- **Secure Exec** provides its own virtual kernel in userspace. No shared kernel attack surface (the virtual kernel is per-instance), no hardware requirements, millisecond boot. The tradeoff is that isolation depends on V8/WASM sandbox correctness rather than hardware enforcement. + +The WASI extensions (`native/wasmvm/crates/wasi-ext/`) bridge WASM syscalls into the OS kernel. +The Node.js bridge (`packages/nodejs/src/bridge/`) does the same for V8 isolate code. Both are +thin translation layers — the real implementation lives in the kernel. + +## Package Map + ``` Kernel-first API (createKernel + mount + exec) packages/core/ diff --git a/docs-internal/todo.md b/docs-internal/todo.md index fecdf447..a9c52494 100644 --- a/docs-internal/todo.md +++ b/docs-internal/todo.md @@ -12,6 +12,10 @@ Priority order is: --- +## Rename + +- [ ] Rename `wasmvm` back to `seos` (Secure Exec OS) across the codebase (packages, directories, imports, docs) + docs-internal/proposal-kernel-consolidation.md docs-internal/specs/custom-bindings.md docs-internal/specs/cli-tool-e2e.md diff --git a/docs/api-reference.mdx b/docs/api-reference.mdx index 24ffda11..264f042e 100644 --- a/docs/api-reference.mdx +++ b/docs/api-reference.mdx @@ -136,7 +136,7 @@ createTypeScriptTools(options: TypeScriptToolsOptions) | `runtimeDriverFactory` | `NodeRuntimeDriverFactory` | Creates the compiler sandbox runtime. | | `memoryLimit` | `number` | Compiler sandbox isolate memory cap in MB. Default `512`. | | `cpuTimeLimitMs` | `number` | Compiler sandbox CPU time budget in ms. | -| `compilerSpecifier` | `string` | Module specifier used to load the TypeScript compiler. Default `"/root/node_modules/typescript/lib/typescript.js"`. | +| `compilerSpecifier` | `string` | Module specifier used to load the TypeScript compiler. Default `"typescript"`. | **Methods** diff --git a/docs/features/child-processes.mdx b/docs/features/child-processes.mdx index 810472cf..d93bac13 100644 --- a/docs/features/child-processes.mdx +++ b/docs/features/child-processes.mdx @@ -12,14 +12,16 @@ Sandboxed code can spawn child processes through the `CommandExecutor` interface ## Runnable example +Source file: `examples/features/src/child-processes.ts` + ```ts import { NodeRuntime, allowAllChildProcess, createNodeDriver, createNodeRuntimeDriverFactory, -} from "../../../packages/secure-exec/src/index.ts"; -import type { CommandExecutor } from "../../../packages/secure-exec/src/types.ts"; +} from "secure-exec"; +import type { CommandExecutor } from "secure-exec"; import { spawn } from "node:child_process"; const commandExecutor: CommandExecutor = { @@ -98,8 +100,6 @@ try { } ``` -Source: [examples/features/src/child-processes.ts](https://github.com/rivet-dev/secure-exec/blob/main/examples/features/src/child-processes.ts) - ## Permission gating Restrict which commands sandboxed code can spawn: diff --git a/docs/features/filesystem.mdx b/docs/features/filesystem.mdx index f4236f2b..88bff587 100644 --- a/docs/features/filesystem.mdx +++ b/docs/features/filesystem.mdx @@ -12,6 +12,8 @@ secure-exec supports three filesystem backends. The system driver controls which ## Runnable example +Source file: `examples/features/src/filesystem.ts` + ```ts import { NodeRuntime, @@ -19,7 +21,7 @@ import { createInMemoryFileSystem, createNodeDriver, createNodeRuntimeDriverFactory, -} from "../../../packages/secure-exec/src/index.ts"; +} from "secure-exec"; const filesystem = createInMemoryFileSystem(); const runtime = new NodeRuntime({ @@ -55,8 +57,6 @@ try { } ``` -Source: [examples/features/src/filesystem.ts](https://github.com/rivet-dev/secure-exec/blob/main/examples/features/src/filesystem.ts) - ## OPFS (browser) Persistent filesystem using the Origin Private File System API. This is the default for `createBrowserDriver()`. diff --git a/docs/features/module-loading.mdx b/docs/features/module-loading.mdx index dd471a6d..ab3f6359 100644 --- a/docs/features/module-loading.mdx +++ b/docs/features/module-loading.mdx @@ -13,6 +13,8 @@ Sandboxed code can `require()` and `import` modules through secure-exec's module ## Runnable example +Source file: `examples/features/src/module-loading.ts` + ```ts import path from "node:path"; import { fileURLToPath } from "node:url"; @@ -21,7 +23,7 @@ import { allowAllFs, createNodeDriver, createNodeRuntimeDriverFactory, -} from "../../../packages/secure-exec/src/index.ts"; +} from "secure-exec"; const repoRoot = path.resolve(path.dirname(fileURLToPath(import.meta.url)), "../../.."); @@ -58,8 +60,6 @@ try { } ``` -Source: [examples/features/src/module-loading.ts](https://github.com/rivet-dev/secure-exec/blob/main/examples/features/src/module-loading.ts) - ## node_modules overlay Node runtime executions expose a read-only dependency overlay at `/app/node_modules`, sourced from `/node_modules` on the host (default `cwd` is `process.cwd()`). diff --git a/docs/features/networking.mdx b/docs/features/networking.mdx index 58377006..d5873e67 100644 --- a/docs/features/networking.mdx +++ b/docs/features/networking.mdx @@ -12,6 +12,8 @@ Network access is deny-by-default. Enable it by setting `useDefaultNetwork: true ## Runnable example +Source file: `examples/features/src/networking.ts` + ```ts import * as http from "node:http"; import { @@ -20,7 +22,7 @@ import { createDefaultNetworkAdapter, createNodeDriver, createNodeRuntimeDriverFactory, -} from "../../../packages/secure-exec/src/index.ts"; +} from "secure-exec"; const logs: string[] = []; const server = http.createServer((_req, res) => { @@ -51,23 +53,19 @@ const runtime = new NodeRuntime({ try { const result = await runtime.exec( ` - (async () => { - const response = await fetch("http://127.0.0.1:${address.port}/"); - const body = await response.text(); - - if (!response.ok || response.status !== 200 || body !== "network-ok") { - throw new Error( - "unexpected response: " + response.status + " " + body, - ); - } - - console.log(JSON.stringify({ status: response.status, body })); - })().catch((error) => { - console.error(error instanceof Error ? error.message : String(error)); - process.exitCode = 1; - }); + const response = await fetch("http://127.0.0.1:${address.port}/"); + const body = await response.text(); + + if (!response.ok || response.status !== 200 || body !== "network-ok") { + throw new Error( + "unexpected response: " + response.status + " " + body, + ); + } + + console.log(JSON.stringify({ status: response.status, body })); `, { + filePath: "/entry.mjs", onStdio: (event) => { logs.push(`[${event.channel}] ${event.message}`); }, @@ -107,8 +105,6 @@ try { } ``` -Source: [examples/features/src/networking.ts](https://github.com/rivet-dev/secure-exec/blob/main/examples/features/src/networking.ts) - ## Quick setup diff --git a/docs/features/output-capture.mdx b/docs/features/output-capture.mdx index b59c7e1d..4c9702f9 100644 --- a/docs/features/output-capture.mdx +++ b/docs/features/output-capture.mdx @@ -12,12 +12,14 @@ Console output from sandboxed code is **not buffered** into result fields. `exec ## Runnable example +Source file: `examples/features/src/output-capture.ts` + ```ts import { NodeRuntime, createNodeDriver, createNodeRuntimeDriverFactory, -} from "../../../packages/secure-exec/src/index.ts"; +} from "secure-exec"; const events: string[] = []; @@ -64,8 +66,6 @@ try { } ``` -Source: [examples/features/src/output-capture.ts](https://github.com/rivet-dev/secure-exec/blob/main/examples/features/src/output-capture.ts) - ## Default hook Set a runtime-level hook that applies to all executions: diff --git a/docs/features/permissions.mdx b/docs/features/permissions.mdx index b78df8d9..1f215c96 100644 --- a/docs/features/permissions.mdx +++ b/docs/features/permissions.mdx @@ -12,13 +12,15 @@ All host capabilities are **deny-by-default**. Sandboxed code cannot access the ## Runnable example +Source file: `examples/features/src/permissions.ts` + ```ts import { NodeRuntime, createInMemoryFileSystem, createNodeDriver, createNodeRuntimeDriverFactory, -} from "../../../packages/secure-exec/src/index.ts"; +} from "secure-exec"; const filesystem = createInMemoryFileSystem(); await filesystem.writeFile("/secret.txt", "top secret"); @@ -69,8 +71,6 @@ console.log( ); ``` -Source: [examples/features/src/permissions.ts](https://github.com/rivet-dev/secure-exec/blob/main/examples/features/src/permissions.ts) - ## Permission helpers Quick presets for common configurations: diff --git a/docs/features/resource-limits.mdx b/docs/features/resource-limits.mdx index 911d5cbf..b6b7dfe9 100644 --- a/docs/features/resource-limits.mdx +++ b/docs/features/resource-limits.mdx @@ -12,12 +12,14 @@ Resource limits prevent sandboxed code from running forever or exhausting host m ## Runnable example +Source file: `examples/features/src/resource-limits.ts` + ```ts import { NodeRuntime, createNodeDriver, createNodeRuntimeDriverFactory, -} from "../../../packages/secure-exec/src/index.ts"; +} from "secure-exec"; const runtime = new NodeRuntime({ systemDriver: createNodeDriver(), @@ -52,8 +54,6 @@ try { } ``` -Source: [examples/features/src/resource-limits.ts](https://github.com/rivet-dev/secure-exec/blob/main/examples/features/src/resource-limits.ts) - ## CPU time limit Set a CPU time budget in milliseconds. When exceeded, the execution exits with code `124`. diff --git a/docs/features/typescript.mdx b/docs/features/typescript.mdx index d0211446..f64fe3ed 100644 --- a/docs/features/typescript.mdx +++ b/docs/features/typescript.mdx @@ -12,14 +12,16 @@ The `@secure-exec/typescript` companion package runs the TypeScript compiler ins ## Runnable example +Source file: `examples/features/src/typescript.ts` + ```ts import { NodeRuntime, allowAllFs, createNodeDriver, createNodeRuntimeDriverFactory, -} from "../../../packages/secure-exec/src/index.ts"; -import { createTypeScriptTools } from "../../../packages/typescript/src/index.ts"; +} from "secure-exec"; +import { createTypeScriptTools } from "@secure-exec/typescript"; const sourceText = ` export const message: string = "hello from typescript"; @@ -42,7 +44,7 @@ const runtime = new NodeRuntime({ const ts = createTypeScriptTools({ systemDriver: compilerSystemDriver, runtimeDriverFactory, - compilerSpecifier: "/root/node_modules/typescript/lib/typescript.js", + compilerSpecifier: "typescript", }); try { @@ -91,8 +93,6 @@ try { } ``` -Source: [examples/features/src/typescript.ts](https://github.com/rivet-dev/secure-exec/blob/main/examples/features/src/typescript.ts) - ## Install ```bash @@ -119,7 +119,7 @@ const ts = createTypeScriptTools({ | `runtimeDriverFactory` | `NodeRuntimeDriverFactory` | required | Creates the compiler sandbox | | `memoryLimit` | `number` | `512` | Compiler isolate memory cap in MB | | `cpuTimeLimitMs` | `number` | | Compiler CPU time budget in ms | -| `compilerSpecifier` | `string` | `"/root/node_modules/typescript/lib/typescript.js"` | Module specifier for the TypeScript compiler | +| `compilerSpecifier` | `string` | `"typescript"` | Module specifier for the TypeScript compiler | ## Type-check a source string diff --git a/docs/features/virtual-filesystem.mdx b/docs/features/virtual-filesystem.mdx index c30b18ed..80539228 100644 --- a/docs/features/virtual-filesystem.mdx +++ b/docs/features/virtual-filesystem.mdx @@ -4,45 +4,18 @@ description: Implement your own VirtualFileSystem to control how sandboxed code icon: "hard-drive" --- -You can create a custom `VirtualFileSystem` to back the sandbox with any storage layer — a database, S3, a zip archive, or anything else. Sandboxed code uses `fs`, `require`, and other Node APIs as normal, and your implementation handles the actual I/O. - -## The interface - -Your class must implement `VirtualFileSystem` from `secure-exec-core`: - -```ts -import type { VirtualFileSystem, VirtualStat, VirtualDirEntry } from "secure-exec"; + + Runnable example for a custom virtual filesystem. + -class MyFileSystem implements VirtualFileSystem { - async readFile(path: string): Promise { /* ... */ } - async readTextFile(path: string): Promise { /* ... */ } - async writeFile(path: string, content: string | Uint8Array): Promise { /* ... */ } - async readDir(path: string): Promise { /* ... */ } - async readDirWithTypes(path: string): Promise { /* ... */ } - async createDir(path: string): Promise { /* ... */ } - async mkdir(path: string): Promise { /* ... */ } - async exists(path: string): Promise { /* ... */ } - async stat(path: string): Promise { /* ... */ } - async removeFile(path: string): Promise { /* ... */ } - async removeDir(path: string): Promise { /* ... */ } - async rename(oldPath: string, newPath: string): Promise { /* ... */ } - async symlink(target: string, linkPath: string): Promise { /* ... */ } - async readlink(path: string): Promise { /* ... */ } - async lstat(path: string): Promise { /* ... */ } - async link(oldPath: string, newPath: string): Promise { /* ... */ } - async chmod(path: string, mode: number): Promise { /* ... */ } - async chown(path: string, uid: number, gid: number): Promise { /* ... */ } - async utimes(path: string, atime: number, mtime: number): Promise { /* ... */ } - async truncate(path: string, length: number): Promise { /* ... */ } -} -``` +You can create a custom `VirtualFileSystem` to back the sandbox with any storage layer — a database, S3, a zip archive, or anything else. Sandboxed code uses `fs`, `require`, and other Node APIs as normal, and your implementation handles the actual I/O. -## Example: read-only Map filesystem +## Runnable example -A minimal filesystem backed by a `Map`. Useful when you have a fixed set of files (e.g. loaded from a database) and want to make them available to sandboxed code. +Source file: `examples/features/src/virtual-filesystem.ts` ```ts -import type { VirtualFileSystem, VirtualStat, VirtualDirEntry } from "secure-exec"; +import type { DirEntry, StatInfo, VirtualFileSystem } from "secure-exec"; import { NodeRuntime, allowAllFs, @@ -69,39 +42,13 @@ class ReadOnlyMapFS implements VirtualFileSystem { return content; } - async exists(path: string) { - return this.files.has(path) || this.#isDir(path); - } - - async stat(path: string): Promise { - const now = Date.now(); - if (this.files.has(path)) { - return { - mode: 0o444, - size: new TextEncoder().encode(this.files.get(path)!).byteLength, - isDirectory: false, - atimeMs: now, mtimeMs: now, ctimeMs: now, birthtimeMs: now, - }; - } - if (this.#isDir(path)) { - return { - mode: 0o555, - size: 0, - isDirectory: true, - atimeMs: now, mtimeMs: now, ctimeMs: now, birthtimeMs: now, - }; - } - throw new Error(`ENOENT: ${path}`); - } - - async lstat(path: string) { return this.stat(path); } - async readDir(path: string) { const prefix = path === "/" ? "/" : path + "/"; const entries = new Set(); for (const key of this.files.keys()) { - if (key.startsWith(prefix)) { - const rest = key.slice(prefix.length); + if (!key.startsWith(prefix)) continue; + const rest = key.slice(prefix.length); + if (rest.length > 0) { entries.add(rest.split("/")[0]); } } @@ -109,29 +56,77 @@ class ReadOnlyMapFS implements VirtualFileSystem { return [...entries]; } - async readDirWithTypes(path: string): Promise { + async readDirWithTypes(path: string): Promise { const names = await this.readDir(path); const prefix = path === "/" ? "/" : path + "/"; return names.map((name) => ({ name, isDirectory: this.#isDir(prefix + name), + isSymbolicLink: false, })); } - // Write operations throw — this filesystem is read-only async writeFile() { throw new Error("EROFS: read-only filesystem"); } async createDir() { throw new Error("EROFS: read-only filesystem"); } async mkdir() { throw new Error("EROFS: read-only filesystem"); } + + async exists(path: string) { + return this.files.has(path) || this.#isDir(path); + } + + async stat(path: string): Promise { + const now = Date.now(); + if (this.files.has(path)) { + return { + mode: 0o444, + size: new TextEncoder().encode(this.files.get(path) ?? "").byteLength, + isDirectory: false, + isSymbolicLink: false, + atimeMs: now, + mtimeMs: now, + ctimeMs: now, + birthtimeMs: now, + ino: 1, + nlink: 1, + uid: 0, + gid: 0, + }; + } + if (this.#isDir(path)) { + return { + mode: 0o555, + size: 0, + isDirectory: true, + isSymbolicLink: false, + atimeMs: now, + mtimeMs: now, + ctimeMs: now, + birthtimeMs: now, + ino: 1, + nlink: 1, + uid: 0, + gid: 0, + }; + } + throw new Error(`ENOENT: ${path}`); + } + async removeFile() { throw new Error("EROFS: read-only filesystem"); } async removeDir() { throw new Error("EROFS: read-only filesystem"); } async rename() { throw new Error("EROFS: read-only filesystem"); } + async realpath(path: string) { return path; } async symlink() { throw new Error("EROFS: read-only filesystem"); } - async readlink() { throw new Error("ENOSYS: no symlinks"); } + async readlink(_path: string): Promise { throw new Error("ENOSYS: no symlinks"); } + async lstat(path: string) { return this.stat(path); } async link() { throw new Error("EROFS: read-only filesystem"); } async chmod() { throw new Error("EROFS: read-only filesystem"); } async chown() { throw new Error("EROFS: read-only filesystem"); } async utimes() { throw new Error("EROFS: read-only filesystem"); } async truncate() { throw new Error("EROFS: read-only filesystem"); } + async pread(path: string, offset: number, length: number) { + const bytes = await this.readFile(path); + return bytes.slice(offset, offset + length); + } #isDir(path: string) { const prefix = path === "/" ? "/" : path + "/"; @@ -141,34 +136,84 @@ class ReadOnlyMapFS implements VirtualFileSystem { return false; } } -``` - -### Using it -```ts -const fs = new ReadOnlyMapFS({ - "/config.json": JSON.stringify({ greeting: "hello" }), - "/src/index.js": ` - const config = JSON.parse(require("fs").readFileSync("/config.json", "utf8")); - console.log(config.greeting); - `, +const filesystem = new ReadOnlyMapFS({ + "/config.json": JSON.stringify({ greeting: "hello from custom vfs" }), }); +const events: string[] = []; const runtime = new NodeRuntime({ systemDriver: createNodeDriver({ - filesystem: fs, + filesystem, permissions: { ...allowAllFs }, }), runtimeDriverFactory: createNodeRuntimeDriverFactory(), }); -const result = await runtime.exec(` - const config = JSON.parse(require("fs").readFileSync("/config.json", "utf8")); - console.log(config.greeting); -`); +try { + const result = await runtime.exec( + ` + const fs = require("node:fs"); + const config = JSON.parse(fs.readFileSync("/config.json", "utf8")); + console.log(config.greeting); + `, + { + onStdio: (event) => { + if (event.channel === "stdout") { + events.push(event.message); + } + }, + }, + ); + + const message = events.at(-1); + if (result.code !== 0 || message !== "hello from custom vfs") { + throw new Error(`Unexpected runtime result: ${JSON.stringify({ result, events })}`); + } -// Output captured via onStdio callback — see Output Capture docs -runtime.dispose(); + console.log( + JSON.stringify({ + ok: true, + message, + summary: "sandbox read config data from a custom read-only virtual filesystem", + }), + ); +} finally { + runtime.dispose(); +} +``` + +## The interface + +Your class must implement `VirtualFileSystem` from `secure-exec-core`: + +```ts +import type { DirEntry, StatInfo, VirtualFileSystem } from "secure-exec"; + +class MyFileSystem implements VirtualFileSystem { + async readFile(path: string): Promise { /* ... */ } + async readTextFile(path: string): Promise { /* ... */ } + async writeFile(path: string, content: string | Uint8Array): Promise { /* ... */ } + async readDir(path: string): Promise { /* ... */ } + async readDirWithTypes(path: string): Promise { /* ... */ } + async createDir(path: string): Promise { /* ... */ } + async mkdir(path: string): Promise { /* ... */ } + async exists(path: string): Promise { /* ... */ } + async stat(path: string): Promise { /* ... */ } + async removeFile(path: string): Promise { /* ... */ } + async removeDir(path: string): Promise { /* ... */ } + async rename(oldPath: string, newPath: string): Promise { /* ... */ } + async realpath(path: string): Promise { /* ... */ } + async symlink(target: string, linkPath: string): Promise { /* ... */ } + async readlink(path: string): Promise { /* ... */ } + async lstat(path: string): Promise { /* ... */ } + async link(oldPath: string, newPath: string): Promise { /* ... */ } + async chmod(path: string, mode: number): Promise { /* ... */ } + async chown(path: string, uid: number, gid: number): Promise { /* ... */ } + async utimes(path: string, atime: number, mtime: number): Promise { /* ... */ } + async truncate(path: string, length: number): Promise { /* ... */ } + async pread(path: string, offset: number, length: number): Promise { /* ... */ } +} ``` ## More examples diff --git a/docs/nodejs-conformance-report.mdx b/docs/nodejs-conformance-report.mdx index 38d0ae9d..52f53767 100644 --- a/docs/nodejs-conformance-report.mdx +++ b/docs/nodejs-conformance-report.mdx @@ -12,26 +12,26 @@ description: Node.js v22 test/parallel/ conformance results for the secure-exec | Node.js version | 22.14.0 | | Source | v22.14.0 (test/parallel/) | | Total tests | 3532 | -| Passing (genuine) | 704 (19.9%) | -| Passing (vacuous self-skip) | 34 | -| Passing (total) | 738 (20.9%) | -| Expected fail | 2723 | +| Passing (genuine) | 754 (21.3%) | +| Passing (vacuous self-skip) | 33 | +| Passing (total) | 787 (22.3%) | +| Expected fail | 2674 | | Skip | 71 | -| Last updated | 2026-03-25 | +| Last updated | 2026-03-26 | ## Failure Categories | Category | Tests | | --- | --- | -| implementation-gap | 1422 | -| unsupported-module | 737 | +| implementation-gap | 1372 | +| unsupported-module | 738 | | requires-v8-flags | 239 | | requires-exec-path | 200 | -| unsupported-api | 124 | +| unsupported-api | 123 | | test-infra | 68 | -| vacuous-skip | 34 | +| vacuous-skip | 33 | | native-addon | 3 | -| security-constraint | 1 | +| security-constraint | 2 | ## Per-Module Results @@ -70,7 +70,7 @@ description: Node.js v22 test/parallel/ conformance results for the secure-exec | constants | 1 | 0 | 1 | 0 | 0.0% | | corepack | 1 | 0 | 1 | 0 | 0.0% | | coverage | 1 | 0 | 1 | 0 | 0.0% | -| crypto | 99 | 16 (13 vacuous) | 83 | 0 | 16.2% | +| crypto | 99 | 56 (12 vacuous) | 43 | 0 | 56.6% | | cwd | 3 | 0 | 3 | 0 | 0.0% | | data | 1 | 0 | 1 | 0 | 0.0% | | datetime | 1 | 0 | 1 | 0 | 0.0% | @@ -116,14 +116,14 @@ description: Node.js v22 test/parallel/ conformance results for the secure-exec | freeze | 1 | 0 | 1 | 0 | 0.0% | | fs | 232 | 69 (8 vacuous) | 129 | 34 | 34.8% | | gc | 3 | 0 | 3 | 0 | 0.0% | -| global | 11 | 2 | 9 | 0 | 18.2% | +| global | 11 | 3 | 8 | 0 | 27.3% | | h2 | 1 | 0 | 1 | 0 | 0.0% | | h2leak | 1 | 0 | 1 | 0 | 0.0% | | handle | 2 | 1 | 1 | 0 | 50.0% | | heap | 11 | 0 | 11 | 0 | 0.0% | | heapdump | 1 | 1 | 0 | 0 | 100.0% | | heapsnapshot | 2 | 0 | 2 | 0 | 0.0% | -| http | 377 | 237 (1 vacuous) | 139 | 1 | 63.0% | +| http | 377 | 243 (1 vacuous) | 133 | 1 | 64.6% | | http2 | 256 | 4 | 252 | 0 | 1.6% | | https | 62 | 4 | 58 | 0 | 6.5% | | icu | 5 | 0 | 5 | 0 | 0.0% | @@ -234,7 +234,7 @@ description: Node.js v22 test/parallel/ conformance results for the secure-exec | vm | 79 | 11 | 67 | 1 | 14.1% | | warn | 2 | 0 | 2 | 0 | 0.0% | | weakref | 1 | 1 | 0 | 0 | 100.0% | -| webcrypto | 28 | 15 | 13 | 0 | 53.6% | +| webcrypto | 28 | 17 | 11 | 0 | 60.7% | | websocket | 2 | 1 | 1 | 0 | 50.0% | | webstorage | 1 | 0 | 1 | 0 | 0.0% | | webstream | 4 | 0 | 4 | 0 | 0.0% | @@ -245,11 +245,11 @@ description: Node.js v22 test/parallel/ conformance results for the secure-exec | wrap | 4 | 0 | 4 | 0 | 0.0% | | x509 | 1 | 0 | 1 | 0 | 0.0% | | zlib | 53 | 17 | 33 | 3 | 34.0% | -| **Total** | **3532** | **738** | **2723** | **71** | **21.3%** | +| **Total** | **3532** | **787** | **2674** | **71** | **22.7%** | ## Expectations Detail -### implementation-gap (741 entries) +### implementation-gap (691 entries) **Glob patterns:** @@ -260,9 +260,9 @@ description: Node.js v22 test/parallel/ conformance results for the secure-exec - `test-https-*.js` — https depends on tls — most tests fail on missing TLS fixture files or crypto API gaps - `test-http2-*.js` — http2 module bridged via kernel — most tests fail on API gaps, missing fixtures, or protocol handling -*735 individual tests — see expectations.json for full list.* +*685 individual tests — see expectations.json for full list.* -### unsupported-module (190 entries) +### unsupported-module (191 entries) **Glob patterns:** @@ -278,7 +278,7 @@ description: Node.js v22 test/parallel/ conformance results for the secure-exec - `test-debugger-*.js` — debugger protocol requires inspector which is Tier 5 (Unsupported) - `test-quic-*.js` — QUIC protocol depends on tls which is Tier 4 (Deferred) - + | Test | Reason | | --- | --- | @@ -458,13 +458,14 @@ description: Node.js v22 test/parallel/ conformance results for the secure-exec | `test-util-text-decoder.js` | requires node:test module which is not available in sandbox | | `test-warn-stream-wrap.js` | require('_stream_wrap') module not registered in sandbox — _stream_wrap is an internal Node.js alias not exposed through readable-stream polyfill | | `test-vm-timeout.js` | hangs — vm.runInNewContext with timeout blocks waiting for vm module (not available) | +| `test-crypto-worker-thread.js` | requires worker_threads module which is Tier 4 (Deferred) | | `test-assert-fail-deprecation.js` | requires 'test' module (node:test) which is not available in sandbox | | `test-buffer-resizable.js` | requires 'test' module (node:test) which is not available in sandbox | | `test-stream-consumers.js` | stream/consumers submodule not available in stream polyfill | -### unsupported-api (79 entries) +### unsupported-api (78 entries) **Glob patterns:** @@ -472,7 +473,7 @@ description: Node.js v22 test/parallel/ conformance results for the secure-exec - `test-shadow-*.js` — ShadowRealm is experimental and not supported in sandbox - `test-compile-*.js` — V8 compile cache/code cache features not available in sandbox - + | Test | Reason | | --- | --- | @@ -504,7 +505,6 @@ description: Node.js v22 test/parallel/ conformance results for the secure-exec | `test-fs-promises-file-handle-writeFile.js` | Readable.from is not available in the browser — stream.Readable.from() factory not implemented in sandbox stream polyfill | | `test-fs-promises-writefile.js` | Readable.from is not available in the browser — stream.Readable.from() factory not implemented; used by writeFile() Readable/iterable overload | | `test-http-addrequest-localaddress.js` | TypeError: agent.addRequest is not a function — http.Agent.addRequest() internal method not implemented in http polyfill | -| `test-http-agent-getname.js` | TypeError: agent.getName() is not a function — http.Agent.getName() not implemented in http polyfill | | `test-http-header-validators.js` | TypeError: Cannot read properties of undefined (reading 'constructor') — validateHeaderName/validateHeaderValue not exported from http polyfill module | | `test-http-import-websocket.js` | ReferenceError: WebSocket is not defined — WebSocket global not available in sandbox; undici WebSocket not polyfilled as a global | | `test-http-incoming-matchKnownFields.js` | TypeError: incomingMessage._addHeaderLine is not a function — http.IncomingMessage._addHeaderLine() internal method not implemented in http polyfill | @@ -744,12 +744,13 @@ description: Node.js v22 test/parallel/ conformance results for the secure-exec -### security-constraint (1 entries) +### security-constraint (2 entries) - + | Test | Reason | | --- | --- | +| `test-crypto-pbkdf2.js` | SharedArrayBuffer is intentionally removed by sandbox hardening, so the vendored TypedArray coverage loop aborts before the remaining pbkdf2 assertions run | | `test-process-binding-internalbinding-allowlist.js` | process.binding is not supported in sandbox (security constraint) | @@ -800,9 +801,9 @@ description: Node.js v22 test/parallel/ conformance results for the secure-exec -### vacuous-skip (34 entries) +### vacuous-skip (33 entries) - + | Test | Reason | | --- | --- | @@ -813,7 +814,6 @@ description: Node.js v22 test/parallel/ conformance results for the secure-exec | `test-crypto-keygen-empty-passphrase-no-error.js` | vacuous pass — test self-skips via common.skip() because common.hasCrypto is false | | `test-crypto-keygen-missing-oid.js` | vacuous pass — test self-skips via common.skip() because common.hasCrypto is false | | `test-crypto-keygen-promisify.js` | vacuous pass — test self-skips via common.skip() because common.hasCrypto is false | -| `test-crypto-no-algorithm.js` | vacuous pass — test self-skips via common.skip() because common.hasCrypto is false | | `test-crypto-op-during-process-exit.js` | vacuous pass — test self-skips via common.skip() because common.hasCrypto is false | | `test-crypto-padding-aes256.js` | vacuous pass — test self-skips via common.skip() because common.hasCrypto is false | | `test-crypto-publicDecrypt-fails-first-time.js` | vacuous pass — test self-skips via common.skip() because common.hasCrypto is false | diff --git a/docs/quickstart.mdx b/docs/quickstart.mdx index 7cf43e88..e745c7c4 100644 --- a/docs/quickstart.mdx +++ b/docs/quickstart.mdx @@ -11,192 +11,186 @@ icon: "rocket" npm install secure-exec ``` - ```bash bun - bun add secure-exec - ``` - ```bash pnpm pnpm add secure-exec ``` + ```bash bun + bun add secure-exec + ``` + ```bash yarn yarn add secure-exec ``` - - A kernel manages a virtual filesystem, process table, and permissions. Mount a Node runtime to execute JavaScript. + + A `NodeRuntime` executes JavaScript in an isolated V8 sandbox with its own virtual filesystem, module system, and permissions. + + Source: `examples/kitchen-sink/src/create-runtime.ts` - ```ts + ```ts Create Runtime import { - createKernel, - createInMemoryFileSystem, - createNodeRuntime, + NodeRuntime, + createNodeDriver, + createNodeRuntimeDriverFactory, } from "secure-exec"; - const filesystem = createInMemoryFileSystem(); - const kernel = createKernel({ filesystem }); - await kernel.mount(createNodeRuntime()); + const runtime = new NodeRuntime({ + systemDriver: createNodeDriver(), + runtimeDriverFactory: createNodeRuntimeDriverFactory(), + }); + + runtime.dispose(); ``` - Use `kernel.exec()` to run commands. Use the filesystem to read and write files. + Use `runtime.run()` to execute JavaScript and get back exported values. Use `runtime.exec()` for scripts that produce console output. + - ```ts Simple + Source: `examples/kitchen-sink/src/run-get-exports.ts` + + ```ts Run & Get Exports import { - createKernel, - createInMemoryFileSystem, - createNodeRuntime, + NodeRuntime, + createNodeDriver, + createNodeRuntimeDriverFactory, } from "secure-exec"; - const kernel = createKernel({ - filesystem: createInMemoryFileSystem(), + const runtime = new NodeRuntime({ + systemDriver: createNodeDriver(), + runtimeDriverFactory: createNodeRuntimeDriverFactory(), }); - await kernel.mount(createNodeRuntime()); - const result = await kernel.exec( - "node -e \"console.log('hello from secure-exec')\"" + const result = await runtime.run<{ message: string }>( + `module.exports = { message: "hello from secure-exec" };` ); - console.log(result.stdout); // "hello from secure-exec\n" + console.log(result.exports?.message); // "hello from secure-exec" - await kernel.dispose(); + runtime.dispose(); ``` - ```ts Filesystem + Source: `examples/kitchen-sink/src/execute-capture-output.ts` + + ```ts Execute & Capture Output import { - createKernel, - createInMemoryFileSystem, - createNodeRuntime, + NodeRuntime, + createNodeDriver, + createNodeRuntimeDriverFactory, } from "secure-exec"; - const filesystem = createInMemoryFileSystem(); - const kernel = createKernel({ - filesystem, - permissions: { - fs: () => ({ allow: true }), + const runtime = new NodeRuntime({ + systemDriver: createNodeDriver(), + runtimeDriverFactory: createNodeRuntimeDriverFactory(), + onStdio: (event) => { + process.stdout.write(event.message); }, }); - await kernel.mount(createNodeRuntime()); - await kernel.exec(`node -e " - const fs = require('node:fs'); - fs.mkdirSync('/workspace', { recursive: true }); - fs.writeFileSync('/workspace/hello.txt', 'hello from the sandbox'); - "`); + const result = await runtime.exec(` + console.log("hello from secure-exec"); + `); - const bytes = await filesystem.readFile("/workspace/hello.txt"); - console.log(new TextDecoder().decode(bytes)); // "hello from the sandbox" + console.log("exit code:", result.code); // 0 - await kernel.dispose(); + runtime.dispose(); ``` - ```ts Logging + Source: `examples/kitchen-sink/src/filesystem.ts` + + ```ts Filesystem import { - createKernel, + NodeRuntime, + createNodeDriver, + createNodeRuntimeDriverFactory, createInMemoryFileSystem, - createNodeRuntime, + allowAllFs, } from "secure-exec"; - const kernel = createKernel({ - filesystem: createInMemoryFileSystem(), + const filesystem = createInMemoryFileSystem(); + + const runtime = new NodeRuntime({ + systemDriver: createNodeDriver({ + filesystem, + permissions: { ...allowAllFs }, + }), + runtimeDriverFactory: createNodeRuntimeDriverFactory(), }); - await kernel.mount(createNodeRuntime()); - const result = await kernel.exec( - "node -e \"console.log('hello'); console.error('oops')\"" - ); + await runtime.exec(` + const fs = require("node:fs"); + fs.mkdirSync("/workspace", { recursive: true }); + fs.writeFileSync("/workspace/hello.txt", "hello from the sandbox"); + `); - console.log(result.stdout); // "hello\n" - console.log(result.stderr); // "oops\n" + const bytes = await filesystem.readFile("/workspace/hello.txt"); + console.log(new TextDecoder().decode(bytes)); // "hello from the sandbox" - await kernel.dispose(); + runtime.dispose(); ``` - ```ts Fetch + Source: `examples/kitchen-sink/src/network-access.ts` + + ```ts Network Access import { - createKernel, - createInMemoryFileSystem, - createNodeRuntime, + NodeRuntime, + createNodeDriver, + createNodeRuntimeDriverFactory, + allowAllNetwork, } from "secure-exec"; - const kernel = createKernel({ - filesystem: createInMemoryFileSystem(), - permissions: { - network: () => ({ allow: true }), + const runtime = new NodeRuntime({ + systemDriver: createNodeDriver({ + useDefaultNetwork: true, + permissions: { ...allowAllNetwork }, + }), + runtimeDriverFactory: createNodeRuntimeDriverFactory(), + onStdio: (event) => { + process.stdout.write(event.message); }, }); - await kernel.mount(createNodeRuntime()); - - const result = await kernel.exec(`node -e " - (async () => { - const response = await fetch('https://example.com'); - console.log(response.status); - })(); - "`); - console.log(result.stdout); // "200\n" + await runtime.exec(` + const response = await fetch("http://example.com"); + console.log(response.status); // 200 + `, { + filePath: "/entry.mjs", // enables top-level await + }); - await kernel.dispose(); + runtime.dispose(); ``` - ```ts Run Script File + Source: `examples/kitchen-sink/src/esm-modules.ts` + + ```ts ESM Modules import { - createKernel, - createInMemoryFileSystem, - createNodeRuntime, + NodeRuntime, + createNodeDriver, + createNodeRuntimeDriverFactory, } from "secure-exec"; - const filesystem = createInMemoryFileSystem(); - const kernel = createKernel({ - filesystem, - permissions: { - fs: () => ({ allow: true }), - }, + const runtime = new NodeRuntime({ + systemDriver: createNodeDriver(), + runtimeDriverFactory: createNodeRuntimeDriverFactory(), }); - await kernel.mount(createNodeRuntime()); - // Write a script to the virtual filesystem - await kernel.writeFile("/app/hello.js", ` - const { execSync } = require("node:child_process"); - console.log(execSync("node --version", { encoding: "utf8" }).trim()); - `); + const result = await runtime.run<{ answer: number }>( + `export const answer = 42;`, + "/entry.mjs" // .mjs extension triggers ESM mode + ); - const result = await kernel.exec("node /app/hello.js"); - console.log(result.stdout); // e.g. "v22.x.x\n" + console.log(result.exports?.answer); // 42 - await kernel.dispose(); + runtime.dispose(); ``` -## Alternative: NodeRuntime - -For direct code execution with typed return values, use the `NodeRuntime` convenience class. - -```ts -import { - NodeRuntime, - createNodeDriver, - createNodeRuntimeDriverFactory, -} from "secure-exec"; - -const runtime = new NodeRuntime({ - systemDriver: createNodeDriver(), - runtimeDriverFactory: createNodeRuntimeDriverFactory(), -}); - -const result = await runtime.run<{ message: string }>( - "module.exports = { message: 'hello from secure-exec' };" -); - -console.log(result.exports?.message); // "hello from secure-exec" -``` - ## Next steps diff --git a/docs/use-cases/dev-servers.mdx b/docs/use-cases/dev-servers.mdx index 0bc1614b..80c496e7 100644 --- a/docs/use-cases/dev-servers.mdx +++ b/docs/use-cases/dev-servers.mdx @@ -14,10 +14,16 @@ Let users run their own dev servers inside a sandboxed isolate. Secure Exec can Start a user-provided Hono server inside the isolate, wait for its health endpoint, fetch a response from the host, then terminate. +Source file: `examples/hono-dev-server/src/index.ts` + ```ts Hono Dev Server import { createServer } from "node:net"; +import path from "node:path"; +import { createRequire } from "node:module"; +import { fileURLToPath } from "node:url"; import { NodeRuntime, + allowAllFs, allowAllNetwork, createNodeDriver, createNodeRuntimeDriverFactory, @@ -26,11 +32,16 @@ import { const host = "127.0.0.1"; const port = await findOpenPort(); const logs: string[] = []; +const require = createRequire(import.meta.url); +const repoRoot = path.resolve(path.dirname(fileURLToPath(import.meta.url)), "../../.."); +const honoEntry = toSandboxModulePath(require.resolve("hono")); +const honoNodeServerEntry = toSandboxModulePath(require.resolve("@hono/node-server")); const runtime = new NodeRuntime({ systemDriver: createNodeDriver({ + moduleAccess: { cwd: repoRoot }, useDefaultNetwork: true, - permissions: { ...allowAllNetwork }, + permissions: { ...allowAllFs, ...allowAllNetwork }, }), runtimeDriverFactory: createNodeRuntimeDriverFactory(), memoryLimit: 128, @@ -38,26 +49,22 @@ const runtime = new NodeRuntime({ }); const execPromise = runtime.exec(` - (async () => { - const { Hono } = require("hono"); - const { serve } = require("@hono/node-server"); - - const app = new Hono(); - app.get("/", (c) => c.text("hello from sandboxed hono")); - app.get("/health", (c) => c.json({ ok: true })); - - serve({ - fetch: app.fetch, - port: ${port}, - hostname: "${host}", - }); - - console.log("server:listening:${port}"); - await new Promise(() => {}); - })().catch((error) => { - console.error(error); - process.exitCode = 1; + globalThis.global = globalThis; + const { Hono } = require("${honoEntry}"); + const { serve } = require("${honoNodeServerEntry}"); + + const app = new Hono(); + app.get("/", (c) => c.text("hello from sandboxed hono")); + app.get("/health", (c) => c.json({ ok: true })); + + serve({ + fetch: app.fetch, + port: ${port}, + hostname: "${host}", }); + + console.log("server:listening:${port}"); + setInterval(() => {}, 1 << 30); `, { onStdio: (event) => logs.push(`[${event.channel}] ${event.message}`), }); @@ -76,6 +83,15 @@ try { await execPromise.catch(() => undefined); } +function toSandboxModulePath(hostPath: string): string { + const hostNodeModulesRoot = path.join(repoRoot, "node_modules"); + const relativePath = path.relative(hostNodeModulesRoot, hostPath); + if (relativePath.startsWith("..")) { + throw new Error(`Expected module inside ${hostNodeModulesRoot}: ${hostPath}`); + } + return path.posix.join("/root/node_modules", relativePath.split(path.sep).join("/")); +} + async function findOpenPort(): Promise { return new Promise((resolve, reject) => { const server = createServer(); diff --git a/examples/ai-agent-type-check/docs-gen.json b/examples/ai-agent-type-check/docs-gen.json new file mode 100644 index 00000000..3a47dd50 --- /dev/null +++ b/examples/ai-agent-type-check/docs-gen.json @@ -0,0 +1,14 @@ +{ + "kind": "titledBlocks", + "docsPath": "../../docs/use-cases/ai-agent-code-exec.mdx", + "entries": [ + { + "title": "JavaScript Execution", + "examplePath": "../ai-sdk/src/index.ts" + }, + { + "title": "Type-Checked Execution", + "examplePath": "src/index.ts" + } + ] +} diff --git a/examples/ai-agent-type-check/package.json b/examples/ai-agent-type-check/package.json index 48a4a1c1..aeba7ff4 100644 --- a/examples/ai-agent-type-check/package.json +++ b/examples/ai-agent-type-check/package.json @@ -5,7 +5,7 @@ "scripts": { "check-types": "tsc --noEmit -p tsconfig.json", "dev": "tsx src/index.ts", - "verify-docs": "node scripts/verify-docs.mjs" + "verify-docs": "docs-gen verify --config docs-gen.json" }, "dependencies": { "@ai-sdk/anthropic": "^3.0.58", @@ -15,6 +15,7 @@ "zod": "^3.24.0" }, "devDependencies": { + "@secure-exec/docs-gen": "workspace:*", "@types/node": "^22.10.2", "tsx": "^4.19.2", "typescript": "^5.7.2" diff --git a/examples/ai-agent-type-check/scripts/verify-docs.mjs b/examples/ai-agent-type-check/scripts/verify-docs.mjs deleted file mode 100644 index 9308067b..00000000 --- a/examples/ai-agent-type-check/scripts/verify-docs.mjs +++ /dev/null @@ -1,73 +0,0 @@ -import { readFile } from "node:fs/promises"; -import path from "node:path"; -import { fileURLToPath } from "node:url"; - -const __dirname = path.dirname(fileURLToPath(import.meta.url)); -const repoRoot = path.resolve(__dirname, "../../.."); -const docsPath = path.join(repoRoot, "docs/use-cases/ai-agent-code-exec.mdx"); - -const expectedFiles = new Map([ - ["JavaScript Execution", path.join(repoRoot, "examples/ai-sdk/src/index.ts")], - ["Type-Checked Execution", path.join(repoRoot, "examples/ai-agent-type-check/src/index.ts")], -]); - -function normalizeTitle(title) { - return title.trim().replace(/^"|"$/g, ""); -} - -function normalizeCode(source) { - const normalized = source.replace(/\r\n/g, "\n").replace(/^\n+|\n+$/g, ""); - const lines = normalized.split("\n"); - const nonEmptyLines = lines.filter((line) => line.trim().length > 0); - const minIndent = nonEmptyLines.reduce((indent, line) => { - const lineIndent = line.match(/^ */)?.[0].length ?? 0; - return Math.min(indent, lineIndent); - }, Number.POSITIVE_INFINITY); - - if (!Number.isFinite(minIndent) || minIndent === 0) { - return normalized; - } - - return lines.map((line) => line.slice(minIndent)).join("\n"); -} - -const docsSource = await readFile(docsPath, "utf8"); -const blockPattern = /^\s*```ts(?:\s+([^\n]+))?\n([\s\S]*?)^\s*```/gm; -const docBlocks = new Map(); - -for (const match of docsSource.matchAll(blockPattern)) { - const rawTitle = match[1]; - if (!rawTitle) { - continue; - } - - const title = normalizeTitle(rawTitle); - if (!expectedFiles.has(title)) { - continue; - } - - docBlocks.set(title, normalizeCode(match[2] ?? "")); -} - -const mismatches = []; - -for (const [title, filePath] of expectedFiles) { - const fileSource = normalizeCode(await readFile(filePath, "utf8")); - const docSource = docBlocks.get(title); - - if (!docSource) { - mismatches.push(`Missing docs snippet for ${title}`); - continue; - } - - if (docSource !== fileSource) { - mismatches.push(`Snippet mismatch for ${title}`); - } -} - -if (mismatches.length > 0) { - console.error(mismatches.join("\n")); - process.exit(1); -} - -console.log("AI agent docs match example sources."); diff --git a/examples/code-mode/docs-gen.json b/examples/code-mode/docs-gen.json new file mode 100644 index 00000000..ad40a872 --- /dev/null +++ b/examples/code-mode/docs-gen.json @@ -0,0 +1,7 @@ +{ + "kind": "contains", + "docsPath": "../../docs/use-cases/code-mode.mdx", + "required": [ + "examples/code-mode" + ] +} diff --git a/examples/code-mode/package.json b/examples/code-mode/package.json index a4249597..5128aabb 100644 --- a/examples/code-mode/package.json +++ b/examples/code-mode/package.json @@ -5,7 +5,7 @@ "scripts": { "check-types": "tsc --noEmit -p tsconfig.json", "dev": "tsx src/index.ts", - "verify-docs": "node scripts/verify-docs.mjs" + "verify-docs": "docs-gen verify --config docs-gen.json" }, "dependencies": { "@ai-sdk/anthropic": "^3.0.58", @@ -14,6 +14,7 @@ "zod": "^3.24.0" }, "devDependencies": { + "@secure-exec/docs-gen": "workspace:*", "@types/node": "^22.10.2", "tsx": "^4.19.2", "typescript": "^5.7.2" diff --git a/examples/code-mode/scripts/verify-docs.mjs b/examples/code-mode/scripts/verify-docs.mjs deleted file mode 100644 index 04b066f4..00000000 --- a/examples/code-mode/scripts/verify-docs.mjs +++ /dev/null @@ -1,17 +0,0 @@ -import { readFile } from "node:fs/promises"; -import path from "node:path"; -import { fileURLToPath } from "node:url"; - -const __dirname = path.dirname(fileURLToPath(import.meta.url)); -const repoRoot = path.resolve(__dirname, "../../.."); -const docsPath = path.join(repoRoot, "docs/use-cases/code-mode.mdx"); - -const docsSource = await readFile(docsPath, "utf8"); - -// Verify the docs page links to the example -if (!docsSource.includes("examples/code-mode")) { - console.error("Code Mode docs missing link to example"); - process.exit(1); -} - -console.log("Code Mode docs verified."); diff --git a/examples/features/docs-gen.json b/examples/features/docs-gen.json new file mode 100644 index 00000000..c9b851a2 --- /dev/null +++ b/examples/features/docs-gen.json @@ -0,0 +1,55 @@ +{ + "kind": "multiFirstTsBlock", + "entries": [ + { + "docsPath": "../../docs/features/child-processes.mdx", + "examplePath": "src/child-processes.ts" + }, + { + "docsPath": "../../docs/features/filesystem.mdx", + "examplePath": "src/filesystem.ts" + }, + { + "docsPath": "../../docs/features/module-loading.mdx", + "examplePath": "src/module-loading.ts" + }, + { + "docsPath": "../../docs/features/networking.mdx", + "examplePath": "src/networking.ts" + }, + { + "docsPath": "../../docs/features/output-capture.mdx", + "examplePath": "src/output-capture.ts" + }, + { + "docsPath": "../../docs/features/permissions.mdx", + "examplePath": "src/permissions.ts" + }, + { + "docsPath": "../../docs/features/resource-limits.mdx", + "examplePath": "src/resource-limits.ts" + }, + { + "docsPath": "../../docs/features/typescript.mdx", + "examplePath": "src/typescript.ts" + }, + { + "docsPath": "../../docs/features/virtual-filesystem.mdx", + "examplePath": "src/virtual-filesystem.ts" + } + ], + "importReplacements": [ + { + "from": "\"../../../packages/secure-exec/src/index.ts\"", + "to": "\"secure-exec\"" + }, + { + "from": "\"../../../packages/secure-exec/src/types.ts\"", + "to": "\"secure-exec\"" + }, + { + "from": "\"../../../packages/typescript/src/index.ts\"", + "to": "\"@secure-exec/typescript\"" + } + ] +} diff --git a/examples/features/package.json b/examples/features/package.json index 5544a6f6..d68e7636 100644 --- a/examples/features/package.json +++ b/examples/features/package.json @@ -4,7 +4,7 @@ "type": "module", "scripts": { "check-types": "tsc --noEmit -p tsconfig.json", - "verify-docs": "node scripts/verify-docs.mjs", + "verify-docs": "docs-gen verify --config docs-gen.json", "verify-e2e": "node scripts/verify-e2e.mjs", "test": "pnpm run verify-docs && pnpm run verify-e2e" }, @@ -13,6 +13,7 @@ "secure-exec": "workspace:*" }, "devDependencies": { + "@secure-exec/docs-gen": "workspace:*", "@types/node": "^22.10.2", "typescript": "^5.7.2" } diff --git a/examples/features/scripts/verify-docs.mjs b/examples/features/scripts/verify-docs.mjs deleted file mode 100644 index 985cf1a0..00000000 --- a/examples/features/scripts/verify-docs.mjs +++ /dev/null @@ -1,68 +0,0 @@ -import { readFile } from "node:fs/promises"; -import path from "node:path"; -import { fileURLToPath } from "node:url"; - -const __dirname = path.dirname(fileURLToPath(import.meta.url)); -const repoRoot = path.resolve(__dirname, "../../.."); -const examplesRoot = path.resolve(__dirname, ".."); - -const docToExample = new Map([ - ["docs/features/child-processes.mdx", "src/child-processes.ts"], - ["docs/features/filesystem.mdx", "src/filesystem.ts"], - ["docs/features/module-loading.mdx", "src/module-loading.ts"], - ["docs/features/networking.mdx", "src/networking.ts"], - ["docs/features/output-capture.mdx", "src/output-capture.ts"], - ["docs/features/permissions.mdx", "src/permissions.ts"], - ["docs/features/resource-limits.mdx", "src/resource-limits.ts"], - ["docs/features/typescript.mdx", "src/typescript.ts"], -]); - -function normalizeCode(source) { - const normalized = source.replace(/\r\n/g, "\n").replace(/^\n+|\n+$/g, ""); - const lines = normalized.split("\n"); - const nonEmptyLines = lines.filter((line) => line.trim().length > 0); - const minIndent = nonEmptyLines.reduce((indent, line) => { - const lineIndent = line.match(/^ */)?.[0].length ?? 0; - return Math.min(indent, lineIndent); - }, Number.POSITIVE_INFINITY); - - if (!Number.isFinite(minIndent) || minIndent === 0) { - return normalized; - } - - return lines.map((line) => line.slice(minIndent)).join("\n"); -} - -function getFirstTsBlock(source) { - const match = source.match(/^\s*```ts(?: [^\n]+)?\n([\s\S]*?)^\s*```/m); - if (!match?.[1]) { - return null; - } - - return normalizeCode(match[1]); -} - -const mismatches = []; - -for (const [docPath, examplePath] of docToExample) { - const docsSource = await readFile(path.join(repoRoot, docPath), "utf8"); - const exampleSource = await readFile(path.join(examplesRoot, examplePath), "utf8"); - const docBlock = getFirstTsBlock(docsSource); - const normalizedExample = normalizeCode(exampleSource); - - if (!docBlock) { - mismatches.push(`Missing TypeScript example in ${docPath}`); - continue; - } - - if (docBlock !== normalizedExample) { - mismatches.push(`Snippet mismatch: ${docPath}`); - } -} - -if (mismatches.length > 0) { - console.error(mismatches.join("\n")); - process.exit(1); -} - -console.log("Feature docs match example sources."); diff --git a/examples/features/scripts/verify-e2e.mjs b/examples/features/scripts/verify-e2e.mjs index a73b387a..1f068755 100644 --- a/examples/features/scripts/verify-e2e.mjs +++ b/examples/features/scripts/verify-e2e.mjs @@ -14,6 +14,7 @@ const featureFiles = [ "src/permissions.ts", "src/resource-limits.ts", "src/typescript.ts", + "src/virtual-filesystem.ts", ]; function runExample(relativePath) { @@ -26,16 +27,61 @@ function runExample(relativePath) { let stdout = ""; let stderr = ""; + let settled = false; + const timeout = setTimeout(() => { + if (settled) return; + settled = true; + child.kill("SIGKILL"); + reject(new Error(`${relativePath} timed out\nstdout:\n${stdout}\nstderr:\n${stderr}`)); + }, 30_000); + + function tryGetPayload() { + const jsonLine = stdout + .trim() + .split("\n") + .map((line) => line.trim()) + .filter(Boolean) + .at(-1); + + if (!jsonLine) { + return null; + } + + try { + return JSON.parse(jsonLine); + } catch { + return null; + } + } child.stdout.on("data", (chunk) => { stdout += chunk.toString(); + + const payload = tryGetPayload(); + if (!settled && payload?.ok) { + settled = true; + clearTimeout(timeout); + child.kill("SIGKILL"); + resolve(payload); + } }); child.stderr.on("data", (chunk) => { stderr += chunk.toString(); }); - child.on("error", reject); + child.on("error", (error) => { + if (settled) return; + settled = true; + clearTimeout(timeout); + reject(error); + }); child.on("close", (code) => { + clearTimeout(timeout); + if (settled) { + return; + } + + settled = true; if (code !== 0) { reject( new Error( @@ -45,30 +91,12 @@ function runExample(relativePath) { return; } - const jsonLine = stdout - .trim() - .split("\n") - .map((line) => line.trim()) - .filter(Boolean) - .at(-1); - - if (!jsonLine) { + const payload = tryGetPayload(); + if (!payload) { reject(new Error(`${relativePath} produced no JSON result`)); return; } - let payload; - try { - payload = JSON.parse(jsonLine); - } catch (error) { - reject( - new Error( - `${relativePath} produced invalid JSON\nstdout:\n${stdout}\nstderr:\n${stderr}\n${error}`, - ), - ); - return; - } - if (!payload?.ok) { reject( new Error( diff --git a/examples/features/src/networking.ts b/examples/features/src/networking.ts index 51fa5f3d..3c162ef5 100644 --- a/examples/features/src/networking.ts +++ b/examples/features/src/networking.ts @@ -36,23 +36,19 @@ const runtime = new NodeRuntime({ try { const result = await runtime.exec( ` - (async () => { - const response = await fetch("http://127.0.0.1:${address.port}/"); - const body = await response.text(); + const response = await fetch("http://127.0.0.1:${address.port}/"); + const body = await response.text(); - if (!response.ok || response.status !== 200 || body !== "network-ok") { - throw new Error( - "unexpected response: " + response.status + " " + body, - ); - } + if (!response.ok || response.status !== 200 || body !== "network-ok") { + throw new Error( + "unexpected response: " + response.status + " " + body, + ); + } - console.log(JSON.stringify({ status: response.status, body })); - })().catch((error) => { - console.error(error instanceof Error ? error.message : String(error)); - process.exitCode = 1; - }); + console.log(JSON.stringify({ status: response.status, body })); `, { + filePath: "/entry.mjs", onStdio: (event) => { logs.push(`[${event.channel}] ${event.message}`); }, diff --git a/examples/features/src/typescript.ts b/examples/features/src/typescript.ts index ee73cfc6..a88be0af 100644 --- a/examples/features/src/typescript.ts +++ b/examples/features/src/typescript.ts @@ -27,7 +27,7 @@ const runtime = new NodeRuntime({ const ts = createTypeScriptTools({ systemDriver: compilerSystemDriver, runtimeDriverFactory, - compilerSpecifier: "/root/node_modules/typescript/lib/typescript.js", + compilerSpecifier: "typescript", }); try { diff --git a/examples/features/src/virtual-filesystem.ts b/examples/features/src/virtual-filesystem.ts new file mode 100644 index 00000000..8216034f --- /dev/null +++ b/examples/features/src/virtual-filesystem.ts @@ -0,0 +1,166 @@ +import type { DirEntry, StatInfo, VirtualFileSystem } from "secure-exec"; +import { + NodeRuntime, + allowAllFs, + createNodeDriver, + createNodeRuntimeDriverFactory, +} from "secure-exec"; + +class ReadOnlyMapFS implements VirtualFileSystem { + private files: Map; + + constructor(files: Record) { + this.files = new Map(Object.entries(files)); + } + + async readFile(path: string) { + const content = this.files.get(path); + if (content === undefined) throw new Error(`ENOENT: ${path}`); + return new TextEncoder().encode(content); + } + + async readTextFile(path: string) { + const content = this.files.get(path); + if (content === undefined) throw new Error(`ENOENT: ${path}`); + return content; + } + + async readDir(path: string) { + const prefix = path === "/" ? "/" : path + "/"; + const entries = new Set(); + for (const key of this.files.keys()) { + if (!key.startsWith(prefix)) continue; + const rest = key.slice(prefix.length); + if (rest.length > 0) { + entries.add(rest.split("/")[0]); + } + } + if (entries.size === 0) throw new Error(`ENOENT: ${path}`); + return [...entries]; + } + + async readDirWithTypes(path: string): Promise { + const names = await this.readDir(path); + const prefix = path === "/" ? "/" : path + "/"; + return names.map((name) => ({ + name, + isDirectory: this.#isDir(prefix + name), + isSymbolicLink: false, + })); + } + + async writeFile() { throw new Error("EROFS: read-only filesystem"); } + async createDir() { throw new Error("EROFS: read-only filesystem"); } + async mkdir() { throw new Error("EROFS: read-only filesystem"); } + + async exists(path: string) { + return this.files.has(path) || this.#isDir(path); + } + + async stat(path: string): Promise { + const now = Date.now(); + if (this.files.has(path)) { + return { + mode: 0o444, + size: new TextEncoder().encode(this.files.get(path) ?? "").byteLength, + isDirectory: false, + isSymbolicLink: false, + atimeMs: now, + mtimeMs: now, + ctimeMs: now, + birthtimeMs: now, + ino: 1, + nlink: 1, + uid: 0, + gid: 0, + }; + } + if (this.#isDir(path)) { + return { + mode: 0o555, + size: 0, + isDirectory: true, + isSymbolicLink: false, + atimeMs: now, + mtimeMs: now, + ctimeMs: now, + birthtimeMs: now, + ino: 1, + nlink: 1, + uid: 0, + gid: 0, + }; + } + throw new Error(`ENOENT: ${path}`); + } + + async removeFile() { throw new Error("EROFS: read-only filesystem"); } + async removeDir() { throw new Error("EROFS: read-only filesystem"); } + async rename() { throw new Error("EROFS: read-only filesystem"); } + async realpath(path: string) { return path; } + async symlink() { throw new Error("EROFS: read-only filesystem"); } + async readlink(_path: string): Promise { throw new Error("ENOSYS: no symlinks"); } + async lstat(path: string) { return this.stat(path); } + async link() { throw new Error("EROFS: read-only filesystem"); } + async chmod() { throw new Error("EROFS: read-only filesystem"); } + async chown() { throw new Error("EROFS: read-only filesystem"); } + async utimes() { throw new Error("EROFS: read-only filesystem"); } + async truncate() { throw new Error("EROFS: read-only filesystem"); } + async pread(path: string, offset: number, length: number) { + const bytes = await this.readFile(path); + return bytes.slice(offset, offset + length); + } + + #isDir(path: string) { + const prefix = path === "/" ? "/" : path + "/"; + for (const key of this.files.keys()) { + if (key.startsWith(prefix)) return true; + } + return false; + } +} + +const filesystem = new ReadOnlyMapFS({ + "/config.json": JSON.stringify({ greeting: "hello from custom vfs" }), +}); +const events: string[] = []; + +const runtime = new NodeRuntime({ + systemDriver: createNodeDriver({ + filesystem, + permissions: { ...allowAllFs }, + }), + runtimeDriverFactory: createNodeRuntimeDriverFactory(), +}); + +try { + const result = await runtime.exec( + ` + const fs = require("node:fs"); + const config = JSON.parse(fs.readFileSync("/config.json", "utf8")); + console.log(config.greeting); + `, + { + onStdio: (event) => { + if (event.channel === "stdout") { + events.push(event.message); + } + }, + }, + ); + + const message = events.at(-1); + if (result.code !== 0 || message !== "hello from custom vfs") { + throw new Error(`Unexpected runtime result: ${JSON.stringify({ result, events })}`); + } + + console.log( + JSON.stringify({ + ok: true, + message, + summary: "sandbox read config data from a custom read-only virtual filesystem", + }), + ); +} finally { + runtime.dispose(); +} diff --git a/examples/hono-dev-server/docs-gen.json b/examples/hono-dev-server/docs-gen.json new file mode 100644 index 00000000..b6ca5138 --- /dev/null +++ b/examples/hono-dev-server/docs-gen.json @@ -0,0 +1,6 @@ +{ + "kind": "namedTsBlock", + "docsPath": "../../docs/use-cases/dev-servers.mdx", + "title": "Hono Dev Server", + "examplePath": "src/index.ts" +} diff --git a/examples/hono-dev-server/package.json b/examples/hono-dev-server/package.json index 03c6917d..82916045 100644 --- a/examples/hono-dev-server/package.json +++ b/examples/hono-dev-server/package.json @@ -5,7 +5,7 @@ "scripts": { "check-types": "tsc --noEmit -p tsconfig.json", "dev": "tsx src/index.ts", - "verify-docs": "node scripts/verify-docs.mjs" + "verify-docs": "docs-gen verify --config docs-gen.json" }, "dependencies": { "@hono/node-server": "^1.19.6", @@ -13,6 +13,7 @@ "secure-exec": "workspace:*" }, "devDependencies": { + "@secure-exec/docs-gen": "workspace:*", "@types/node": "^22.10.2", "tsx": "^4.19.2", "typescript": "^5.7.2" diff --git a/examples/hono-dev-server/scripts/verify-docs.mjs b/examples/hono-dev-server/scripts/verify-docs.mjs deleted file mode 100644 index 8c8e4b37..00000000 --- a/examples/hono-dev-server/scripts/verify-docs.mjs +++ /dev/null @@ -1,41 +0,0 @@ -import { readFile } from "node:fs/promises"; -import path from "node:path"; -import { fileURLToPath } from "node:url"; - -const __dirname = path.dirname(fileURLToPath(import.meta.url)); -const repoRoot = path.resolve(__dirname, "../../.."); -const docsPath = path.join(repoRoot, "docs/use-cases/dev-servers.mdx"); -const examplePath = path.join(repoRoot, "examples/hono-dev-server/src/index.ts"); - -function normalizeCode(source) { - const normalized = source.replace(/\r\n/g, "\n").replace(/^\n+|\n+$/g, ""); - const lines = normalized.split("\n"); - const nonEmptyLines = lines.filter((line) => line.trim().length > 0); - const minIndent = nonEmptyLines.reduce((indent, line) => { - const lineIndent = line.match(/^ */)?.[0].length ?? 0; - return Math.min(indent, lineIndent); - }, Number.POSITIVE_INFINITY); - - if (!Number.isFinite(minIndent) || minIndent === 0) { - return normalized; - } - - return lines.map((line) => line.slice(minIndent)).join("\n"); -} - -const docsSource = await readFile(docsPath, "utf8"); -const match = docsSource.match(/^\s*```ts Hono Dev Server\n([\s\S]*?)^\s*```/m); -if (!match) { - console.error("Missing docs snippet for Hono Dev Server"); - process.exit(1); -} - -const docSource = normalizeCode(match[1] ?? ""); -const fileSource = normalizeCode(await readFile(examplePath, "utf8")); - -if (docSource !== fileSource) { - console.error("Snippet mismatch for Hono Dev Server"); - process.exit(1); -} - -console.log("Dev server docs match example source."); diff --git a/examples/hono-dev-server/src/index.ts b/examples/hono-dev-server/src/index.ts index b96c232d..cb0950a2 100644 --- a/examples/hono-dev-server/src/index.ts +++ b/examples/hono-dev-server/src/index.ts @@ -1,6 +1,10 @@ import { createServer } from "node:net"; +import path from "node:path"; +import { createRequire } from "node:module"; +import { fileURLToPath } from "node:url"; import { NodeRuntime, + allowAllFs, allowAllNetwork, createNodeDriver, createNodeRuntimeDriverFactory, @@ -9,38 +13,39 @@ import { const host = "127.0.0.1"; const port = await findOpenPort(); const logs: string[] = []; +const require = createRequire(import.meta.url); +const repoRoot = path.resolve(path.dirname(fileURLToPath(import.meta.url)), "../../.."); +const honoEntry = toSandboxModulePath(require.resolve("hono")); +const honoNodeServerEntry = toSandboxModulePath(require.resolve("@hono/node-server")); const runtime = new NodeRuntime({ systemDriver: createNodeDriver({ + moduleAccess: { cwd: repoRoot }, useDefaultNetwork: true, - permissions: { ...allowAllNetwork }, + permissions: { ...allowAllFs, ...allowAllNetwork }, }), runtimeDriverFactory: createNodeRuntimeDriverFactory(), memoryLimit: 128, - cpuTimeLimitMs: 5000, + cpuTimeLimitMs: 60_000, }); const execPromise = runtime.exec(` - (async () => { - const { Hono } = require("hono"); - const { serve } = require("@hono/node-server"); - - const app = new Hono(); - app.get("/", (c) => c.text("hello from sandboxed hono")); - app.get("/health", (c) => c.json({ ok: true })); - - serve({ - fetch: app.fetch, - port: ${port}, - hostname: "${host}", - }); + globalThis.global = globalThis; + const { Hono } = require("${honoEntry}"); + const { serve } = require("${honoNodeServerEntry}"); + + const app = new Hono(); + app.get("/", (c) => c.text("hello from sandboxed hono")); + app.get("/health", (c) => c.json({ ok: true })); - console.log("server:listening:${port}"); - await new Promise(() => {}); - })().catch((error) => { - console.error(error); - process.exitCode = 1; + serve({ + fetch: app.fetch, + port: ${port}, + hostname: "${host}", }); + + console.log("server:listening:${port}"); + setInterval(() => {}, 1 << 30); `, { onStdio: (event) => logs.push(`[${event.channel}] ${event.message}`), }); @@ -59,6 +64,15 @@ try { await execPromise.catch(() => undefined); } +function toSandboxModulePath(hostPath: string): string { + const hostNodeModulesRoot = path.join(repoRoot, "node_modules"); + const relativePath = path.relative(hostNodeModulesRoot, hostPath); + if (relativePath.startsWith("..")) { + throw new Error(`Expected module inside ${hostNodeModulesRoot}: ${hostPath}`); + } + return path.posix.join("/root/node_modules", relativePath.split(path.sep).join("/")); +} + async function findOpenPort(): Promise { return new Promise((resolve, reject) => { const server = createServer(); diff --git a/examples/kitchen-sink/README.md b/examples/kitchen-sink/README.md new file mode 100644 index 00000000..0be9e26a --- /dev/null +++ b/examples/kitchen-sink/README.md @@ -0,0 +1,10 @@ +# Kitchen Sink Examples + +These files mirror the examples in [docs/quickstart.mdx](../../docs/quickstart.mdx). + +Verify them with: + +```bash +pnpm --filter @secure-exec/example-kitchen-sink check-types +pnpm --filter @secure-exec/example-kitchen-sink verify-docs +``` diff --git a/examples/kitchen-sink/docs-gen.json b/examples/kitchen-sink/docs-gen.json new file mode 100644 index 00000000..4a1dd66d --- /dev/null +++ b/examples/kitchen-sink/docs-gen.json @@ -0,0 +1,30 @@ +{ + "kind": "titledBlocks", + "docsPath": "../../docs/quickstart.mdx", + "entries": [ + { + "title": "Create Runtime", + "examplePath": "src/create-runtime.ts" + }, + { + "title": "Run & Get Exports", + "examplePath": "src/run-get-exports.ts" + }, + { + "title": "Execute & Capture Output", + "examplePath": "src/execute-capture-output.ts" + }, + { + "title": "Filesystem", + "examplePath": "src/filesystem.ts" + }, + { + "title": "Network Access", + "examplePath": "src/network-access.ts" + }, + { + "title": "ESM Modules", + "examplePath": "src/esm-modules.ts" + } + ] +} diff --git a/examples/quickstart/package.json b/examples/kitchen-sink/package.json similarity index 58% rename from examples/quickstart/package.json rename to examples/kitchen-sink/package.json index 5d1eab63..c18b3d3f 100644 --- a/examples/quickstart/package.json +++ b/examples/kitchen-sink/package.json @@ -1,10 +1,12 @@ { - "name": "@secure-exec/example-quickstart", + "name": "@secure-exec/example-kitchen-sink", "private": true, "type": "module", "scripts": { "check-types": "tsc --noEmit -p tsconfig.json", - "verify-docs": "node scripts/verify-docs.mjs" + "verify-docs": "docs-gen verify --config docs-gen.json", + "verify-e2e": "node scripts/verify-e2e.mjs", + "test": "pnpm run verify-docs && pnpm run verify-e2e" }, "dependencies": { "@secure-exec/typescript": "workspace:*", @@ -13,6 +15,7 @@ "secure-exec": "workspace:*" }, "devDependencies": { + "@secure-exec/docs-gen": "workspace:*", "@types/node": "^22.10.2", "typescript": "^5.7.2" } diff --git a/examples/kitchen-sink/scripts/verify-e2e.mjs b/examples/kitchen-sink/scripts/verify-e2e.mjs new file mode 100644 index 00000000..8807b462 --- /dev/null +++ b/examples/kitchen-sink/scripts/verify-e2e.mjs @@ -0,0 +1,98 @@ +import { spawn } from "node:child_process"; +import path from "node:path"; +import { fileURLToPath } from "node:url"; + +const __dirname = path.dirname(fileURLToPath(import.meta.url)); +const examplesRoot = path.resolve(__dirname, ".."); + +const exampleChecks = [ + { path: "src/create-runtime.ts", contains: [] }, + { path: "src/run-get-exports.ts", contains: ["hello from secure-exec"] }, + { + path: "src/execute-capture-output.ts", + contains: ["hello from secure-exec", "exit code: 0"], + }, + { path: "src/filesystem.ts", contains: ["hello from the sandbox"] }, + { path: "src/network-access.ts", contains: ["200"] }, + { path: "src/esm-modules.ts", contains: ["42"] }, +]; + +function runExample({ path: relativePath, contains }) { + return new Promise((resolve, reject) => { + const child = spawn("pnpm", ["exec", "tsx", relativePath], { + cwd: examplesRoot, + env: process.env, + stdio: ["ignore", "pipe", "pipe"], + }); + + let stdout = ""; + let stderr = ""; + let settled = false; + + const timeout = setTimeout(() => { + if (settled) return; + settled = true; + child.kill("SIGKILL"); + reject(new Error(`${relativePath} timed out\nstdout:\n${stdout}\nstderr:\n${stderr}`)); + }, 30_000); + + function hasExpectedOutput() { + return contains.every((value) => stdout.includes(value)); + } + + child.stdout.on("data", (chunk) => { + stdout += chunk.toString(); + + if (!settled && hasExpectedOutput()) { + settled = true; + clearTimeout(timeout); + child.kill("SIGKILL"); + resolve({ stdout, stderr }); + } + }); + + child.stderr.on("data", (chunk) => { + stderr += chunk.toString(); + }); + + child.on("error", (error) => { + if (settled) return; + settled = true; + clearTimeout(timeout); + reject(error); + }); + + child.on("close", (code) => { + clearTimeout(timeout); + if (settled) return; + settled = true; + + if (code !== 0) { + reject( + new Error( + `${relativePath} exited with code ${code}\nstdout:\n${stdout}\nstderr:\n${stderr}`, + ), + ); + return; + } + + if (!hasExpectedOutput()) { + reject( + new Error( + `${relativePath} completed without expected output\nstdout:\n${stdout}\nstderr:\n${stderr}`, + ), + ); + return; + } + + resolve({ stdout, stderr }); + }); + }); +} + +for (const example of exampleChecks) { + await runExample(example); + console.log(`${example.path}: ok`); +} + +console.log("Quickstart examples passed end-to-end."); diff --git a/examples/kitchen-sink/src/create-runtime.ts b/examples/kitchen-sink/src/create-runtime.ts new file mode 100644 index 00000000..ec1df4b6 --- /dev/null +++ b/examples/kitchen-sink/src/create-runtime.ts @@ -0,0 +1,12 @@ +import { + NodeRuntime, + createNodeDriver, + createNodeRuntimeDriverFactory, +} from "secure-exec"; + +const runtime = new NodeRuntime({ + systemDriver: createNodeDriver(), + runtimeDriverFactory: createNodeRuntimeDriverFactory(), +}); + +runtime.dispose(); diff --git a/examples/kitchen-sink/src/esm-modules.ts b/examples/kitchen-sink/src/esm-modules.ts new file mode 100644 index 00000000..a3ea3997 --- /dev/null +++ b/examples/kitchen-sink/src/esm-modules.ts @@ -0,0 +1,19 @@ +import { + NodeRuntime, + createNodeDriver, + createNodeRuntimeDriverFactory, +} from "secure-exec"; + +const runtime = new NodeRuntime({ + systemDriver: createNodeDriver(), + runtimeDriverFactory: createNodeRuntimeDriverFactory(), +}); + +const result = await runtime.run<{ answer: number }>( + `export const answer = 42;`, + "/entry.mjs" // .mjs extension triggers ESM mode +); + +console.log(result.exports?.answer); // 42 + +runtime.dispose(); diff --git a/examples/kitchen-sink/src/execute-capture-output.ts b/examples/kitchen-sink/src/execute-capture-output.ts new file mode 100644 index 00000000..68d0c3b3 --- /dev/null +++ b/examples/kitchen-sink/src/execute-capture-output.ts @@ -0,0 +1,21 @@ +import { + NodeRuntime, + createNodeDriver, + createNodeRuntimeDriverFactory, +} from "secure-exec"; + +const runtime = new NodeRuntime({ + systemDriver: createNodeDriver(), + runtimeDriverFactory: createNodeRuntimeDriverFactory(), + onStdio: (event) => { + process.stdout.write(event.message); + }, +}); + +const result = await runtime.exec(` + console.log("hello from secure-exec"); +`); + +console.log("exit code:", result.code); // 0 + +runtime.dispose(); diff --git a/examples/kitchen-sink/src/filesystem.ts b/examples/kitchen-sink/src/filesystem.ts new file mode 100644 index 00000000..a5b0b1e3 --- /dev/null +++ b/examples/kitchen-sink/src/filesystem.ts @@ -0,0 +1,28 @@ +import { + NodeRuntime, + createNodeDriver, + createNodeRuntimeDriverFactory, + createInMemoryFileSystem, + allowAllFs, +} from "secure-exec"; + +const filesystem = createInMemoryFileSystem(); + +const runtime = new NodeRuntime({ + systemDriver: createNodeDriver({ + filesystem, + permissions: { ...allowAllFs }, + }), + runtimeDriverFactory: createNodeRuntimeDriverFactory(), +}); + +await runtime.exec(` + const fs = require("node:fs"); + fs.mkdirSync("/workspace", { recursive: true }); + fs.writeFileSync("/workspace/hello.txt", "hello from the sandbox"); +`); + +const bytes = await filesystem.readFile("/workspace/hello.txt"); +console.log(new TextDecoder().decode(bytes)); // "hello from the sandbox" + +runtime.dispose(); diff --git a/examples/kitchen-sink/src/network-access.ts b/examples/kitchen-sink/src/network-access.ts new file mode 100644 index 00000000..80623939 --- /dev/null +++ b/examples/kitchen-sink/src/network-access.ts @@ -0,0 +1,26 @@ +import { + NodeRuntime, + createNodeDriver, + createNodeRuntimeDriverFactory, + allowAllNetwork, +} from "secure-exec"; + +const runtime = new NodeRuntime({ + systemDriver: createNodeDriver({ + useDefaultNetwork: true, + permissions: { ...allowAllNetwork }, + }), + runtimeDriverFactory: createNodeRuntimeDriverFactory(), + onStdio: (event) => { + process.stdout.write(event.message); + }, +}); + +await runtime.exec(` + const response = await fetch("http://example.com"); + console.log(response.status); // 200 +`, { + filePath: "/entry.mjs", // enables top-level await +}); + +runtime.dispose(); diff --git a/examples/kitchen-sink/src/run-get-exports.ts b/examples/kitchen-sink/src/run-get-exports.ts new file mode 100644 index 00000000..5e62ad24 --- /dev/null +++ b/examples/kitchen-sink/src/run-get-exports.ts @@ -0,0 +1,18 @@ +import { + NodeRuntime, + createNodeDriver, + createNodeRuntimeDriverFactory, +} from "secure-exec"; + +const runtime = new NodeRuntime({ + systemDriver: createNodeDriver(), + runtimeDriverFactory: createNodeRuntimeDriverFactory(), +}); + +const result = await runtime.run<{ message: string }>( + `module.exports = { message: "hello from secure-exec" };` +); + +console.log(result.exports?.message); // "hello from secure-exec" + +runtime.dispose(); diff --git a/examples/quickstart/tsconfig.json b/examples/kitchen-sink/tsconfig.json similarity index 100% rename from examples/quickstart/tsconfig.json rename to examples/kitchen-sink/tsconfig.json diff --git a/examples/plugin-system/docs-gen.json b/examples/plugin-system/docs-gen.json new file mode 100644 index 00000000..4b7d6cb4 --- /dev/null +++ b/examples/plugin-system/docs-gen.json @@ -0,0 +1,6 @@ +{ + "kind": "namedTsBlock", + "docsPath": "../../docs/use-cases/plugin-systems.mdx", + "title": "Plugin Runner", + "examplePath": "src/index.ts" +} diff --git a/examples/plugin-system/package.json b/examples/plugin-system/package.json index 8c825acc..a6873e33 100644 --- a/examples/plugin-system/package.json +++ b/examples/plugin-system/package.json @@ -5,12 +5,13 @@ "scripts": { "check-types": "tsc --noEmit -p tsconfig.json", "dev": "tsx src/index.ts", - "verify-docs": "node scripts/verify-docs.mjs" + "verify-docs": "docs-gen verify --config docs-gen.json" }, "dependencies": { "secure-exec": "workspace:*" }, "devDependencies": { + "@secure-exec/docs-gen": "workspace:*", "@types/node": "^22.10.2", "tsx": "^4.19.2", "typescript": "^5.7.2" diff --git a/examples/plugin-system/scripts/verify-docs.mjs b/examples/plugin-system/scripts/verify-docs.mjs deleted file mode 100644 index 042f807b..00000000 --- a/examples/plugin-system/scripts/verify-docs.mjs +++ /dev/null @@ -1,41 +0,0 @@ -import { readFile } from "node:fs/promises"; -import path from "node:path"; -import { fileURLToPath } from "node:url"; - -const __dirname = path.dirname(fileURLToPath(import.meta.url)); -const repoRoot = path.resolve(__dirname, "../../.."); -const docsPath = path.join(repoRoot, "docs/use-cases/plugin-systems.mdx"); -const examplePath = path.join(repoRoot, "examples/plugin-system/src/index.ts"); - -function normalizeCode(source) { - const normalized = source.replace(/\r\n/g, "\n").replace(/^\n+|\n+$/g, ""); - const lines = normalized.split("\n"); - const nonEmptyLines = lines.filter((line) => line.trim().length > 0); - const minIndent = nonEmptyLines.reduce((indent, line) => { - const lineIndent = line.match(/^ */)?.[0].length ?? 0; - return Math.min(indent, lineIndent); - }, Number.POSITIVE_INFINITY); - - if (!Number.isFinite(minIndent) || minIndent === 0) { - return normalized; - } - - return lines.map((line) => line.slice(minIndent)).join("\n"); -} - -const docsSource = await readFile(docsPath, "utf8"); -const match = docsSource.match(/^\s*```ts Plugin Runner\n([\s\S]*?)^\s*```/m); -if (!match) { - console.error("Missing docs snippet for Plugin Runner"); - process.exit(1); -} - -const docSource = normalizeCode(match[1] ?? ""); -const fileSource = normalizeCode(await readFile(examplePath, "utf8")); - -if (docSource !== fileSource) { - console.error("Snippet mismatch for Plugin Runner"); - process.exit(1); -} - -console.log("Plugin system docs match example source."); diff --git a/examples/quickstart/README.md b/examples/quickstart/README.md deleted file mode 100644 index e4e51704..00000000 --- a/examples/quickstart/README.md +++ /dev/null @@ -1,10 +0,0 @@ -# Quickstart Examples - -These files mirror the examples in [docs/quickstart.mdx](../../docs/quickstart.mdx). - -Verify them with: - -```bash -pnpm --filter @secure-exec/example-quickstart check-types -pnpm --filter @secure-exec/example-quickstart verify-docs -``` diff --git a/examples/quickstart/scripts/verify-docs.mjs b/examples/quickstart/scripts/verify-docs.mjs deleted file mode 100644 index 405e8b9a..00000000 --- a/examples/quickstart/scripts/verify-docs.mjs +++ /dev/null @@ -1,79 +0,0 @@ -import { readFile } from "node:fs/promises"; -import path from "node:path"; -import { fileURLToPath } from "node:url"; - -const __dirname = path.dirname(fileURLToPath(import.meta.url)); -const repoRoot = path.resolve(__dirname, "../../.."); -const docsPath = path.join(repoRoot, "docs/quickstart.mdx"); - -const expectedFiles = new Map([ - ["Simple", "src/simple.ts"], - ["TypeScript", "src/typescript.ts"], - ["Logging", "src/logging.ts"], - ["Filesystem", "src/filesystem.ts"], - ["Fetch", "src/fetch.ts"], - ["HTTP Server (Hono)", "src/http-server-hono.ts"], - ["Run Command", "src/run-command.ts"], -]); - -function normalizeTitle(title) { - return title.trim().replace(/^"|"$/g, ""); -} - -function normalizeCode(source) { - const normalized = source.replace(/\r\n/g, "\n").replace(/^\n+|\n+$/g, ""); - const lines = normalized.split("\n"); - const nonEmptyLines = lines.filter((line) => line.trim().length > 0); - const minIndent = nonEmptyLines.reduce((indent, line) => { - const lineIndent = line.match(/^ */)?.[0].length ?? 0; - return Math.min(indent, lineIndent); - }, Number.POSITIVE_INFINITY); - - if (!Number.isFinite(minIndent) || minIndent === 0) { - return normalized; - } - - return lines.map((line) => line.slice(minIndent)).join("\n"); -} - -const docsSource = await readFile(docsPath, "utf8"); -const blockPattern = /^\s*```ts(?:\s+([^\n]+))?\n([\s\S]*?)^\s*```/gm; -const docBlocks = new Map(); - -for (const match of docsSource.matchAll(blockPattern)) { - const rawTitle = match[1]; - if (!rawTitle) { - continue; - } - - const title = normalizeTitle(rawTitle); - if (!expectedFiles.has(title)) { - continue; - } - - docBlocks.set(title, normalizeCode(match[2] ?? "")); -} - -const mismatches = []; - -for (const [title, relativePath] of expectedFiles) { - const filePath = path.join(path.dirname(__dirname), relativePath); - const fileSource = normalizeCode(await readFile(filePath, "utf8")); - const docSource = docBlocks.get(title); - - if (!docSource) { - mismatches.push(`Missing docs snippet for ${title}`); - continue; - } - - if (docSource !== fileSource) { - mismatches.push(`Snippet mismatch for ${title}`); - } -} - -if (mismatches.length > 0) { - console.error(mismatches.join("\n")); - process.exit(1); -} - -console.log("Quickstart docs match example sources."); diff --git a/examples/quickstart/src/fetch.ts b/examples/quickstart/src/fetch.ts deleted file mode 100644 index 554c0177..00000000 --- a/examples/quickstart/src/fetch.ts +++ /dev/null @@ -1,24 +0,0 @@ -import { - createKernel, - createInMemoryFileSystem, - createNodeRuntime, -} from "secure-exec"; - -const kernel = createKernel({ - filesystem: createInMemoryFileSystem(), - permissions: { - network: () => ({ allow: true }), - }, -}); -await kernel.mount(createNodeRuntime()); - -const result = await kernel.exec(`node -e " - (async () => { - const response = await fetch('https://example.com'); - console.log(response.status); - })(); -"`); - -console.log(result.stdout); // "200\n" - -await kernel.dispose(); diff --git a/examples/quickstart/src/filesystem.ts b/examples/quickstart/src/filesystem.ts deleted file mode 100644 index ef532e21..00000000 --- a/examples/quickstart/src/filesystem.ts +++ /dev/null @@ -1,25 +0,0 @@ -import { - createKernel, - createInMemoryFileSystem, - createNodeRuntime, -} from "secure-exec"; - -const filesystem = createInMemoryFileSystem(); -const kernel = createKernel({ - filesystem, - permissions: { - fs: () => ({ allow: true }), - }, -}); -await kernel.mount(createNodeRuntime()); - -await kernel.exec(`node -e " - const fs = require('node:fs'); - fs.mkdirSync('/workspace', { recursive: true }); - fs.writeFileSync('/workspace/hello.txt', 'hello from the sandbox'); -"`); - -const bytes = await filesystem.readFile("/workspace/hello.txt"); -console.log(new TextDecoder().decode(bytes)); // "hello from the sandbox" - -await kernel.dispose(); diff --git a/examples/quickstart/src/http-server-hono.ts b/examples/quickstart/src/http-server-hono.ts deleted file mode 100644 index 72496098..00000000 --- a/examples/quickstart/src/http-server-hono.ts +++ /dev/null @@ -1,50 +0,0 @@ -import { - NodeRuntime, - NodeFileSystem, - allowAll, - createNodeDriver, - createNodeRuntimeDriverFactory, -} from "secure-exec"; - -const port = 3000; -const runtime = new NodeRuntime({ - systemDriver: createNodeDriver({ - filesystem: new NodeFileSystem(), - useDefaultNetwork: true, - permissions: allowAll, - }), - runtimeDriverFactory: createNodeRuntimeDriverFactory(), -}); - -// Start a Hono server inside the sandbox -const execPromise = runtime.exec(` - (async () => { - const { Hono } = require("hono"); - const { serve } = require("@hono/node-server"); - - const app = new Hono(); - app.get("/", (c) => c.text("hello from hono")); - - serve({ fetch: app.fetch, port: ${port}, hostname: "127.0.0.1" }); - await new Promise(() => {}); - })(); -`); - -// Wait for the server to be ready, then fetch from the host -const url = "http://127.0.0.1:" + port + "/"; -for (let i = 0; i < 50; i++) { - try { - const r = await runtime.network.fetch(url, { method: "GET" }); - if (r.status === 200) break; - } catch { - await new Promise((r) => setTimeout(r, 100)); - } -} - -const response = await runtime.network.fetch(url, { method: "GET" }); - -console.log(response.status); // 200 -console.log(response.body); // "hello from hono" - -await runtime.terminate(); -await execPromise.catch(() => {}); diff --git a/examples/quickstart/src/logging.ts b/examples/quickstart/src/logging.ts deleted file mode 100644 index 70a6ab1a..00000000 --- a/examples/quickstart/src/logging.ts +++ /dev/null @@ -1,19 +0,0 @@ -import { - createKernel, - createInMemoryFileSystem, - createNodeRuntime, -} from "secure-exec"; - -const kernel = createKernel({ - filesystem: createInMemoryFileSystem(), -}); -await kernel.mount(createNodeRuntime()); - -const result = await kernel.exec( - "node -e \"console.log('hello from secure-exec')\"" -); - -console.log(result.stdout); // "hello from secure-exec\n" -console.log(result.stderr); // "" - -await kernel.dispose(); diff --git a/examples/quickstart/src/run-command.ts b/examples/quickstart/src/run-command.ts deleted file mode 100644 index 4baabdfe..00000000 --- a/examples/quickstart/src/run-command.ts +++ /dev/null @@ -1,22 +0,0 @@ -import { - createKernel, - createInMemoryFileSystem, - createNodeRuntime, -} from "secure-exec"; - -const kernel = createKernel({ - filesystem: createInMemoryFileSystem(), - permissions: { - childProcess: () => ({ allow: true }), - }, -}); -await kernel.mount(createNodeRuntime()); - -const result = await kernel.exec(`node -e " - const { execSync } = require('node:child_process'); - console.log(execSync('node --version', { encoding: 'utf8' }).trim()); -"`); - -console.log(result.stdout); // e.g. "v22.x.x\n" - -await kernel.dispose(); diff --git a/examples/quickstart/src/simple.ts b/examples/quickstart/src/simple.ts deleted file mode 100644 index 629a56bd..00000000 --- a/examples/quickstart/src/simple.ts +++ /dev/null @@ -1,18 +0,0 @@ -import { - createKernel, - createInMemoryFileSystem, - createNodeRuntime, -} from "secure-exec"; - -const kernel = createKernel({ - filesystem: createInMemoryFileSystem(), -}); -await kernel.mount(createNodeRuntime()); - -const result = await kernel.exec( - "node -e \"console.log('hello from secure-exec')\"" -); - -console.log(result.stdout); // "hello from secure-exec\n" - -await kernel.dispose(); diff --git a/examples/quickstart/src/typescript.ts b/examples/quickstart/src/typescript.ts deleted file mode 100644 index 02ba9439..00000000 --- a/examples/quickstart/src/typescript.ts +++ /dev/null @@ -1,53 +0,0 @@ -import { - NodeRuntime, - createNodeDriver, - createNodeRuntimeDriverFactory, -} from "secure-exec"; -import { createTypeScriptTools } from "@secure-exec/typescript"; - -const systemDriver = createNodeDriver(); -const runtimeDriverFactory = createNodeRuntimeDriverFactory(); - -const runtime = new NodeRuntime({ - systemDriver, - runtimeDriverFactory, -}); -const ts = createTypeScriptTools({ - systemDriver, - runtimeDriverFactory, -}); - -const sourceText = ` - const message: string = "hello from typescript"; - module.exports = { message }; -`; - -const typecheck = await ts.typecheckSource({ - sourceText, - filePath: "/root/example.ts", - compilerOptions: { - module: "commonjs", - target: "es2022", - }, -}); - -if (!typecheck.success) { - throw new Error(typecheck.diagnostics.map((d) => d.message).join("\n")); -} - -const compiled = await ts.compileSource({ - sourceText, - filePath: "/root/example.ts", - compilerOptions: { - module: "commonjs", - target: "es2022", - }, -}); - -const result = await runtime.run<{ message: string }>( - compiled.outputText ?? "", - "/root/example.js" -); - -const message = result.exports?.message; -// "hello from typescript" diff --git a/packages/core/isolate-runtime/src/inject/require-setup.ts b/packages/core/isolate-runtime/src/inject/require-setup.ts index 644dbe26..9f4cc944 100644 --- a/packages/core/isolate-runtime/src/inject/require-setup.ts +++ b/packages/core/isolate-runtime/src/inject/require-setup.ts @@ -91,6 +91,11 @@ __requireExposeCustomGlobal('structuredClone', structuredClonePolyfill); } + if (typeof globalThis.SharedArrayBuffer === 'undefined') { + globalThis.SharedArrayBuffer = ArrayBuffer; + __requireExposeCustomGlobal('SharedArrayBuffer', ArrayBuffer); + } + if (typeof globalThis.btoa !== 'function') { __requireExposeCustomGlobal('btoa', function btoa(input) { return Buffer.from(String(input), 'binary').toString('base64'); @@ -221,6 +226,9 @@ result.formatWithOptions = function formatWithOptions(inspectOptions, ...args) { return result.format.apply(null, args); }; + } + + if (name === 'util') { return result; } @@ -315,45 +323,180 @@ } if (name === 'crypto') { + // Avoid bare `require` here so built dist bundles don't rewrite it to + // an ESM helper that throws before the sandbox installs globalThis.require. + var _runtimeRequire = globalThis.require; + var _streamModule = _runtimeRequire && _runtimeRequire('stream'); + var _utilModule = _runtimeRequire && _runtimeRequire('util'); + var _Transform = _streamModule && _streamModule.Transform; + var _inherits = _utilModule && _utilModule.inherits; + + function createCryptoRangeError(name, message) { + var error = new RangeError(message); + error.code = 'ERR_OUT_OF_RANGE'; + error.name = 'RangeError'; + return error; + } + + function createCryptoError(code, message) { + var error = new Error(message); + error.code = code; + return error; + } + + function encodeCryptoResult(buffer, encoding) { + if (!encoding || encoding === 'buffer') return buffer; + return buffer.toString(encoding); + } + + function isSharedArrayBufferInstance(value) { + return typeof SharedArrayBuffer !== 'undefined' && + value instanceof SharedArrayBuffer; + } + + function isBinaryLike(value) { + return Buffer.isBuffer(value) || + ArrayBuffer.isView(value) || + value instanceof ArrayBuffer || + isSharedArrayBufferInstance(value); + } + + function normalizeByteSource(value, name, options) { + var allowNull = options && options.allowNull; + if (allowNull && value === null) { + return null; + } + if (typeof value === 'string') { + return Buffer.from(value, 'utf8'); + } + if (Buffer.isBuffer(value)) { + return Buffer.from(value); + } + if (ArrayBuffer.isView(value)) { + return Buffer.from(value.buffer, value.byteOffset, value.byteLength); + } + if (value instanceof ArrayBuffer || isSharedArrayBufferInstance(value)) { + return Buffer.from(value); + } + throw createInvalidArgTypeError( + name, + 'of type string or an instance of ArrayBuffer, Buffer, TypedArray, or DataView', + value, + ); + } + + function serializeCipherBridgeOptions(options) { + if (!options) { + return ''; + } + var serialized = {}; + if (options.authTagLength !== undefined) { + serialized.authTagLength = options.authTagLength; + } + if (options.authTag) { + serialized.authTag = options.authTag.toString('base64'); + } + if (options.aad) { + serialized.aad = options.aad.toString('base64'); + } + if (options.aadOptions !== undefined) { + serialized.aadOptions = options.aadOptions; + } + if (options.autoPadding !== undefined) { + serialized.autoPadding = options.autoPadding; + } + if (options.validateOnly !== undefined) { + serialized.validateOnly = options.validateOnly; + } + return JSON.stringify(serialized); + } + // Overlay host-backed createHash on top of crypto-browserify polyfill if (typeof _cryptoHashDigest !== 'undefined') { - function SandboxHash(algorithm) { + function SandboxHash(algorithm, options) { + if (!(this instanceof SandboxHash)) { + return new SandboxHash(algorithm, options); + } + if (!_Transform || !_inherits) { + throw new Error('stream.Transform is required for crypto.Hash'); + } + if (typeof algorithm !== 'string') { + throw createInvalidArgTypeError('algorithm', 'of type string', algorithm); + } + _Transform.call(this, options); this._algorithm = algorithm; this._chunks = []; + this._finalized = false; + this._cachedDigest = null; + this._allowCachedDigest = false; } + _inherits(SandboxHash, _Transform); SandboxHash.prototype.update = function update(data, inputEncoding) { + if (this._finalized) { + throw createCryptoError('ERR_CRYPTO_HASH_FINALIZED', 'Digest already called'); + } if (typeof data === 'string') { this._chunks.push(Buffer.from(data, inputEncoding || 'utf8')); - } else { + } else if (isBinaryLike(data)) { this._chunks.push(Buffer.from(data)); + } else { + throw createInvalidArgTypeError( + 'data', + 'one of type string, Buffer, TypedArray, or DataView', + data, + ); } return this; }; - SandboxHash.prototype.digest = function digest(encoding) { + SandboxHash.prototype._finishDigest = function _finishDigest() { + if (this._cachedDigest) { + return this._cachedDigest; + } var combined = Buffer.concat(this._chunks); var resultBase64 = _cryptoHashDigest.applySync(undefined, [ this._algorithm, combined.toString('base64'), ]); - var resultBuffer = Buffer.from(resultBase64, 'base64'); - if (!encoding || encoding === 'buffer') return resultBuffer; - return resultBuffer.toString(encoding); + this._cachedDigest = Buffer.from(resultBase64, 'base64'); + this._finalized = true; + return this._cachedDigest; + }; + SandboxHash.prototype.digest = function digest(encoding) { + if (this._finalized && !this._allowCachedDigest) { + throw createCryptoError('ERR_CRYPTO_HASH_FINALIZED', 'Digest already called'); + } + var resultBuffer = this._finishDigest(); + this._allowCachedDigest = false; + return encodeCryptoResult(resultBuffer, encoding); }; SandboxHash.prototype.copy = function copy() { + if (this._finalized) { + throw createCryptoError('ERR_CRYPTO_HASH_FINALIZED', 'Digest already called'); + } var c = new SandboxHash(this._algorithm); c._chunks = this._chunks.slice(); return c; }; - // Minimal stream interface - SandboxHash.prototype.write = function write(data, encoding) { - this.update(data, encoding); - return true; + SandboxHash.prototype._transform = function _transform(chunk, encoding, callback) { + try { + this.update(chunk, encoding === 'buffer' ? undefined : encoding); + callback(); + } catch (error) { + callback(normalizeCryptoBridgeError(error)); + } }; - SandboxHash.prototype.end = function end(data, encoding) { - if (data) this.update(data, encoding); + SandboxHash.prototype._flush = function _flush(callback) { + try { + var output = this._finishDigest(); + this._allowCachedDigest = true; + this.push(output); + callback(); + } catch (error) { + callback(normalizeCryptoBridgeError(error)); + } }; - result.createHash = function createHash(algorithm) { - return new SandboxHash(algorithm); + result.createHash = function createHash(algorithm, options) { + return new SandboxHash(algorithm, options); }; result.Hash = SandboxHash; } @@ -364,6 +507,8 @@ this._algorithm = algorithm; if (typeof key === 'string') { this._key = Buffer.from(key, 'utf8'); + } else if (key && typeof key === 'object' && key._raw !== undefined) { + this._key = Buffer.from(key._raw, 'base64'); } else if (key && typeof key === 'object' && key._pem !== undefined) { // SandboxKeyObject — extract underlying key material this._key = Buffer.from(key._pem, 'utf8'); @@ -533,24 +678,93 @@ // Overlay host-backed pbkdf2/pbkdf2Sync if (typeof _cryptoPbkdf2 !== 'undefined') { + function createPbkdf2ArgTypeError(name, value) { + var received; + if (value == null) { + received = ' Received ' + value; + } else if (typeof value === 'object') { + received = value.constructor && value.constructor.name ? + ' Received an instance of ' + value.constructor.name : + ' Received [object Object]'; + } else { + var inspected = typeof value === 'string' ? "'" + value + "'" : String(value); + received = ' Received type ' + typeof value + ' (' + inspected + ')'; + } + var error = new TypeError('The "' + name + '" argument must be of type number.' + received); + error.code = 'ERR_INVALID_ARG_TYPE'; + return error; + } + + function validatePbkdf2Args(password, salt, iterations, keylen, digest) { + var pwBuf = normalizeByteSource(password, 'password'); + var saltBuf = normalizeByteSource(salt, 'salt'); + if (typeof iterations !== 'number') { + throw createPbkdf2ArgTypeError('iterations', iterations); + } + if (!Number.isInteger(iterations)) { + throw createCryptoRangeError( + 'iterations', + 'The value of "iterations" is out of range. It must be an integer. Received ' + iterations, + ); + } + if (iterations < 1 || iterations > 2147483647) { + throw createCryptoRangeError( + 'iterations', + 'The value of "iterations" is out of range. It must be >= 1 && <= 2147483647. Received ' + iterations, + ); + } + if (typeof keylen !== 'number') { + throw createPbkdf2ArgTypeError('keylen', keylen); + } + if (!Number.isInteger(keylen)) { + throw createCryptoRangeError( + 'keylen', + 'The value of "keylen" is out of range. It must be an integer. Received ' + keylen, + ); + } + if (keylen < 0 || keylen > 2147483647) { + throw createCryptoRangeError( + 'keylen', + 'The value of "keylen" is out of range. It must be >= 0 && <= 2147483647. Received ' + keylen, + ); + } + if (typeof digest !== 'string') { + throw createInvalidArgTypeError('digest', 'of type string', digest); + } + return { + password: pwBuf, + salt: saltBuf, + }; + } + result.pbkdf2Sync = function pbkdf2Sync(password, salt, iterations, keylen, digest) { - var pwBuf = typeof password === 'string' ? Buffer.from(password, 'utf8') : Buffer.from(password); - var saltBuf = typeof salt === 'string' ? Buffer.from(salt, 'utf8') : Buffer.from(salt); - var resultBase64 = _cryptoPbkdf2.applySync(undefined, [ - pwBuf.toString('base64'), - saltBuf.toString('base64'), - iterations, - keylen, - digest, - ]); - return Buffer.from(resultBase64, 'base64'); + var normalized = validatePbkdf2Args(password, salt, iterations, keylen, digest); + try { + var resultBase64 = _cryptoPbkdf2.applySync(undefined, [ + normalized.password.toString('base64'), + normalized.salt.toString('base64'), + iterations, + keylen, + digest, + ]); + return Buffer.from(resultBase64, 'base64'); + } catch (error) { + throw normalizeCryptoBridgeError(error); + } }; result.pbkdf2 = function pbkdf2(password, salt, iterations, keylen, digest, callback) { + if (typeof digest === 'function' && callback === undefined) { + callback = digest; + digest = undefined; + } + if (typeof callback !== 'function') { + throw createInvalidArgTypeError('callback', 'of type function', callback); + } try { var derived = result.pbkdf2Sync(password, salt, iterations, keylen, digest); - callback(null, derived); + scheduleCryptoCallback(callback, [null, derived]); } catch (e) { - callback(e); + throw normalizeCryptoBridgeError(e); } }; } @@ -600,45 +814,94 @@ if (typeof _cryptoCipheriv !== 'undefined') { var _useSessionCipher = typeof _cryptoCipherivCreate !== 'undefined'; - function SandboxCipher(algorithm, key, iv) { + function SandboxCipher(algorithm, key, iv, options) { + if (!(this instanceof SandboxCipher)) { + return new SandboxCipher(algorithm, key, iv, options); + } + if (typeof algorithm !== 'string') { + throw createInvalidArgTypeError('cipher', 'of type string', algorithm); + } + _Transform.call(this); this._algorithm = algorithm; - this._key = typeof key === 'string' ? Buffer.from(key, 'utf8') : Buffer.from(key); - this._iv = typeof iv === 'string' ? Buffer.from(iv, 'utf8') : Buffer.from(iv); + this._key = normalizeByteSource(key, 'key'); + this._iv = normalizeByteSource(iv, 'iv', { allowNull: true }); + this._options = options || undefined; this._authTag = null; this._finalized = false; - if (_useSessionCipher) { - this._sessionId = _cryptoCipherivCreate.applySync(undefined, [ - 'cipher', algorithm, + this._sessionCreated = false; + this._sessionId = undefined; + this._aad = null; + this._aadOptions = undefined; + this._autoPadding = undefined; + this._chunks = []; + this._bufferedMode = !_useSessionCipher || !!options; + if (!this._bufferedMode) { + this._ensureSession(); + } else if (!options) { + _cryptoCipheriv.applySync(undefined, [ + this._algorithm, this._key.toString('base64'), - this._iv.toString('base64'), + this._iv === null ? null : this._iv.toString('base64'), '', + serializeCipherBridgeOptions({ validateOnly: true }), ]); - } else { - this._chunks = []; } } + _inherits(SandboxCipher, _Transform); + SandboxCipher.prototype._ensureSession = function _ensureSession() { + if (this._bufferedMode || this._sessionCreated) { + return; + } + this._sessionCreated = true; + this._sessionId = _cryptoCipherivCreate.applySync(undefined, [ + 'cipher', + this._algorithm, + this._key.toString('base64'), + this._iv === null ? null : this._iv.toString('base64'), + serializeCipherBridgeOptions(this._getBridgeOptions()), + ]); + }; + SandboxCipher.prototype._getBridgeOptions = function _getBridgeOptions() { + var options = {}; + if (this._options && this._options.authTagLength !== undefined) { + options.authTagLength = this._options.authTagLength; + } + if (this._aad) { + options.aad = this._aad; + } + if (this._aadOptions !== undefined) { + options.aadOptions = this._aadOptions; + } + if (this._autoPadding !== undefined) { + options.autoPadding = this._autoPadding; + } + return Object.keys(options).length === 0 ? null : options; + }; SandboxCipher.prototype.update = function update(data, inputEncoding, outputEncoding) { + if (this._finalized) { + throw new Error('Attempting to call update() after final()'); + } var buf; if (typeof data === 'string') { buf = Buffer.from(data, inputEncoding || 'utf8'); } else { - buf = Buffer.from(data); + buf = normalizeByteSource(data, 'data'); } - if (_useSessionCipher) { + if (!this._bufferedMode) { + this._ensureSession(); var resultBase64 = _cryptoCipherivUpdate.applySync(undefined, [this._sessionId, buf.toString('base64')]); var resultBuffer = Buffer.from(resultBase64, 'base64'); - if (outputEncoding && outputEncoding !== 'buffer') return resultBuffer.toString(outputEncoding); - return resultBuffer; + return encodeCryptoResult(resultBuffer, outputEncoding); } this._chunks.push(buf); - if (outputEncoding && outputEncoding !== 'buffer') return ''; - return Buffer.alloc(0); + return encodeCryptoResult(Buffer.alloc(0), outputEncoding); }; SandboxCipher.prototype.final = function final(outputEncoding) { if (this._finalized) throw new Error('Attempting to call final() after already finalized'); this._finalized = true; var parsed; - if (_useSessionCipher) { + if (!this._bufferedMode) { + this._ensureSession(); var resultJson = _cryptoCipherivFinal.applySync(undefined, [this._sessionId]); parsed = JSON.parse(resultJson); } else { @@ -646,8 +909,9 @@ var resultJson2 = _cryptoCipheriv.applySync(undefined, [ this._algorithm, this._key.toString('base64'), - this._iv.toString('base64'), + this._iv === null ? null : this._iv.toString('base64'), combined.toString('base64'), + serializeCipherBridgeOptions(this._getBridgeOptions()), ]); parsed = JSON.parse(resultJson2); } @@ -655,72 +919,140 @@ this._authTag = Buffer.from(parsed.authTag, 'base64'); } var resultBuffer = Buffer.from(parsed.data, 'base64'); - if (outputEncoding && outputEncoding !== 'buffer') return resultBuffer.toString(outputEncoding); - return resultBuffer; + return encodeCryptoResult(resultBuffer, outputEncoding); }; SandboxCipher.prototype.getAuthTag = function getAuthTag() { if (!this._finalized) throw new Error('Cannot call getAuthTag before final()'); - if (!this._authTag) throw new Error('Auth tag is only available for GCM ciphers'); + if (!this._authTag) throw new Error('Auth tag is not available'); return this._authTag; }; - SandboxCipher.prototype.setAAD = function setAAD() { return this; }; - SandboxCipher.prototype.setAutoPadding = function setAutoPadding() { return this; }; - result.createCipheriv = function createCipheriv(algorithm, key, iv) { - return new SandboxCipher(algorithm, key, iv); + SandboxCipher.prototype.setAAD = function setAAD(aad, options) { + this._bufferedMode = true; + this._aad = normalizeByteSource(aad, 'buffer'); + this._aadOptions = options; + return this; + }; + SandboxCipher.prototype.setAutoPadding = function setAutoPadding(autoPadding) { + this._bufferedMode = true; + this._autoPadding = autoPadding !== false; + return this; + }; + SandboxCipher.prototype._transform = function _transform(chunk, encoding, callback) { + try { + var output = this.update(chunk, encoding === 'buffer' ? undefined : encoding); + if (output.length) { + this.push(output); + } + callback(); + } catch (error) { + callback(normalizeCryptoBridgeError(error)); + } + }; + SandboxCipher.prototype._flush = function _flush(callback) { + try { + var output = this.final(); + if (output.length) { + this.push(output); + } + callback(); + } catch (error) { + callback(normalizeCryptoBridgeError(error)); + } + }; + result.createCipheriv = function createCipheriv(algorithm, key, iv, options) { + return new SandboxCipher(algorithm, key, iv, options); }; result.Cipheriv = SandboxCipher; } if (typeof _cryptoDecipheriv !== 'undefined') { - function SandboxDecipher(algorithm, key, iv) { + function SandboxDecipher(algorithm, key, iv, options) { + if (!(this instanceof SandboxDecipher)) { + return new SandboxDecipher(algorithm, key, iv, options); + } + if (typeof algorithm !== 'string') { + throw createInvalidArgTypeError('cipher', 'of type string', algorithm); + } + _Transform.call(this); this._algorithm = algorithm; - this._key = typeof key === 'string' ? Buffer.from(key, 'utf8') : Buffer.from(key); - this._iv = typeof iv === 'string' ? Buffer.from(iv, 'utf8') : Buffer.from(iv); + this._key = normalizeByteSource(key, 'key'); + this._iv = normalizeByteSource(iv, 'iv', { allowNull: true }); + this._options = options || undefined; this._authTag = null; this._finalized = false; this._sessionCreated = false; - if (!_useSessionCipher) { - this._chunks = []; + this._aad = null; + this._aadOptions = undefined; + this._autoPadding = undefined; + this._chunks = []; + this._bufferedMode = !_useSessionCipher || !!options; + if (!this._bufferedMode) { + this._ensureSession(); + } else if (!options) { + _cryptoDecipheriv.applySync(undefined, [ + this._algorithm, + this._key.toString('base64'), + this._iv === null ? null : this._iv.toString('base64'), + '', + serializeCipherBridgeOptions({ validateOnly: true }), + ]); } } + _inherits(SandboxDecipher, _Transform); SandboxDecipher.prototype._ensureSession = function _ensureSession() { - if (_useSessionCipher && !this._sessionCreated) { + if (!this._bufferedMode && !this._sessionCreated) { this._sessionCreated = true; - var options = {}; - if (this._authTag) { - options.authTag = this._authTag.toString('base64'); - } this._sessionId = _cryptoCipherivCreate.applySync(undefined, [ 'decipher', this._algorithm, this._key.toString('base64'), - this._iv.toString('base64'), - JSON.stringify(options), + this._iv === null ? null : this._iv.toString('base64'), + serializeCipherBridgeOptions(this._getBridgeOptions()), ]); } }; + SandboxDecipher.prototype._getBridgeOptions = function _getBridgeOptions() { + var options = {}; + if (this._options && this._options.authTagLength !== undefined) { + options.authTagLength = this._options.authTagLength; + } + if (this._authTag) { + options.authTag = this._authTag; + } + if (this._aad) { + options.aad = this._aad; + } + if (this._aadOptions !== undefined) { + options.aadOptions = this._aadOptions; + } + if (this._autoPadding !== undefined) { + options.autoPadding = this._autoPadding; + } + return Object.keys(options).length === 0 ? null : options; + }; SandboxDecipher.prototype.update = function update(data, inputEncoding, outputEncoding) { + if (this._finalized) { + throw new Error('Attempting to call update() after final()'); + } var buf; if (typeof data === 'string') { buf = Buffer.from(data, inputEncoding || 'utf8'); } else { - buf = Buffer.from(data); + buf = normalizeByteSource(data, 'data'); } - if (_useSessionCipher) { + if (!this._bufferedMode) { this._ensureSession(); var resultBase64 = _cryptoCipherivUpdate.applySync(undefined, [this._sessionId, buf.toString('base64')]); var resultBuffer = Buffer.from(resultBase64, 'base64'); - if (outputEncoding && outputEncoding !== 'buffer') return resultBuffer.toString(outputEncoding); - return resultBuffer; + return encodeCryptoResult(resultBuffer, outputEncoding); } this._chunks.push(buf); - if (outputEncoding && outputEncoding !== 'buffer') return ''; - return Buffer.alloc(0); + return encodeCryptoResult(Buffer.alloc(0), outputEncoding); }; SandboxDecipher.prototype.final = function final(outputEncoding) { if (this._finalized) throw new Error('Attempting to call final() after already finalized'); this._finalized = true; var resultBuffer; - if (_useSessionCipher) { + if (!this._bufferedMode) { this._ensureSession(); var resultJson = _cryptoCipherivFinal.applySync(undefined, [this._sessionId]); var parsed = JSON.parse(resultJson); @@ -728,29 +1060,57 @@ } else { var combined = Buffer.concat(this._chunks); var options = {}; - if (this._authTag) { - options.authTag = this._authTag.toString('base64'); - } var resultBase64 = _cryptoDecipheriv.applySync(undefined, [ this._algorithm, this._key.toString('base64'), - this._iv.toString('base64'), + this._iv === null ? null : this._iv.toString('base64'), combined.toString('base64'), - JSON.stringify(options), + serializeCipherBridgeOptions(this._getBridgeOptions()), ]); resultBuffer = Buffer.from(resultBase64, 'base64'); } - if (outputEncoding && outputEncoding !== 'buffer') return resultBuffer.toString(outputEncoding); - return resultBuffer; + return encodeCryptoResult(resultBuffer, outputEncoding); }; SandboxDecipher.prototype.setAuthTag = function setAuthTag(tag) { - this._authTag = typeof tag === 'string' ? Buffer.from(tag, 'base64') : Buffer.from(tag); + this._bufferedMode = true; + this._authTag = typeof tag === 'string' ? Buffer.from(tag, 'base64') : normalizeByteSource(tag, 'buffer'); + return this; + }; + SandboxDecipher.prototype.setAAD = function setAAD(aad, options) { + this._bufferedMode = true; + this._aad = normalizeByteSource(aad, 'buffer'); + this._aadOptions = options; + return this; + }; + SandboxDecipher.prototype.setAutoPadding = function setAutoPadding(autoPadding) { + this._bufferedMode = true; + this._autoPadding = autoPadding !== false; return this; }; - SandboxDecipher.prototype.setAAD = function setAAD() { return this; }; - SandboxDecipher.prototype.setAutoPadding = function setAutoPadding() { return this; }; - result.createDecipheriv = function createDecipheriv(algorithm, key, iv) { - return new SandboxDecipher(algorithm, key, iv); + SandboxDecipher.prototype._transform = function _transform(chunk, encoding, callback) { + try { + var output = this.update(chunk, encoding === 'buffer' ? undefined : encoding); + if (output.length) { + this.push(output); + } + callback(); + } catch (error) { + callback(normalizeCryptoBridgeError(error)); + } + }; + SandboxDecipher.prototype._flush = function _flush(callback) { + try { + var output = this.final(); + if (output.length) { + this.push(output); + } + callback(); + } catch (error) { + callback(normalizeCryptoBridgeError(error)); + } + }; + result.createDecipheriv = function createDecipheriv(algorithm, key, iv, options) { + return new SandboxDecipher(algorithm, key, iv, options); }; result.Decipheriv = SandboxDecipher; } @@ -759,21 +1119,16 @@ if (typeof _cryptoSign !== 'undefined') { result.sign = function sign(algorithm, data, key) { var dataBuf = typeof data === 'string' ? Buffer.from(data, 'utf8') : Buffer.from(data); - var keyPem; - if (typeof key === 'string') { - keyPem = key; - } else if (key && typeof key === 'object' && key._pem) { - keyPem = key._pem; - } else if (Buffer.isBuffer(key)) { - keyPem = key.toString('utf8'); - } else { - keyPem = String(key); + var sigBase64; + try { + sigBase64 = _cryptoSign.applySync(undefined, [ + algorithm === undefined ? null : algorithm, + dataBuf.toString('base64'), + JSON.stringify(serializeBridgeValue(key)), + ]); + } catch (error) { + throw normalizeCryptoBridgeError(error); } - var sigBase64 = _cryptoSign.applySync(undefined, [ - algorithm, - dataBuf.toString('base64'), - keyPem, - ]); return Buffer.from(sigBase64, 'base64'); }; } @@ -781,139 +1136,912 @@ if (typeof _cryptoVerify !== 'undefined') { result.verify = function verify(algorithm, data, key, signature) { var dataBuf = typeof data === 'string' ? Buffer.from(data, 'utf8') : Buffer.from(data); - var keyPem; - if (typeof key === 'string') { - keyPem = key; - } else if (key && typeof key === 'object' && key._pem) { - keyPem = key._pem; - } else if (Buffer.isBuffer(key)) { - keyPem = key.toString('utf8'); - } else { - keyPem = String(key); - } var sigBuf = typeof signature === 'string' ? Buffer.from(signature, 'base64') : Buffer.from(signature); - return _cryptoVerify.applySync(undefined, [ - algorithm, - dataBuf.toString('base64'), - keyPem, - sigBuf.toString('base64'), + try { + return _cryptoVerify.applySync(undefined, [ + algorithm === undefined ? null : algorithm, + dataBuf.toString('base64'), + JSON.stringify(serializeBridgeValue(key)), + sigBuf.toString('base64'), + ]); + } catch (error) { + throw normalizeCryptoBridgeError(error); + } + }; + } + + if (typeof _cryptoAsymmetricOp !== 'undefined') { + function asymmetricBridgeCall(operation, key, data) { + var dataBuf = toRawBuffer(data); + var resultBase64; + try { + resultBase64 = _cryptoAsymmetricOp.applySync(undefined, [ + operation, + JSON.stringify(serializeBridgeValue(key)), + dataBuf.toString('base64'), + ]); + } catch (error) { + throw normalizeCryptoBridgeError(error); + } + return Buffer.from(resultBase64, 'base64'); + } + + result.publicEncrypt = function publicEncrypt(key, data) { + return asymmetricBridgeCall('publicEncrypt', key, data); + }; + + result.privateDecrypt = function privateDecrypt(key, data) { + return asymmetricBridgeCall('privateDecrypt', key, data); + }; + + result.privateEncrypt = function privateEncrypt(key, data) { + return asymmetricBridgeCall('privateEncrypt', key, data); + }; + + result.publicDecrypt = function publicDecrypt(key, data) { + return asymmetricBridgeCall('publicDecrypt', key, data); + }; + } + + if ( + typeof _cryptoDiffieHellmanSessionCreate !== 'undefined' && + typeof _cryptoDiffieHellmanSessionCall !== 'undefined' + ) { + function serializeDhKeyObject(value) { + if (value.type === 'secret') { + return { + type: 'secret', + raw: Buffer.from(value.export()).toString('base64'), + }; + } + return { + type: value.type, + pem: value._pem || value.export({ + type: value.type === 'private' ? 'pkcs8' : 'spki', + format: 'pem', + }), + }; + } + + function serializeDhValue(value) { + if ( + value === null || + typeof value === 'string' || + typeof value === 'number' || + typeof value === 'boolean' + ) { + return value; + } + if (Buffer.isBuffer(value)) { + return { + __type: 'buffer', + value: Buffer.from(value).toString('base64'), + }; + } + if (value instanceof ArrayBuffer) { + return { + __type: 'buffer', + value: Buffer.from(new Uint8Array(value)).toString('base64'), + }; + } + if (ArrayBuffer.isView(value)) { + return { + __type: 'buffer', + value: Buffer.from(value.buffer, value.byteOffset, value.byteLength).toString('base64'), + }; + } + if (typeof value === 'bigint') { + return { + __type: 'bigint', + value: value.toString(), + }; + } + if ( + value && + typeof value === 'object' && + (value.type === 'public' || value.type === 'private' || value.type === 'secret') && + typeof value.export === 'function' + ) { + return { + __type: 'keyObject', + value: serializeDhKeyObject(value), + }; + } + if (Array.isArray(value)) { + return value.map(serializeDhValue); + } + if (value && typeof value === 'object') { + var output = {}; + var keys = Object.keys(value); + for (var i = 0; i < keys.length; i++) { + if (value[keys[i]] !== undefined) { + output[keys[i]] = serializeDhValue(value[keys[i]]); + } + } + return output; + } + return String(value); + } + + function restoreDhValue(value) { + if (!value || typeof value !== 'object') { + return value; + } + if (value.__type === 'buffer') { + return Buffer.from(value.value, 'base64'); + } + if (value.__type === 'bigint') { + return BigInt(value.value); + } + if (Array.isArray(value)) { + return value.map(restoreDhValue); + } + var output = {}; + var keys = Object.keys(value); + for (var i = 0; i < keys.length; i++) { + output[keys[i]] = restoreDhValue(value[keys[i]]); + } + return output; + } + + function createDhSession(type, name, argsLike) { + var args = []; + for (var i = 0; i < argsLike.length; i++) { + args.push(serializeDhValue(argsLike[i])); + } + return _cryptoDiffieHellmanSessionCreate.applySync(undefined, [ + JSON.stringify({ + type: type, + name: name, + args: args, + }), ]); + } + + function callDhSession(sessionId, method, argsLike) { + var args = []; + for (var i = 0; i < argsLike.length; i++) { + args.push(serializeDhValue(argsLike[i])); + } + var response = JSON.parse(_cryptoDiffieHellmanSessionCall.applySync(undefined, [ + sessionId, + JSON.stringify({ + method: method, + args: args, + }), + ])); + if (response && response.hasResult === false) { + return undefined; + } + return restoreDhValue(response && response.result); + } + + function SandboxDiffieHellman(sessionId) { + this._sessionId = sessionId; + } + + Object.defineProperty(SandboxDiffieHellman.prototype, 'verifyError', { + get: function getVerifyError() { + return callDhSession(this._sessionId, 'verifyError', []); + }, + }); + + SandboxDiffieHellman.prototype.generateKeys = function generateKeys(encoding) { + if (arguments.length === 0) return callDhSession(this._sessionId, 'generateKeys', []); + return callDhSession(this._sessionId, 'generateKeys', [encoding]); + }; + SandboxDiffieHellman.prototype.computeSecret = function computeSecret(key, inputEncoding, outputEncoding) { + return callDhSession(this._sessionId, 'computeSecret', Array.prototype.slice.call(arguments)); + }; + SandboxDiffieHellman.prototype.getPrime = function getPrime(encoding) { + if (arguments.length === 0) return callDhSession(this._sessionId, 'getPrime', []); + return callDhSession(this._sessionId, 'getPrime', [encoding]); + }; + SandboxDiffieHellman.prototype.getGenerator = function getGenerator(encoding) { + if (arguments.length === 0) return callDhSession(this._sessionId, 'getGenerator', []); + return callDhSession(this._sessionId, 'getGenerator', [encoding]); + }; + SandboxDiffieHellman.prototype.getPublicKey = function getPublicKey(encoding) { + if (arguments.length === 0) return callDhSession(this._sessionId, 'getPublicKey', []); + return callDhSession(this._sessionId, 'getPublicKey', [encoding]); }; + SandboxDiffieHellman.prototype.getPrivateKey = function getPrivateKey(encoding) { + if (arguments.length === 0) return callDhSession(this._sessionId, 'getPrivateKey', []); + return callDhSession(this._sessionId, 'getPrivateKey', [encoding]); + }; + SandboxDiffieHellman.prototype.setPublicKey = function setPublicKey(key, encoding) { + return callDhSession(this._sessionId, 'setPublicKey', Array.prototype.slice.call(arguments)); + }; + SandboxDiffieHellman.prototype.setPrivateKey = function setPrivateKey(key, encoding) { + return callDhSession(this._sessionId, 'setPrivateKey', Array.prototype.slice.call(arguments)); + }; + + function SandboxECDH(sessionId) { + SandboxDiffieHellman.call(this, sessionId); + } + SandboxECDH.prototype = Object.create(SandboxDiffieHellman.prototype); + SandboxECDH.prototype.constructor = SandboxECDH; + SandboxECDH.prototype.getPublicKey = function getPublicKey(encoding, format) { + return callDhSession(this._sessionId, 'getPublicKey', Array.prototype.slice.call(arguments)); + }; + + result.createDiffieHellman = function createDiffieHellman() { + return new SandboxDiffieHellman(createDhSession('dh', undefined, arguments)); + }; + + result.getDiffieHellman = function getDiffieHellman(name) { + return new SandboxDiffieHellman(createDhSession('group', name, [])); + }; + + result.createDiffieHellmanGroup = result.getDiffieHellman; + + result.createECDH = function createECDH(curve) { + return new SandboxECDH(createDhSession('ecdh', curve, [])); + }; + + if (typeof _cryptoDiffieHellman !== 'undefined') { + result.diffieHellman = function diffieHellman(options) { + var resultJson = _cryptoDiffieHellman.applySync(undefined, [ + JSON.stringify(serializeDhValue(options)), + ]); + return restoreDhValue(JSON.parse(resultJson)); + }; + } + + result.DiffieHellman = SandboxDiffieHellman; + result.DiffieHellmanGroup = SandboxDiffieHellman; + result.ECDH = SandboxECDH; } // Overlay host-backed generateKeyPairSync/generateKeyPair and KeyObject helpers if (typeof _cryptoGenerateKeyPairSync !== 'undefined') { - function SandboxKeyObject(type, pem) { + function restoreBridgeValue(value) { + if (!value || typeof value !== 'object') { + return value; + } + if (value.__type === 'buffer') { + return Buffer.from(value.value, 'base64'); + } + if (value.__type === 'bigint') { + return BigInt(value.value); + } + if (Array.isArray(value)) { + return value.map(restoreBridgeValue); + } + var output = {}; + var keys = Object.keys(value); + for (var i = 0; i < keys.length; i++) { + output[keys[i]] = restoreBridgeValue(value[keys[i]]); + } + return output; + } + + function cloneObject(value) { + if (!value || typeof value !== 'object') { + return value; + } + if (Array.isArray(value)) { + return value.map(cloneObject); + } + var output = {}; + var keys = Object.keys(value); + for (var i = 0; i < keys.length; i++) { + output[keys[i]] = cloneObject(value[keys[i]]); + } + return output; + } + + function createDomException(message, name) { + if (typeof DOMException === 'function') { + return new DOMException(message, name); + } + var error = new Error(message); + error.name = name; + return error; + } + + function toRawBuffer(data, encoding) { + if (Buffer.isBuffer(data)) { + return Buffer.from(data); + } + if (data instanceof ArrayBuffer) { + return Buffer.from(new Uint8Array(data)); + } + if (ArrayBuffer.isView(data)) { + return Buffer.from(data.buffer, data.byteOffset, data.byteLength); + } + if (typeof data === 'string') { + return Buffer.from(data, encoding || 'utf8'); + } + return Buffer.from(data); + } + + function serializeBridgeValue(value) { + if (value === null) { + return null; + } + if ( + typeof value === 'string' || + typeof value === 'number' || + typeof value === 'boolean' + ) { + return value; + } + if (typeof value === 'bigint') { + return { + __type: 'bigint', + value: value.toString(), + }; + } + if (Buffer.isBuffer(value)) { + return { + __type: 'buffer', + value: Buffer.from(value).toString('base64'), + }; + } + if (value instanceof ArrayBuffer) { + return { + __type: 'buffer', + value: Buffer.from(new Uint8Array(value)).toString('base64'), + }; + } + if (ArrayBuffer.isView(value)) { + return { + __type: 'buffer', + value: Buffer.from(value.buffer, value.byteOffset, value.byteLength).toString('base64'), + }; + } + if (Array.isArray(value)) { + return value.map(serializeBridgeValue); + } + if ( + value && + typeof value === 'object' && + (value.type === 'public' || value.type === 'private' || value.type === 'secret') && + typeof value.export === 'function' + ) { + if (value.type === 'secret') { + return { + __type: 'keyObject', + value: { + type: 'secret', + raw: Buffer.from(value.export()).toString('base64'), + }, + }; + } + return { + __type: 'keyObject', + value: { + type: value.type, + pem: value._pem, + }, + }; + } + if (value && typeof value === 'object') { + var output = {}; + var keys = Object.keys(value); + for (var i = 0; i < keys.length; i++) { + var entry = value[keys[i]]; + if (entry !== undefined) { + output[keys[i]] = serializeBridgeValue(entry); + } + } + return output; + } + return String(value); + } + + function normalizeCryptoBridgeError(error) { + if (!error || typeof error !== 'object') { + return error; + } + if ( + error.code === undefined && + error.message === 'error:07880109:common libcrypto routines::interrupted or cancelled' + ) { + error.code = 'ERR_OSSL_CRYPTO_INTERRUPTED_OR_CANCELLED'; + } + return error; + } + + function deserializeGeneratedKeyValue(value) { + if (!value || typeof value !== 'object') { + return value; + } + if (value.kind === 'string') { + return value.value; + } + if (value.kind === 'buffer') { + return Buffer.from(value.value, 'base64'); + } + if (value.kind === 'keyObject') { + return createGeneratedKeyObject(value.value); + } + if (value.kind === 'object') { + return value.value; + } + return value; + } + + function serializeBridgeOptions(options) { + return JSON.stringify({ + hasOptions: options !== undefined, + options: options === undefined ? null : serializeBridgeValue(options), + }); + } + + function createInvalidArgTypeError(name, expected, value) { + var received; + if (value == null) { + received = ' Received ' + value; + } else if (typeof value === 'function') { + received = ' Received function ' + (value.name || 'anonymous'); + } else if (typeof value === 'object') { + if (value.constructor && value.constructor.name) { + received = ' Received an instance of ' + value.constructor.name; + } else { + received = ' Received [object Object]'; + } + } else { + var inspected = typeof value === 'string' ? "'" + value + "'" : String(value); + if (inspected.length > 28) { + inspected = inspected.slice(0, 25) + '...'; + } + received = ' Received type ' + typeof value + ' (' + inspected + ')'; + } + var error = new TypeError('The "' + name + '" argument must be ' + expected + '.' + received); + error.code = 'ERR_INVALID_ARG_TYPE'; + return error; + } + + function scheduleCryptoCallback(callback, args) { + var invoke = function() { + callback.apply(undefined, args); + }; + if (typeof process !== 'undefined' && process && typeof process.nextTick === 'function') { + process.nextTick(invoke); + return; + } + if (typeof queueMicrotask === 'function') { + queueMicrotask(invoke); + return; + } + Promise.resolve().then(invoke); + } + + function shouldThrowCryptoValidationError(error) { + if (!error || typeof error !== 'object') { + return false; + } + if (error.name === 'TypeError' || error.name === 'RangeError') { + return true; + } + var code = error.code; + return code === 'ERR_MISSING_OPTION' || + code === 'ERR_CRYPTO_UNKNOWN_DH_GROUP' || + code === 'ERR_OUT_OF_RANGE' || + (typeof code === 'string' && code.indexOf('ERR_INVALID_ARG_') === 0); + } + + function ensureCryptoCallback(callback, syncValidator) { + if (typeof callback === 'function') { + return callback; + } + if (typeof syncValidator === 'function') { + syncValidator(); + } + throw createInvalidArgTypeError('callback', 'of type function', callback); + } + + function SandboxKeyObject(type, handle) { this.type = type; - this._pem = pem; + this._pem = handle && handle.pem !== undefined ? handle.pem : undefined; + this._raw = handle && handle.raw !== undefined ? handle.raw : undefined; + this._jwk = handle && handle.jwk !== undefined ? cloneObject(handle.jwk) : undefined; + this.asymmetricKeyType = handle && handle.asymmetricKeyType !== undefined ? handle.asymmetricKeyType : undefined; + this.asymmetricKeyDetails = handle && handle.asymmetricKeyDetails !== undefined ? + restoreBridgeValue(handle.asymmetricKeyDetails) : + undefined; + this.symmetricKeySize = type === 'secret' && handle && handle.raw !== undefined ? + Buffer.from(handle.raw, 'base64').byteLength : + undefined; } + + Object.defineProperty(SandboxKeyObject.prototype, Symbol.toStringTag, { + value: 'KeyObject', + configurable: true, + }); + SandboxKeyObject.prototype.export = function exportKey(options) { - if (!options || options.format === 'pem') { - return this._pem; + if (this.type === 'secret') { + return Buffer.from(this._raw || '', 'base64'); + } + if (!options || typeof options !== 'object') { + throw new TypeError('The "options" argument must be of type object.'); + } + if (options.format === 'jwk') { + return cloneObject(this._jwk); } if (options.format === 'der') { - // Strip PEM header/footer and decode base64 - var lines = this._pem.split('\n').filter(function(l) { return l && l.indexOf('-----') !== 0; }); + var lines = String(this._pem || '').split('\n').filter(function(l) { + return l && l.indexOf('-----') !== 0; + }); return Buffer.from(lines.join(''), 'base64'); } return this._pem; }; - SandboxKeyObject.prototype.toString = function() { return this._pem; }; - result.generateKeyPairSync = function generateKeyPairSync(type, options) { - var opts = {}; - if (options) { - if (options.modulusLength !== undefined) opts.modulusLength = options.modulusLength; - if (options.publicExponent !== undefined) opts.publicExponent = options.publicExponent; - if (options.namedCurve !== undefined) opts.namedCurve = options.namedCurve; - if (options.divisorLength !== undefined) opts.divisorLength = options.divisorLength; - if (options.primeLength !== undefined) opts.primeLength = options.primeLength; - } - var resultJson = _cryptoGenerateKeyPairSync.applySync(undefined, [ - type, - JSON.stringify(opts), - ]); - var parsed = JSON.parse(resultJson); + SandboxKeyObject.prototype.toString = function() { + return '[object KeyObject]'; + }; - // Return KeyObjects if no encoding specified, PEM strings otherwise - if (options && options.publicKeyEncoding && options.privateKeyEncoding) { - return { publicKey: parsed.publicKey, privateKey: parsed.privateKey }; + SandboxKeyObject.prototype.equals = function equals(other) { + if (!(other instanceof SandboxKeyObject)) { + return false; } - return { - publicKey: new SandboxKeyObject('public', parsed.publicKey), - privateKey: new SandboxKeyObject('private', parsed.privateKey), - }; + if (this.type !== other.type) { + return false; + } + if (this.type === 'secret') { + return (this._raw || '') === (other._raw || ''); + } + return ( + (this._pem || '') === (other._pem || '') && + this.asymmetricKeyType === other.asymmetricKeyType + ); }; - result.generateKeyPair = function generateKeyPair(type, options, callback) { - try { - var pair = result.generateKeyPairSync(type, options); - callback(null, pair.publicKey, pair.privateKey); - } catch (e) { - callback(e); + function normalizeNamedCurve(namedCurve) { + if (!namedCurve) { + return namedCurve; } - }; + var upper = String(namedCurve).toUpperCase(); + if (upper === 'PRIME256V1' || upper === 'SECP256R1') return 'P-256'; + if (upper === 'SECP384R1') return 'P-384'; + if (upper === 'SECP521R1') return 'P-521'; + return namedCurve; + } - result.createPublicKey = function createPublicKey(key) { - if (typeof key === 'string') { - if (key.indexOf('-----BEGIN') === -1) { - throw new TypeError('error:0900006e:PEM routines:OPENSSL_internal:NO_START_LINE'); - } - return new SandboxKeyObject('public', key); + function normalizeAlgorithmInput(algorithm) { + if (typeof algorithm === 'string') { + return { name: algorithm }; } - if (key && typeof key === 'object' && key._pem) { - return new SandboxKeyObject('public', key._pem); + return Object.assign({}, algorithm); + } + + function createCompatibleCryptoKey(keyData) { + var key; + if ( + globalThis.CryptoKey && + globalThis.CryptoKey.prototype && + globalThis.CryptoKey.prototype !== SandboxCryptoKey.prototype + ) { + key = Object.create(globalThis.CryptoKey.prototype); + key.type = keyData.type; + key.extractable = keyData.extractable; + key.algorithm = keyData.algorithm; + key.usages = keyData.usages; + key._keyData = keyData; + key._pem = keyData._pem; + key._jwk = keyData._jwk; + key._raw = keyData._raw; + key._sourceKeyObjectData = keyData._sourceKeyObjectData; + return key; } - if (key && typeof key === 'object' && key.type === 'private') { - // Node.js createPublicKey accepts private KeyObjects and extracts public key - return new SandboxKeyObject('public', key._pem); + return new SandboxCryptoKey(keyData); + } + + function buildCryptoKeyFromKeyObject(keyObject, algorithm, extractable, usages) { + var algo = normalizeAlgorithmInput(algorithm); + var name = algo.name; + + if (keyObject.type === 'secret') { + var secretBytes = Buffer.from(keyObject._raw || '', 'base64'); + if (name === 'PBKDF2') { + if (extractable) { + throw new SyntaxError('PBKDF2 keys are not extractable'); + } + if (usages.some(function(usage) { return usage !== 'deriveBits' && usage !== 'deriveKey'; })) { + throw new SyntaxError('Unsupported key usage for a PBKDF2 key'); + } + return createCompatibleCryptoKey({ + type: 'secret', + extractable: extractable, + algorithm: { name: name }, + usages: Array.from(usages), + _raw: keyObject._raw, + _sourceKeyObjectData: { + type: 'secret', + raw: keyObject._raw, + }, + }); + } + if (name === 'HMAC') { + if (!secretBytes.byteLength || algo.length === 0) { + throw createDomException('Zero-length key is not supported', 'DataError'); + } + if (!usages.length) { + throw new SyntaxError('Usages cannot be empty when importing a secret key.'); + } + return createCompatibleCryptoKey({ + type: 'secret', + extractable: extractable, + algorithm: { + name: name, + hash: typeof algo.hash === 'string' ? { name: algo.hash } : cloneObject(algo.hash), + length: secretBytes.byteLength * 8, + }, + usages: Array.from(usages), + _raw: keyObject._raw, + _sourceKeyObjectData: { + type: 'secret', + raw: keyObject._raw, + }, + }); + } + return createCompatibleCryptoKey({ + type: 'secret', + extractable: extractable, + algorithm: { + name: name, + length: secretBytes.byteLength * 8, + }, + usages: Array.from(usages), + _raw: keyObject._raw, + _sourceKeyObjectData: { + type: 'secret', + raw: keyObject._raw, + }, + }); } - if (key && typeof key === 'object' && key.key) { - var keyData = typeof key.key === 'string' ? key.key : key.key.toString('utf8'); - return new SandboxKeyObject('public', keyData); + + var keyType = String(keyObject.asymmetricKeyType || '').toLowerCase(); + var algorithmName = String(name || ''); + + if ( + (keyType === 'ed25519' || keyType === 'ed448' || keyType === 'x25519' || keyType === 'x448') && + keyType !== algorithmName.toLowerCase() + ) { + throw createDomException('Invalid key type', 'DataError'); } - if (Buffer.isBuffer(key)) { - var keyStr = key.toString('utf8'); - if (keyStr.indexOf('-----BEGIN') === -1) { - throw new TypeError('error:0900006e:PEM routines:OPENSSL_internal:NO_START_LINE'); + + if (algorithmName === 'ECDH') { + if (keyObject.type === 'private' && !usages.length) { + throw new SyntaxError('Usages cannot be empty when importing a private key.'); + } + var actualCurve = normalizeNamedCurve( + keyObject.asymmetricKeyDetails && keyObject.asymmetricKeyDetails.namedCurve + ); + if ( + algo.namedCurve && + actualCurve && + normalizeNamedCurve(algo.namedCurve) !== actualCurve + ) { + throw createDomException('Named curve mismatch', 'DataError'); } - return new SandboxKeyObject('public', keyStr); } - return new SandboxKeyObject('public', String(key)); + + var normalizedAlgo = cloneObject(algo); + if (typeof normalizedAlgo.hash === 'string') { + normalizedAlgo.hash = { name: normalizedAlgo.hash }; + } + + return createCompatibleCryptoKey({ + type: keyObject.type, + extractable: extractable, + algorithm: normalizedAlgo, + usages: Array.from(usages), + _pem: keyObject._pem, + _jwk: cloneObject(keyObject._jwk), + _sourceKeyObjectData: { + type: keyObject.type, + pem: keyObject._pem, + jwk: cloneObject(keyObject._jwk), + asymmetricKeyType: keyObject.asymmetricKeyType, + asymmetricKeyDetails: cloneObject(keyObject.asymmetricKeyDetails), + }, + }); + } + + SandboxKeyObject.prototype.toCryptoKey = function toCryptoKey(algorithm, extractable, usages) { + return buildCryptoKeyFromKeyObject(this, algorithm, extractable, Array.from(usages || [])); }; - result.createPrivateKey = function createPrivateKey(key) { + function createAsymmetricKeyObject(type, key) { if (typeof key === 'string') { if (key.indexOf('-----BEGIN') === -1) { throw new TypeError('error:0900006e:PEM routines:OPENSSL_internal:NO_START_LINE'); } - return new SandboxKeyObject('private', key); + return new SandboxKeyObject(type, { pem: key }); } if (key && typeof key === 'object' && key._pem) { - return new SandboxKeyObject('private', key._pem); + return new SandboxKeyObject(type, { + pem: key._pem, + jwk: key._jwk, + asymmetricKeyType: key.asymmetricKeyType, + asymmetricKeyDetails: key.asymmetricKeyDetails, + }); } if (key && typeof key === 'object' && key.key) { var keyData = typeof key.key === 'string' ? key.key : key.key.toString('utf8'); - return new SandboxKeyObject('private', keyData); + return new SandboxKeyObject(type, { pem: keyData }); } if (Buffer.isBuffer(key)) { var keyStr = key.toString('utf8'); if (keyStr.indexOf('-----BEGIN') === -1) { throw new TypeError('error:0900006e:PEM routines:OPENSSL_internal:NO_START_LINE'); } - return new SandboxKeyObject('private', keyStr); + return new SandboxKeyObject(type, { pem: keyStr }); + } + return new SandboxKeyObject(type, { pem: String(key) }); + } + + function createGeneratedKeyObject(value) { + return new SandboxKeyObject(value.type, { + pem: value.pem, + raw: value.raw, + jwk: value.jwk, + asymmetricKeyType: value.asymmetricKeyType, + asymmetricKeyDetails: value.asymmetricKeyDetails, + }); + } + + result.generateKeyPairSync = function generateKeyPairSync(type, options) { + var resultJson = _cryptoGenerateKeyPairSync.applySync(undefined, [ + type, + serializeBridgeOptions(options), + ]); + var parsed = JSON.parse(resultJson); + + if (parsed.publicKey && parsed.publicKey.kind) { + return { + publicKey: deserializeGeneratedKeyValue(parsed.publicKey), + privateKey: deserializeGeneratedKeyValue(parsed.privateKey), + }; } - return new SandboxKeyObject('private', String(key)); + + return { + publicKey: createGeneratedKeyObject(parsed.publicKey), + privateKey: createGeneratedKeyObject(parsed.privateKey), + }; }; - result.createSecretKey = function createSecretKey(key) { - if (typeof key === 'string') { - return new SandboxKeyObject('secret', key); + result.generateKeyPair = function generateKeyPair(type, options, callback) { + if (typeof options === 'function') { + callback = options; + options = undefined; + } + callback = ensureCryptoCallback(callback, function() { + result.generateKeyPairSync(type, options); + }); + try { + var pair = result.generateKeyPairSync(type, options); + scheduleCryptoCallback(callback, [null, pair.publicKey, pair.privateKey]); + } catch (e) { + if (shouldThrowCryptoValidationError(e)) { + throw e; + } + scheduleCryptoCallback(callback, [e]); } - if (Buffer.isBuffer(key) || (key instanceof Uint8Array)) { - return new SandboxKeyObject('secret', Buffer.from(key).toString('utf8')); + }; + + if (typeof _cryptoGenerateKeySync !== 'undefined') { + result.generateKeySync = function generateKeySync(type, options) { + var resultJson; + try { + resultJson = _cryptoGenerateKeySync.applySync(undefined, [ + type, + serializeBridgeOptions(options), + ]); + } catch (error) { + throw normalizeCryptoBridgeError(error); + } + return createGeneratedKeyObject(JSON.parse(resultJson)); + }; + + result.generateKey = function generateKey(type, options, callback) { + callback = ensureCryptoCallback(callback, function() { + result.generateKeySync(type, options); + }); + try { + var key = result.generateKeySync(type, options); + scheduleCryptoCallback(callback, [null, key]); + } catch (e) { + if (shouldThrowCryptoValidationError(e)) { + throw e; + } + scheduleCryptoCallback(callback, [e]); + } + }; + } + + if (typeof _cryptoGeneratePrimeSync !== 'undefined') { + result.generatePrimeSync = function generatePrimeSync(size, options) { + var resultJson; + try { + resultJson = _cryptoGeneratePrimeSync.applySync(undefined, [ + size, + serializeBridgeOptions(options), + ]); + } catch (error) { + throw normalizeCryptoBridgeError(error); + } + return restoreBridgeValue(JSON.parse(resultJson)); + }; + + result.generatePrime = function generatePrime(size, options, callback) { + if (typeof options === 'function') { + callback = options; + options = undefined; + } + callback = ensureCryptoCallback(callback, function() { + result.generatePrimeSync(size, options); + }); + try { + var prime = result.generatePrimeSync(size, options); + scheduleCryptoCallback(callback, [null, prime]); + } catch (e) { + if (shouldThrowCryptoValidationError(e)) { + throw e; + } + scheduleCryptoCallback(callback, [e]); + } + }; + } + + result.createPublicKey = function createPublicKey(key) { + if (typeof _cryptoCreateKeyObject !== 'undefined') { + var resultJson; + try { + resultJson = _cryptoCreateKeyObject.applySync(undefined, [ + 'createPublicKey', + JSON.stringify(serializeBridgeValue(key)), + ]); + } catch (error) { + throw normalizeCryptoBridgeError(error); + } + return createGeneratedKeyObject(JSON.parse(resultJson)); } - return new SandboxKeyObject('secret', String(key)); + return createAsymmetricKeyObject('public', key); + }; + + result.createPrivateKey = function createPrivateKey(key) { + if (typeof _cryptoCreateKeyObject !== 'undefined') { + var resultJson; + try { + resultJson = _cryptoCreateKeyObject.applySync(undefined, [ + 'createPrivateKey', + JSON.stringify(serializeBridgeValue(key)), + ]); + } catch (error) { + throw normalizeCryptoBridgeError(error); + } + return createGeneratedKeyObject(JSON.parse(resultJson)); + } + return createAsymmetricKeyObject('private', key); + }; + + result.createSecretKey = function createSecretKey(key, encoding) { + return new SandboxKeyObject('secret', { + raw: toRawBuffer(key, encoding).toString('base64'), + }); + }; + + SandboxKeyObject.from = function from(key) { + if (!key || typeof key !== 'object' || key[Symbol.toStringTag] !== 'CryptoKey') { + throw new TypeError('The "key" argument must be an instance of CryptoKey.'); + } + if (key._sourceKeyObjectData && key._sourceKeyObjectData.type === 'secret') { + return new SandboxKeyObject('secret', { + raw: key._sourceKeyObjectData.raw, + }); + } + return new SandboxKeyObject(key.type, { + pem: key._pem, + jwk: key._jwk, + asymmetricKeyType: key._sourceKeyObjectData && key._sourceKeyObjectData.asymmetricKeyType, + asymmetricKeyDetails: key._sourceKeyObjectData && key._sourceKeyObjectData.asymmetricKeyDetails, + }); }; result.KeyObject = SandboxKeyObject; @@ -927,6 +2055,43 @@ this.algorithm = keyData.algorithm; this.usages = keyData.usages; this._keyData = keyData; + this._pem = keyData._pem; + this._jwk = keyData._jwk; + this._raw = keyData._raw; + this._sourceKeyObjectData = keyData._sourceKeyObjectData; + } + + Object.defineProperty(SandboxCryptoKey.prototype, Symbol.toStringTag, { + value: 'CryptoKey', + configurable: true, + }); + + Object.defineProperty(SandboxCryptoKey, Symbol.hasInstance, { + value: function(candidate) { + return !!( + candidate && + typeof candidate === 'object' && + ( + candidate._keyData || + candidate[Symbol.toStringTag] === 'CryptoKey' + ) + ); + }, + configurable: true, + }); + + if ( + globalThis.CryptoKey && + globalThis.CryptoKey.prototype && + globalThis.CryptoKey.prototype !== SandboxCryptoKey.prototype + ) { + Object.setPrototypeOf(SandboxCryptoKey.prototype, globalThis.CryptoKey.prototype); + } + + if (typeof globalThis.CryptoKey === 'undefined') { + __requireExposeCustomGlobal('CryptoKey', SandboxCryptoKey); + } else if (globalThis.CryptoKey !== SandboxCryptoKey) { + globalThis.CryptoKey = SandboxCryptoKey; } function toBase64(data) { @@ -1116,8 +2281,17 @@ }); }; - result.subtle = SandboxSubtle; - result.webcrypto = { subtle: SandboxSubtle, getRandomValues: result.randomFillSync }; + if ( + globalThis.crypto && + globalThis.crypto.subtle && + typeof globalThis.crypto.subtle.importKey === 'function' + ) { + result.subtle = globalThis.crypto.subtle; + result.webcrypto = globalThis.crypto; + } else { + result.subtle = SandboxSubtle; + result.webcrypto = { subtle: SandboxSubtle, getRandomValues: result.randomFillSync }; + } } // Enumeration functions: getCurves, getCiphers, getHashes. @@ -1156,6 +2330,16 @@ return out === 0; }; } + if (typeof result.getFips !== 'function') { + result.getFips = function getFips() { + return 0; + }; + } + if (typeof result.setFips !== 'function') { + result.setFips = function setFips() { + throw new Error('FIPS mode is not supported in sandbox'); + }; + } return result; } @@ -1478,6 +2662,17 @@ return _httpModule; } + if (name === '_http_agent') { + if (__internalModuleCache['_http_agent']) return __internalModuleCache['_http_agent']; + const httpAgentModule = { + Agent: _httpModule.Agent, + globalAgent: _httpModule.globalAgent, + }; + __internalModuleCache['_http_agent'] = httpAgentModule; + _debugRequire('loaded', name, 'http-agent-special'); + return httpAgentModule; + } + // Special handling for https module if (name === 'https') { if (__internalModuleCache['https']) return __internalModuleCache['https']; diff --git a/packages/core/src/generated/isolate-runtime.ts b/packages/core/src/generated/isolate-runtime.ts index 60404a06..ce9ef270 100644 --- a/packages/core/src/generated/isolate-runtime.ts +++ b/packages/core/src/generated/isolate-runtime.ts @@ -11,7 +11,7 @@ export const ISOLATE_RUNTIME_SOURCES = { "initCommonjsModuleGlobals": "\"use strict\";\n(() => {\n // ../core/isolate-runtime/src/common/global-exposure.ts\n function defineRuntimeGlobalBinding(name, value, mutable) {\n Object.defineProperty(globalThis, name, {\n value,\n writable: mutable,\n configurable: mutable,\n enumerable: true\n });\n }\n function createRuntimeGlobalExposer(mutable) {\n return (name, value) => {\n defineRuntimeGlobalBinding(name, value, mutable);\n };\n }\n function getRuntimeExposeMutableGlobal() {\n if (typeof globalThis.__runtimeExposeMutableGlobal === \"function\") {\n return globalThis.__runtimeExposeMutableGlobal;\n }\n return createRuntimeGlobalExposer(true);\n }\n\n // ../core/isolate-runtime/src/inject/init-commonjs-module-globals.ts\n var __runtimeExposeMutableGlobal = getRuntimeExposeMutableGlobal();\n __runtimeExposeMutableGlobal(\"module\", { exports: {} });\n __runtimeExposeMutableGlobal(\"exports\", globalThis.module.exports);\n})();\n", "overrideProcessCwd": "\"use strict\";\n(() => {\n // ../core/isolate-runtime/src/inject/override-process-cwd.ts\n var __cwd = globalThis.__runtimeProcessCwdOverride;\n if (typeof __cwd === \"string\") {\n process.cwd = () => __cwd;\n }\n})();\n", "overrideProcessEnv": "\"use strict\";\n(() => {\n // ../core/isolate-runtime/src/inject/override-process-env.ts\n var __envPatch = globalThis.__runtimeProcessEnvOverride;\n if (__envPatch && typeof __envPatch === \"object\") {\n Object.assign(process.env, __envPatch);\n }\n})();\n", - "requireSetup": "\"use strict\";\n(() => {\n // ../core/isolate-runtime/src/inject/require-setup.ts\n var __requireExposeCustomGlobal = typeof globalThis.__runtimeExposeCustomGlobal === \"function\" ? globalThis.__runtimeExposeCustomGlobal : function exposeCustomGlobal(name2, value) {\n Object.defineProperty(globalThis, name2, {\n value,\n writable: false,\n configurable: false,\n enumerable: true\n });\n };\n if (typeof globalThis.AbortController === \"undefined\" || typeof globalThis.AbortSignal === \"undefined\") {\n class AbortSignal {\n constructor() {\n this.aborted = false;\n this.reason = void 0;\n this.onabort = null;\n this._listeners = [];\n }\n addEventListener(type, listener) {\n if (type !== \"abort\" || typeof listener !== \"function\") return;\n this._listeners.push(listener);\n }\n removeEventListener(type, listener) {\n if (type !== \"abort\" || typeof listener !== \"function\") return;\n const index = this._listeners.indexOf(listener);\n if (index !== -1) {\n this._listeners.splice(index, 1);\n }\n }\n dispatchEvent(event) {\n if (!event || event.type !== \"abort\") return false;\n if (typeof this.onabort === \"function\") {\n try {\n this.onabort.call(this, event);\n } catch {\n }\n }\n const listeners = this._listeners.slice();\n for (const listener of listeners) {\n try {\n listener.call(this, event);\n } catch {\n }\n }\n return true;\n }\n }\n class AbortController {\n constructor() {\n this.signal = new AbortSignal();\n }\n abort(reason) {\n if (this.signal.aborted) return;\n this.signal.aborted = true;\n this.signal.reason = reason;\n this.signal.dispatchEvent({ type: \"abort\" });\n }\n }\n __requireExposeCustomGlobal(\"AbortSignal\", AbortSignal);\n __requireExposeCustomGlobal(\"AbortController\", AbortController);\n }\n if (typeof globalThis.structuredClone !== \"function\") {\n let structuredClonePolyfill = function(value) {\n if (value === null || typeof value !== \"object\") {\n return value;\n }\n if (value instanceof ArrayBuffer) {\n return value.slice(0);\n }\n if (ArrayBuffer.isView(value)) {\n if (value instanceof Uint8Array) {\n return new Uint8Array(value);\n }\n return new value.constructor(value);\n }\n return JSON.parse(JSON.stringify(value));\n };\n structuredClonePolyfill2 = structuredClonePolyfill;\n __requireExposeCustomGlobal(\"structuredClone\", structuredClonePolyfill);\n }\n var structuredClonePolyfill2;\n if (typeof globalThis.btoa !== \"function\") {\n __requireExposeCustomGlobal(\"btoa\", function btoa(input) {\n return Buffer.from(String(input), \"binary\").toString(\"base64\");\n });\n }\n if (typeof globalThis.atob !== \"function\") {\n __requireExposeCustomGlobal(\"atob\", function atob(input) {\n return Buffer.from(String(input), \"base64\").toString(\"binary\");\n });\n }\n function _dirname(p) {\n const lastSlash = p.lastIndexOf(\"/\");\n if (lastSlash === -1) return \".\";\n if (lastSlash === 0) return \"/\";\n return p.slice(0, lastSlash);\n }\n if (typeof globalThis.TextDecoder === \"function\") {\n _OrigTextDecoder = globalThis.TextDecoder;\n _utf8Aliases = {\n \"utf-8\": true,\n \"utf8\": true,\n \"unicode-1-1-utf-8\": true,\n \"ascii\": true,\n \"us-ascii\": true,\n \"iso-8859-1\": true,\n \"latin1\": true,\n \"binary\": true,\n \"windows-1252\": true,\n \"utf-16le\": true,\n \"utf-16\": true,\n \"ucs-2\": true,\n \"ucs2\": true\n };\n globalThis.TextDecoder = function TextDecoder(encoding, options) {\n var label = encoding !== void 0 ? String(encoding).toLowerCase().replace(/\\s/g, \"\") : \"utf-8\";\n if (_utf8Aliases[label]) {\n return new _OrigTextDecoder(\"utf-8\", options);\n }\n return new _OrigTextDecoder(encoding, options);\n };\n globalThis.TextDecoder.prototype = _OrigTextDecoder.prototype;\n }\n var _OrigTextDecoder;\n var _utf8Aliases;\n function _patchPolyfill(name2, result2) {\n if (typeof result2 !== \"object\" && typeof result2 !== \"function\" || result2 === null) {\n return result2;\n }\n if (name2 === \"buffer\") {\n const maxLength = typeof result2.kMaxLength === \"number\" ? result2.kMaxLength : 2147483647;\n const maxStringLength = typeof result2.kStringMaxLength === \"number\" ? result2.kStringMaxLength : 536870888;\n if (typeof result2.constants !== \"object\" || result2.constants === null) {\n result2.constants = {};\n }\n if (typeof result2.constants.MAX_LENGTH !== \"number\") {\n result2.constants.MAX_LENGTH = maxLength;\n }\n if (typeof result2.constants.MAX_STRING_LENGTH !== \"number\") {\n result2.constants.MAX_STRING_LENGTH = maxStringLength;\n }\n if (typeof result2.kMaxLength !== \"number\") {\n result2.kMaxLength = maxLength;\n }\n if (typeof result2.kStringMaxLength !== \"number\") {\n result2.kStringMaxLength = maxStringLength;\n }\n const BufferCtor = result2.Buffer;\n if ((typeof BufferCtor === \"function\" || typeof BufferCtor === \"object\") && BufferCtor !== null) {\n if (typeof BufferCtor.kMaxLength !== \"number\") {\n BufferCtor.kMaxLength = maxLength;\n }\n if (typeof BufferCtor.kStringMaxLength !== \"number\") {\n BufferCtor.kStringMaxLength = maxStringLength;\n }\n if (typeof BufferCtor.constants !== \"object\" || BufferCtor.constants === null) {\n BufferCtor.constants = result2.constants;\n }\n var proto = BufferCtor.prototype;\n if (proto && typeof proto.utf8Slice !== \"function\") {\n var encodings = [\"utf8\", \"latin1\", \"ascii\", \"hex\", \"base64\", \"ucs2\", \"utf16le\"];\n for (var ei = 0; ei < encodings.length; ei++) {\n var enc = encodings[ei];\n (function(e) {\n if (typeof proto[e + \"Slice\"] !== \"function\") {\n proto[e + \"Slice\"] = function(start, end) {\n return this.toString(e, start, end);\n };\n }\n if (typeof proto[e + \"Write\"] !== \"function\") {\n proto[e + \"Write\"] = function(string, offset, length) {\n return this.write(string, offset, length, e);\n };\n }\n })(enc);\n }\n }\n }\n return result2;\n }\n if (name2 === \"util\" && typeof result2.formatWithOptions === \"undefined\" && typeof result2.format === \"function\") {\n result2.formatWithOptions = function formatWithOptions(inspectOptions, ...args) {\n return result2.format.apply(null, args);\n };\n return result2;\n }\n if (name2 === \"url\") {\n const OriginalURL = result2.URL;\n if (typeof OriginalURL !== \"function\" || OriginalURL._patched) {\n return result2;\n }\n const PatchedURL = function PatchedURL2(url, base) {\n if (typeof url === \"string\" && url.startsWith(\"file:\") && !url.startsWith(\"file://\") && base === void 0) {\n if (typeof process !== \"undefined\" && typeof process.cwd === \"function\") {\n const cwd = process.cwd();\n if (cwd) {\n try {\n return new OriginalURL(url, \"file://\" + cwd + \"/\");\n } catch (e) {\n }\n }\n }\n }\n return base !== void 0 ? new OriginalURL(url, base) : new OriginalURL(url);\n };\n Object.keys(OriginalURL).forEach(function(key) {\n try {\n PatchedURL[key] = OriginalURL[key];\n } catch {\n }\n });\n Object.setPrototypeOf(PatchedURL, OriginalURL);\n PatchedURL.prototype = OriginalURL.prototype;\n PatchedURL._patched = true;\n const descriptor = Object.getOwnPropertyDescriptor(result2, \"URL\");\n if (descriptor && descriptor.configurable !== true && descriptor.writable !== true && typeof descriptor.set !== \"function\") {\n return result2;\n }\n try {\n result2.URL = PatchedURL;\n } catch {\n try {\n Object.defineProperty(result2, \"URL\", {\n value: PatchedURL,\n writable: true,\n configurable: true,\n enumerable: descriptor?.enumerable ?? true\n });\n } catch {\n }\n }\n return result2;\n }\n if (name2 === \"zlib\") {\n if (typeof result2.constants !== \"object\" || result2.constants === null) {\n var zlibConstants = {};\n var constKeys = Object.keys(result2);\n for (var ci = 0; ci < constKeys.length; ci++) {\n var ck = constKeys[ci];\n if (ck.indexOf(\"Z_\") === 0 && typeof result2[ck] === \"number\") {\n zlibConstants[ck] = result2[ck];\n }\n }\n if (typeof zlibConstants.DEFLATE !== \"number\") zlibConstants.DEFLATE = 1;\n if (typeof zlibConstants.INFLATE !== \"number\") zlibConstants.INFLATE = 2;\n if (typeof zlibConstants.GZIP !== \"number\") zlibConstants.GZIP = 3;\n if (typeof zlibConstants.DEFLATERAW !== \"number\") zlibConstants.DEFLATERAW = 4;\n if (typeof zlibConstants.INFLATERAW !== \"number\") zlibConstants.INFLATERAW = 5;\n if (typeof zlibConstants.UNZIP !== \"number\") zlibConstants.UNZIP = 6;\n if (typeof zlibConstants.GUNZIP !== \"number\") zlibConstants.GUNZIP = 7;\n result2.constants = zlibConstants;\n }\n return result2;\n }\n if (name2 === \"crypto\") {\n if (typeof _cryptoHashDigest !== \"undefined\") {\n let SandboxHash2 = function(algorithm) {\n this._algorithm = algorithm;\n this._chunks = [];\n };\n var SandboxHash = SandboxHash2;\n SandboxHash2.prototype.update = function update(data, inputEncoding) {\n if (typeof data === \"string\") {\n this._chunks.push(Buffer.from(data, inputEncoding || \"utf8\"));\n } else {\n this._chunks.push(Buffer.from(data));\n }\n return this;\n };\n SandboxHash2.prototype.digest = function digest(encoding) {\n var combined = Buffer.concat(this._chunks);\n var resultBase64 = _cryptoHashDigest.applySync(void 0, [\n this._algorithm,\n combined.toString(\"base64\")\n ]);\n var resultBuffer = Buffer.from(resultBase64, \"base64\");\n if (!encoding || encoding === \"buffer\") return resultBuffer;\n return resultBuffer.toString(encoding);\n };\n SandboxHash2.prototype.copy = function copy() {\n var c = new SandboxHash2(this._algorithm);\n c._chunks = this._chunks.slice();\n return c;\n };\n SandboxHash2.prototype.write = function write(data, encoding) {\n this.update(data, encoding);\n return true;\n };\n SandboxHash2.prototype.end = function end(data, encoding) {\n if (data) this.update(data, encoding);\n };\n result2.createHash = function createHash(algorithm) {\n return new SandboxHash2(algorithm);\n };\n result2.Hash = SandboxHash2;\n }\n if (typeof _cryptoHmacDigest !== \"undefined\") {\n let SandboxHmac2 = function(algorithm, key) {\n this._algorithm = algorithm;\n if (typeof key === \"string\") {\n this._key = Buffer.from(key, \"utf8\");\n } else if (key && typeof key === \"object\" && key._pem !== void 0) {\n this._key = Buffer.from(key._pem, \"utf8\");\n } else {\n this._key = Buffer.from(key);\n }\n this._chunks = [];\n };\n var SandboxHmac = SandboxHmac2;\n SandboxHmac2.prototype.update = function update(data, inputEncoding) {\n if (typeof data === \"string\") {\n this._chunks.push(Buffer.from(data, inputEncoding || \"utf8\"));\n } else {\n this._chunks.push(Buffer.from(data));\n }\n return this;\n };\n SandboxHmac2.prototype.digest = function digest(encoding) {\n var combined = Buffer.concat(this._chunks);\n var resultBase64 = _cryptoHmacDigest.applySync(void 0, [\n this._algorithm,\n this._key.toString(\"base64\"),\n combined.toString(\"base64\")\n ]);\n var resultBuffer = Buffer.from(resultBase64, \"base64\");\n if (!encoding || encoding === \"buffer\") return resultBuffer;\n return resultBuffer.toString(encoding);\n };\n SandboxHmac2.prototype.copy = function copy() {\n var c = new SandboxHmac2(this._algorithm, this._key);\n c._chunks = this._chunks.slice();\n return c;\n };\n SandboxHmac2.prototype.write = function write(data, encoding) {\n this.update(data, encoding);\n return true;\n };\n SandboxHmac2.prototype.end = function end(data, encoding) {\n if (data) this.update(data, encoding);\n };\n result2.createHmac = function createHmac(algorithm, key) {\n return new SandboxHmac2(algorithm, key);\n };\n result2.Hmac = SandboxHmac2;\n }\n if (typeof _cryptoRandomFill !== \"undefined\") {\n result2.randomBytes = function randomBytes(size, callback) {\n if (typeof size !== \"number\" || size < 0 || size !== (size | 0)) {\n var err = new TypeError('The \"size\" argument must be of type number. Received type ' + typeof size);\n if (typeof callback === \"function\") {\n callback(err);\n return;\n }\n throw err;\n }\n if (size > 2147483647) {\n var rangeErr = new RangeError('The value of \"size\" is out of range. It must be >= 0 && <= 2147483647. Received ' + size);\n if (typeof callback === \"function\") {\n callback(rangeErr);\n return;\n }\n throw rangeErr;\n }\n var buf = Buffer.alloc(size);\n var offset = 0;\n while (offset < size) {\n var chunk = Math.min(size - offset, 65536);\n var base64 = _cryptoRandomFill.applySync(void 0, [chunk]);\n var hostBytes = Buffer.from(base64, \"base64\");\n hostBytes.copy(buf, offset);\n offset += chunk;\n }\n if (typeof callback === \"function\") {\n callback(null, buf);\n return;\n }\n return buf;\n };\n result2.randomFillSync = function randomFillSync(buffer, offset, size) {\n if (offset === void 0) offset = 0;\n var byteLength = buffer.byteLength !== void 0 ? buffer.byteLength : buffer.length;\n if (size === void 0) size = byteLength - offset;\n if (offset < 0 || size < 0 || offset + size > byteLength) {\n throw new RangeError('The value of \"offset + size\" is out of range.');\n }\n var bytes = new Uint8Array(buffer.buffer || buffer, buffer.byteOffset ? buffer.byteOffset + offset : offset, size);\n var filled = 0;\n while (filled < size) {\n var chunk = Math.min(size - filled, 65536);\n var base64 = _cryptoRandomFill.applySync(void 0, [chunk]);\n var hostBytes = Buffer.from(base64, \"base64\");\n bytes.set(hostBytes, filled);\n filled += chunk;\n }\n return buffer;\n };\n result2.randomFill = function randomFill(buffer, offsetOrCb, sizeOrCb, callback) {\n var offset = 0;\n var size;\n var cb;\n if (typeof offsetOrCb === \"function\") {\n cb = offsetOrCb;\n } else if (typeof sizeOrCb === \"function\") {\n offset = offsetOrCb || 0;\n cb = sizeOrCb;\n } else {\n offset = offsetOrCb || 0;\n size = sizeOrCb;\n cb = callback;\n }\n if (typeof cb !== \"function\") {\n throw new TypeError(\"Callback must be a function\");\n }\n try {\n result2.randomFillSync(buffer, offset, size);\n cb(null, buffer);\n } catch (e) {\n cb(e);\n }\n };\n result2.randomInt = function randomInt(minOrMax, maxOrCb, callback) {\n var min, max, cb;\n if (typeof maxOrCb === \"function\" || maxOrCb === void 0) {\n min = 0;\n max = minOrMax;\n cb = maxOrCb;\n } else {\n min = minOrMax;\n max = maxOrCb;\n cb = callback;\n }\n if (!Number.isSafeInteger(min)) {\n var minErr = new TypeError('The \"min\" argument must be a safe integer');\n if (typeof cb === \"function\") {\n cb(minErr);\n return;\n }\n throw minErr;\n }\n if (!Number.isSafeInteger(max)) {\n var maxErr = new TypeError('The \"max\" argument must be a safe integer');\n if (typeof cb === \"function\") {\n cb(maxErr);\n return;\n }\n throw maxErr;\n }\n if (max <= min) {\n var rangeErr2 = new RangeError('The value of \"max\" is out of range. It must be greater than the value of \"min\" (' + min + \")\");\n if (typeof cb === \"function\") {\n cb(rangeErr2);\n return;\n }\n throw rangeErr2;\n }\n var range = max - min;\n var bytes = 6;\n var maxValid = Math.pow(2, 48) - Math.pow(2, 48) % range;\n var val;\n do {\n var base64 = _cryptoRandomFill.applySync(void 0, [bytes]);\n var buf = Buffer.from(base64, \"base64\");\n val = buf.readUIntBE(0, bytes);\n } while (val >= maxValid);\n var result22 = min + val % range;\n if (typeof cb === \"function\") {\n cb(null, result22);\n return;\n }\n return result22;\n };\n }\n if (typeof _cryptoPbkdf2 !== \"undefined\") {\n result2.pbkdf2Sync = function pbkdf2Sync(password, salt, iterations, keylen, digest) {\n var pwBuf = typeof password === \"string\" ? Buffer.from(password, \"utf8\") : Buffer.from(password);\n var saltBuf = typeof salt === \"string\" ? Buffer.from(salt, \"utf8\") : Buffer.from(salt);\n var resultBase64 = _cryptoPbkdf2.applySync(void 0, [\n pwBuf.toString(\"base64\"),\n saltBuf.toString(\"base64\"),\n iterations,\n keylen,\n digest\n ]);\n return Buffer.from(resultBase64, \"base64\");\n };\n result2.pbkdf2 = function pbkdf2(password, salt, iterations, keylen, digest, callback) {\n try {\n var derived = result2.pbkdf2Sync(password, salt, iterations, keylen, digest);\n callback(null, derived);\n } catch (e) {\n callback(e);\n }\n };\n }\n if (typeof _cryptoScrypt !== \"undefined\") {\n result2.scryptSync = function scryptSync(password, salt, keylen, options) {\n var pwBuf = typeof password === \"string\" ? Buffer.from(password, \"utf8\") : Buffer.from(password);\n var saltBuf = typeof salt === \"string\" ? Buffer.from(salt, \"utf8\") : Buffer.from(salt);\n var opts = {};\n if (options) {\n if (options.N !== void 0) opts.N = options.N;\n if (options.r !== void 0) opts.r = options.r;\n if (options.p !== void 0) opts.p = options.p;\n if (options.maxmem !== void 0) opts.maxmem = options.maxmem;\n if (options.cost !== void 0) opts.N = options.cost;\n if (options.blockSize !== void 0) opts.r = options.blockSize;\n if (options.parallelization !== void 0) opts.p = options.parallelization;\n }\n var resultBase64 = _cryptoScrypt.applySync(void 0, [\n pwBuf.toString(\"base64\"),\n saltBuf.toString(\"base64\"),\n keylen,\n JSON.stringify(opts)\n ]);\n return Buffer.from(resultBase64, \"base64\");\n };\n result2.scrypt = function scrypt(password, salt, keylen, optionsOrCb, callback) {\n var opts = optionsOrCb;\n var cb = callback;\n if (typeof optionsOrCb === \"function\") {\n opts = void 0;\n cb = optionsOrCb;\n }\n try {\n var derived = result2.scryptSync(password, salt, keylen, opts);\n cb(null, derived);\n } catch (e) {\n cb(e);\n }\n };\n }\n if (typeof _cryptoCipheriv !== \"undefined\") {\n let SandboxCipher2 = function(algorithm, key, iv) {\n this._algorithm = algorithm;\n this._key = typeof key === \"string\" ? Buffer.from(key, \"utf8\") : Buffer.from(key);\n this._iv = typeof iv === \"string\" ? Buffer.from(iv, \"utf8\") : Buffer.from(iv);\n this._authTag = null;\n this._finalized = false;\n if (_useSessionCipher) {\n this._sessionId = _cryptoCipherivCreate.applySync(void 0, [\n \"cipher\",\n algorithm,\n this._key.toString(\"base64\"),\n this._iv.toString(\"base64\"),\n \"\"\n ]);\n } else {\n this._chunks = [];\n }\n };\n var SandboxCipher = SandboxCipher2;\n var _useSessionCipher = typeof _cryptoCipherivCreate !== \"undefined\";\n SandboxCipher2.prototype.update = function update(data, inputEncoding, outputEncoding) {\n var buf;\n if (typeof data === \"string\") {\n buf = Buffer.from(data, inputEncoding || \"utf8\");\n } else {\n buf = Buffer.from(data);\n }\n if (_useSessionCipher) {\n var resultBase64 = _cryptoCipherivUpdate.applySync(void 0, [this._sessionId, buf.toString(\"base64\")]);\n var resultBuffer = Buffer.from(resultBase64, \"base64\");\n if (outputEncoding && outputEncoding !== \"buffer\") return resultBuffer.toString(outputEncoding);\n return resultBuffer;\n }\n this._chunks.push(buf);\n if (outputEncoding && outputEncoding !== \"buffer\") return \"\";\n return Buffer.alloc(0);\n };\n SandboxCipher2.prototype.final = function final(outputEncoding) {\n if (this._finalized) throw new Error(\"Attempting to call final() after already finalized\");\n this._finalized = true;\n var parsed;\n if (_useSessionCipher) {\n var resultJson = _cryptoCipherivFinal.applySync(void 0, [this._sessionId]);\n parsed = JSON.parse(resultJson);\n } else {\n var combined = Buffer.concat(this._chunks);\n var resultJson2 = _cryptoCipheriv.applySync(void 0, [\n this._algorithm,\n this._key.toString(\"base64\"),\n this._iv.toString(\"base64\"),\n combined.toString(\"base64\")\n ]);\n parsed = JSON.parse(resultJson2);\n }\n if (parsed.authTag) {\n this._authTag = Buffer.from(parsed.authTag, \"base64\");\n }\n var resultBuffer = Buffer.from(parsed.data, \"base64\");\n if (outputEncoding && outputEncoding !== \"buffer\") return resultBuffer.toString(outputEncoding);\n return resultBuffer;\n };\n SandboxCipher2.prototype.getAuthTag = function getAuthTag() {\n if (!this._finalized) throw new Error(\"Cannot call getAuthTag before final()\");\n if (!this._authTag) throw new Error(\"Auth tag is only available for GCM ciphers\");\n return this._authTag;\n };\n SandboxCipher2.prototype.setAAD = function setAAD() {\n return this;\n };\n SandboxCipher2.prototype.setAutoPadding = function setAutoPadding() {\n return this;\n };\n result2.createCipheriv = function createCipheriv(algorithm, key, iv) {\n return new SandboxCipher2(algorithm, key, iv);\n };\n result2.Cipheriv = SandboxCipher2;\n }\n if (typeof _cryptoDecipheriv !== \"undefined\") {\n let SandboxDecipher2 = function(algorithm, key, iv) {\n this._algorithm = algorithm;\n this._key = typeof key === \"string\" ? Buffer.from(key, \"utf8\") : Buffer.from(key);\n this._iv = typeof iv === \"string\" ? Buffer.from(iv, \"utf8\") : Buffer.from(iv);\n this._authTag = null;\n this._finalized = false;\n this._sessionCreated = false;\n if (!_useSessionCipher) {\n this._chunks = [];\n }\n };\n var SandboxDecipher = SandboxDecipher2;\n SandboxDecipher2.prototype._ensureSession = function _ensureSession() {\n if (_useSessionCipher && !this._sessionCreated) {\n this._sessionCreated = true;\n var options = {};\n if (this._authTag) {\n options.authTag = this._authTag.toString(\"base64\");\n }\n this._sessionId = _cryptoCipherivCreate.applySync(void 0, [\n \"decipher\",\n this._algorithm,\n this._key.toString(\"base64\"),\n this._iv.toString(\"base64\"),\n JSON.stringify(options)\n ]);\n }\n };\n SandboxDecipher2.prototype.update = function update(data, inputEncoding, outputEncoding) {\n var buf;\n if (typeof data === \"string\") {\n buf = Buffer.from(data, inputEncoding || \"utf8\");\n } else {\n buf = Buffer.from(data);\n }\n if (_useSessionCipher) {\n this._ensureSession();\n var resultBase64 = _cryptoCipherivUpdate.applySync(void 0, [this._sessionId, buf.toString(\"base64\")]);\n var resultBuffer = Buffer.from(resultBase64, \"base64\");\n if (outputEncoding && outputEncoding !== \"buffer\") return resultBuffer.toString(outputEncoding);\n return resultBuffer;\n }\n this._chunks.push(buf);\n if (outputEncoding && outputEncoding !== \"buffer\") return \"\";\n return Buffer.alloc(0);\n };\n SandboxDecipher2.prototype.final = function final(outputEncoding) {\n if (this._finalized) throw new Error(\"Attempting to call final() after already finalized\");\n this._finalized = true;\n var resultBuffer;\n if (_useSessionCipher) {\n this._ensureSession();\n var resultJson = _cryptoCipherivFinal.applySync(void 0, [this._sessionId]);\n var parsed = JSON.parse(resultJson);\n resultBuffer = Buffer.from(parsed.data, \"base64\");\n } else {\n var combined = Buffer.concat(this._chunks);\n var options = {};\n if (this._authTag) {\n options.authTag = this._authTag.toString(\"base64\");\n }\n var resultBase64 = _cryptoDecipheriv.applySync(void 0, [\n this._algorithm,\n this._key.toString(\"base64\"),\n this._iv.toString(\"base64\"),\n combined.toString(\"base64\"),\n JSON.stringify(options)\n ]);\n resultBuffer = Buffer.from(resultBase64, \"base64\");\n }\n if (outputEncoding && outputEncoding !== \"buffer\") return resultBuffer.toString(outputEncoding);\n return resultBuffer;\n };\n SandboxDecipher2.prototype.setAuthTag = function setAuthTag(tag) {\n this._authTag = typeof tag === \"string\" ? Buffer.from(tag, \"base64\") : Buffer.from(tag);\n return this;\n };\n SandboxDecipher2.prototype.setAAD = function setAAD() {\n return this;\n };\n SandboxDecipher2.prototype.setAutoPadding = function setAutoPadding() {\n return this;\n };\n result2.createDecipheriv = function createDecipheriv(algorithm, key, iv) {\n return new SandboxDecipher2(algorithm, key, iv);\n };\n result2.Decipheriv = SandboxDecipher2;\n }\n if (typeof _cryptoSign !== \"undefined\") {\n result2.sign = function sign(algorithm, data, key) {\n var dataBuf = typeof data === \"string\" ? Buffer.from(data, \"utf8\") : Buffer.from(data);\n var keyPem;\n if (typeof key === \"string\") {\n keyPem = key;\n } else if (key && typeof key === \"object\" && key._pem) {\n keyPem = key._pem;\n } else if (Buffer.isBuffer(key)) {\n keyPem = key.toString(\"utf8\");\n } else {\n keyPem = String(key);\n }\n var sigBase64 = _cryptoSign.applySync(void 0, [\n algorithm,\n dataBuf.toString(\"base64\"),\n keyPem\n ]);\n return Buffer.from(sigBase64, \"base64\");\n };\n }\n if (typeof _cryptoVerify !== \"undefined\") {\n result2.verify = function verify(algorithm, data, key, signature) {\n var dataBuf = typeof data === \"string\" ? Buffer.from(data, \"utf8\") : Buffer.from(data);\n var keyPem;\n if (typeof key === \"string\") {\n keyPem = key;\n } else if (key && typeof key === \"object\" && key._pem) {\n keyPem = key._pem;\n } else if (Buffer.isBuffer(key)) {\n keyPem = key.toString(\"utf8\");\n } else {\n keyPem = String(key);\n }\n var sigBuf = typeof signature === \"string\" ? Buffer.from(signature, \"base64\") : Buffer.from(signature);\n return _cryptoVerify.applySync(void 0, [\n algorithm,\n dataBuf.toString(\"base64\"),\n keyPem,\n sigBuf.toString(\"base64\")\n ]);\n };\n }\n if (typeof _cryptoGenerateKeyPairSync !== \"undefined\") {\n let SandboxKeyObject2 = function(type, pem) {\n this.type = type;\n this._pem = pem;\n };\n var SandboxKeyObject = SandboxKeyObject2;\n SandboxKeyObject2.prototype.export = function exportKey(options) {\n if (!options || options.format === \"pem\") {\n return this._pem;\n }\n if (options.format === \"der\") {\n var lines = this._pem.split(\"\\n\").filter(function(l) {\n return l && l.indexOf(\"-----\") !== 0;\n });\n return Buffer.from(lines.join(\"\"), \"base64\");\n }\n return this._pem;\n };\n SandboxKeyObject2.prototype.toString = function() {\n return this._pem;\n };\n result2.generateKeyPairSync = function generateKeyPairSync(type, options) {\n var opts = {};\n if (options) {\n if (options.modulusLength !== void 0) opts.modulusLength = options.modulusLength;\n if (options.publicExponent !== void 0) opts.publicExponent = options.publicExponent;\n if (options.namedCurve !== void 0) opts.namedCurve = options.namedCurve;\n if (options.divisorLength !== void 0) opts.divisorLength = options.divisorLength;\n if (options.primeLength !== void 0) opts.primeLength = options.primeLength;\n }\n var resultJson = _cryptoGenerateKeyPairSync.applySync(void 0, [\n type,\n JSON.stringify(opts)\n ]);\n var parsed = JSON.parse(resultJson);\n if (options && options.publicKeyEncoding && options.privateKeyEncoding) {\n return { publicKey: parsed.publicKey, privateKey: parsed.privateKey };\n }\n return {\n publicKey: new SandboxKeyObject2(\"public\", parsed.publicKey),\n privateKey: new SandboxKeyObject2(\"private\", parsed.privateKey)\n };\n };\n result2.generateKeyPair = function generateKeyPair(type, options, callback) {\n try {\n var pair = result2.generateKeyPairSync(type, options);\n callback(null, pair.publicKey, pair.privateKey);\n } catch (e) {\n callback(e);\n }\n };\n result2.createPublicKey = function createPublicKey(key) {\n if (typeof key === \"string\") {\n if (key.indexOf(\"-----BEGIN\") === -1) {\n throw new TypeError(\"error:0900006e:PEM routines:OPENSSL_internal:NO_START_LINE\");\n }\n return new SandboxKeyObject2(\"public\", key);\n }\n if (key && typeof key === \"object\" && key._pem) {\n return new SandboxKeyObject2(\"public\", key._pem);\n }\n if (key && typeof key === \"object\" && key.type === \"private\") {\n return new SandboxKeyObject2(\"public\", key._pem);\n }\n if (key && typeof key === \"object\" && key.key) {\n var keyData = typeof key.key === \"string\" ? key.key : key.key.toString(\"utf8\");\n return new SandboxKeyObject2(\"public\", keyData);\n }\n if (Buffer.isBuffer(key)) {\n var keyStr = key.toString(\"utf8\");\n if (keyStr.indexOf(\"-----BEGIN\") === -1) {\n throw new TypeError(\"error:0900006e:PEM routines:OPENSSL_internal:NO_START_LINE\");\n }\n return new SandboxKeyObject2(\"public\", keyStr);\n }\n return new SandboxKeyObject2(\"public\", String(key));\n };\n result2.createPrivateKey = function createPrivateKey(key) {\n if (typeof key === \"string\") {\n if (key.indexOf(\"-----BEGIN\") === -1) {\n throw new TypeError(\"error:0900006e:PEM routines:OPENSSL_internal:NO_START_LINE\");\n }\n return new SandboxKeyObject2(\"private\", key);\n }\n if (key && typeof key === \"object\" && key._pem) {\n return new SandboxKeyObject2(\"private\", key._pem);\n }\n if (key && typeof key === \"object\" && key.key) {\n var keyData = typeof key.key === \"string\" ? key.key : key.key.toString(\"utf8\");\n return new SandboxKeyObject2(\"private\", keyData);\n }\n if (Buffer.isBuffer(key)) {\n var keyStr = key.toString(\"utf8\");\n if (keyStr.indexOf(\"-----BEGIN\") === -1) {\n throw new TypeError(\"error:0900006e:PEM routines:OPENSSL_internal:NO_START_LINE\");\n }\n return new SandboxKeyObject2(\"private\", keyStr);\n }\n return new SandboxKeyObject2(\"private\", String(key));\n };\n result2.createSecretKey = function createSecretKey(key) {\n if (typeof key === \"string\") {\n return new SandboxKeyObject2(\"secret\", key);\n }\n if (Buffer.isBuffer(key) || key instanceof Uint8Array) {\n return new SandboxKeyObject2(\"secret\", Buffer.from(key).toString(\"utf8\"));\n }\n return new SandboxKeyObject2(\"secret\", String(key));\n };\n result2.KeyObject = SandboxKeyObject2;\n }\n if (typeof _cryptoSubtle !== \"undefined\") {\n let SandboxCryptoKey2 = function(keyData) {\n this.type = keyData.type;\n this.extractable = keyData.extractable;\n this.algorithm = keyData.algorithm;\n this.usages = keyData.usages;\n this._keyData = keyData;\n }, toBase642 = function(data) {\n if (typeof data === \"string\") return Buffer.from(data).toString(\"base64\");\n if (data instanceof ArrayBuffer) return Buffer.from(new Uint8Array(data)).toString(\"base64\");\n if (ArrayBuffer.isView(data)) return Buffer.from(new Uint8Array(data.buffer, data.byteOffset, data.byteLength)).toString(\"base64\");\n return Buffer.from(data).toString(\"base64\");\n }, subtleCall2 = function(reqObj) {\n return _cryptoSubtle.applySync(void 0, [JSON.stringify(reqObj)]);\n }, normalizeAlgo2 = function(algorithm) {\n if (typeof algorithm === \"string\") return { name: algorithm };\n return algorithm;\n };\n var SandboxCryptoKey = SandboxCryptoKey2, toBase64 = toBase642, subtleCall = subtleCall2, normalizeAlgo = normalizeAlgo2;\n var SandboxSubtle = {};\n SandboxSubtle.digest = function digest(algorithm, data) {\n return Promise.resolve().then(function() {\n var algo = normalizeAlgo2(algorithm);\n var result22 = JSON.parse(subtleCall2({\n op: \"digest\",\n algorithm: algo.name,\n data: toBase642(data)\n }));\n var buf = Buffer.from(result22.data, \"base64\");\n return buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength);\n });\n };\n SandboxSubtle.generateKey = function generateKey(algorithm, extractable, keyUsages) {\n return Promise.resolve().then(function() {\n var algo = normalizeAlgo2(algorithm);\n var reqAlgo = Object.assign({}, algo);\n if (reqAlgo.hash) reqAlgo.hash = normalizeAlgo2(reqAlgo.hash);\n if (reqAlgo.publicExponent) {\n reqAlgo.publicExponent = Buffer.from(new Uint8Array(reqAlgo.publicExponent.buffer || reqAlgo.publicExponent)).toString(\"base64\");\n }\n var result22 = JSON.parse(subtleCall2({\n op: \"generateKey\",\n algorithm: reqAlgo,\n extractable,\n usages: Array.from(keyUsages)\n }));\n if (result22.publicKey && result22.privateKey) {\n return {\n publicKey: new SandboxCryptoKey2(result22.publicKey),\n privateKey: new SandboxCryptoKey2(result22.privateKey)\n };\n }\n return new SandboxCryptoKey2(result22.key);\n });\n };\n SandboxSubtle.importKey = function importKey(format, keyData, algorithm, extractable, keyUsages) {\n return Promise.resolve().then(function() {\n var algo = normalizeAlgo2(algorithm);\n var reqAlgo = Object.assign({}, algo);\n if (reqAlgo.hash) reqAlgo.hash = normalizeAlgo2(reqAlgo.hash);\n var serializedKeyData;\n if (format === \"jwk\") {\n serializedKeyData = keyData;\n } else if (format === \"raw\") {\n serializedKeyData = toBase642(keyData);\n } else {\n serializedKeyData = toBase642(keyData);\n }\n var result22 = JSON.parse(subtleCall2({\n op: \"importKey\",\n format,\n keyData: serializedKeyData,\n algorithm: reqAlgo,\n extractable,\n usages: Array.from(keyUsages)\n }));\n return new SandboxCryptoKey2(result22.key);\n });\n };\n SandboxSubtle.exportKey = function exportKey(format, key) {\n return Promise.resolve().then(function() {\n var result22 = JSON.parse(subtleCall2({\n op: \"exportKey\",\n format,\n key: key._keyData\n }));\n if (format === \"jwk\") return result22.jwk;\n var buf = Buffer.from(result22.data, \"base64\");\n return buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength);\n });\n };\n SandboxSubtle.encrypt = function encrypt(algorithm, key, data) {\n return Promise.resolve().then(function() {\n var algo = normalizeAlgo2(algorithm);\n var reqAlgo = Object.assign({}, algo);\n if (reqAlgo.iv) reqAlgo.iv = toBase642(reqAlgo.iv);\n if (reqAlgo.additionalData) reqAlgo.additionalData = toBase642(reqAlgo.additionalData);\n var result22 = JSON.parse(subtleCall2({\n op: \"encrypt\",\n algorithm: reqAlgo,\n key: key._keyData,\n data: toBase642(data)\n }));\n var buf = Buffer.from(result22.data, \"base64\");\n return buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength);\n });\n };\n SandboxSubtle.decrypt = function decrypt(algorithm, key, data) {\n return Promise.resolve().then(function() {\n var algo = normalizeAlgo2(algorithm);\n var reqAlgo = Object.assign({}, algo);\n if (reqAlgo.iv) reqAlgo.iv = toBase642(reqAlgo.iv);\n if (reqAlgo.additionalData) reqAlgo.additionalData = toBase642(reqAlgo.additionalData);\n var result22 = JSON.parse(subtleCall2({\n op: \"decrypt\",\n algorithm: reqAlgo,\n key: key._keyData,\n data: toBase642(data)\n }));\n var buf = Buffer.from(result22.data, \"base64\");\n return buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength);\n });\n };\n SandboxSubtle.sign = function sign(algorithm, key, data) {\n return Promise.resolve().then(function() {\n var result22 = JSON.parse(subtleCall2({\n op: \"sign\",\n algorithm: normalizeAlgo2(algorithm),\n key: key._keyData,\n data: toBase642(data)\n }));\n var buf = Buffer.from(result22.data, \"base64\");\n return buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength);\n });\n };\n SandboxSubtle.verify = function verify(algorithm, key, signature, data) {\n return Promise.resolve().then(function() {\n var result22 = JSON.parse(subtleCall2({\n op: \"verify\",\n algorithm: normalizeAlgo2(algorithm),\n key: key._keyData,\n signature: toBase642(signature),\n data: toBase642(data)\n }));\n return result22.result;\n });\n };\n SandboxSubtle.deriveBits = function deriveBits(algorithm, baseKey, length) {\n return Promise.resolve().then(function() {\n var algo = normalizeAlgo2(algorithm);\n var reqAlgo = Object.assign({}, algo);\n if (reqAlgo.salt) reqAlgo.salt = toBase642(reqAlgo.salt);\n if (reqAlgo.info) reqAlgo.info = toBase642(reqAlgo.info);\n var result22 = JSON.parse(subtleCall2({\n op: \"deriveBits\",\n algorithm: reqAlgo,\n baseKey: baseKey._keyData,\n length\n }));\n return Buffer.from(result22.data, \"base64\").buffer;\n });\n };\n SandboxSubtle.deriveKey = function deriveKey(algorithm, baseKey, derivedKeyAlgorithm, extractable, keyUsages) {\n return Promise.resolve().then(function() {\n var algo = normalizeAlgo2(algorithm);\n var reqAlgo = Object.assign({}, algo);\n if (reqAlgo.salt) reqAlgo.salt = toBase642(reqAlgo.salt);\n if (reqAlgo.info) reqAlgo.info = toBase642(reqAlgo.info);\n var result22 = JSON.parse(subtleCall2({\n op: \"deriveKey\",\n algorithm: reqAlgo,\n baseKey: baseKey._keyData,\n derivedKeyAlgorithm: normalizeAlgo2(derivedKeyAlgorithm),\n extractable,\n usages: keyUsages\n }));\n return new SandboxCryptoKey2(result22.key);\n });\n };\n result2.subtle = SandboxSubtle;\n result2.webcrypto = { subtle: SandboxSubtle, getRandomValues: result2.randomFillSync };\n }\n if (typeof result2.getCurves !== \"function\") {\n result2.getCurves = function getCurves() {\n return [\n \"prime256v1\",\n \"secp256r1\",\n \"secp384r1\",\n \"secp521r1\",\n \"secp256k1\",\n \"secp224r1\",\n \"secp192k1\"\n ];\n };\n }\n if (typeof result2.getCiphers !== \"function\") {\n result2.getCiphers = function getCiphers() {\n return [\n \"aes-128-cbc\",\n \"aes-128-gcm\",\n \"aes-192-cbc\",\n \"aes-192-gcm\",\n \"aes-256-cbc\",\n \"aes-256-gcm\",\n \"aes-128-ctr\",\n \"aes-192-ctr\",\n \"aes-256-ctr\"\n ];\n };\n }\n if (typeof result2.getHashes !== \"function\") {\n result2.getHashes = function getHashes() {\n return [\"md5\", \"sha1\", \"sha256\", \"sha384\", \"sha512\"];\n };\n }\n if (typeof result2.timingSafeEqual !== \"function\") {\n result2.timingSafeEqual = function timingSafeEqual(a, b) {\n if (a.length !== b.length) {\n throw new RangeError(\"Input buffers must have the same byte length\");\n }\n var out = 0;\n for (var i = 0; i < a.length; i++) {\n out |= a[i] ^ b[i];\n }\n return out === 0;\n };\n }\n return result2;\n }\n if (name2 === \"stream\") {\n if (typeof result2 === \"function\" && result2.prototype && typeof result2.Readable === \"function\") {\n var readableProto = result2.Readable.prototype;\n var streamProto = result2.prototype;\n if (readableProto && streamProto && !(readableProto instanceof result2)) {\n var currentParent = Object.getPrototypeOf(readableProto);\n Object.setPrototypeOf(streamProto, currentParent);\n Object.setPrototypeOf(readableProto, streamProto);\n }\n }\n return result2;\n }\n if (name2 === \"path\") {\n if (result2.win32 === null || result2.win32 === void 0) {\n result2.win32 = result2.posix || result2;\n }\n if (result2.posix === null || result2.posix === void 0) {\n result2.posix = result2;\n }\n const hasAbsoluteSegment = function(args) {\n return args.some(function(arg) {\n return typeof arg === \"string\" && arg.length > 0 && arg.charAt(0) === \"/\";\n });\n };\n const prependCwd = function(args) {\n if (hasAbsoluteSegment(args)) return;\n if (typeof process !== \"undefined\" && typeof process.cwd === \"function\") {\n const cwd = process.cwd();\n if (cwd && cwd.charAt(0) === \"/\") {\n args.unshift(cwd);\n }\n }\n };\n const originalResolve = result2.resolve;\n if (typeof originalResolve === \"function\" && !originalResolve._patchedForCwd) {\n const patchedResolve = function resolve2() {\n const args = Array.from(arguments);\n prependCwd(args);\n return originalResolve.apply(this, args);\n };\n patchedResolve._patchedForCwd = true;\n result2.resolve = patchedResolve;\n }\n if (result2.posix && typeof result2.posix.resolve === \"function\" && !result2.posix.resolve._patchedForCwd) {\n const originalPosixResolve = result2.posix.resolve;\n const patchedPosixResolve = function resolve2() {\n const args = Array.from(arguments);\n prependCwd(args);\n return originalPosixResolve.apply(this, args);\n };\n patchedPosixResolve._patchedForCwd = true;\n result2.posix.resolve = patchedPosixResolve;\n }\n }\n return result2;\n }\n var _deferredCoreModules = /* @__PURE__ */ new Set([\n \"readline\",\n \"perf_hooks\",\n \"async_hooks\",\n \"worker_threads\",\n \"diagnostics_channel\"\n ]);\n var _unsupportedCoreModules = /* @__PURE__ */ new Set([\n \"dgram\",\n \"cluster\",\n \"wasi\",\n \"inspector\",\n \"repl\",\n \"trace_events\",\n \"domain\"\n ]);\n function _unsupportedApiError(moduleName2, apiName) {\n return new Error(moduleName2 + \".\" + apiName + \" is not supported in sandbox\");\n }\n function _createDeferredModuleStub(moduleName2) {\n const methodCache = {};\n let stub = null;\n stub = new Proxy({}, {\n get(_target, prop) {\n if (prop === \"__esModule\") return false;\n if (prop === \"default\") return stub;\n if (prop === Symbol.toStringTag) return \"Module\";\n if (prop === \"then\") return void 0;\n if (typeof prop !== \"string\") return void 0;\n if (!methodCache[prop]) {\n methodCache[prop] = function deferredApiStub() {\n throw _unsupportedApiError(moduleName2, prop);\n };\n }\n return methodCache[prop];\n }\n });\n return stub;\n }\n var __internalModuleCache = _moduleCache;\n var __require = function require2(moduleName2) {\n return _requireFrom(moduleName2, _currentModule.dirname);\n };\n __requireExposeCustomGlobal(\"require\", __require);\n function _resolveFrom(moduleName2, fromDir2) {\n var resolved2;\n if (typeof _resolveModuleSync !== \"undefined\") {\n resolved2 = _resolveModuleSync.applySync(void 0, [moduleName2, fromDir2]);\n }\n if (resolved2 === null || resolved2 === void 0) {\n resolved2 = _resolveModule.applySyncPromise(void 0, [moduleName2, fromDir2, \"require\"]);\n }\n if (resolved2 === null) {\n const err = new Error(\"Cannot find module '\" + moduleName2 + \"'\");\n err.code = \"MODULE_NOT_FOUND\";\n throw err;\n }\n return resolved2;\n }\n globalThis.require.resolve = function resolve(moduleName2) {\n return _resolveFrom(moduleName2, _currentModule.dirname);\n };\n function _debugRequire(phase, moduleName2, extra) {\n if (globalThis.__sandboxRequireDebug !== true) {\n return;\n }\n if (moduleName2 !== \"rivetkit\" && moduleName2 !== \"@rivetkit/traces\" && moduleName2 !== \"@rivetkit/on-change\" && moduleName2 !== \"async_hooks\" && !moduleName2.startsWith(\"rivetkit/\") && !moduleName2.startsWith(\"@rivetkit/\")) {\n return;\n }\n if (typeof console !== \"undefined\" && typeof console.log === \"function\") {\n console.log(\n \"[sandbox.require] \" + phase + \" \" + moduleName2 + (extra ? \" \" + extra : \"\")\n );\n }\n }\n function _requireFrom(moduleName, fromDir) {\n _debugRequire(\"start\", moduleName, fromDir);\n const name = moduleName.replace(/^node:/, \"\");\n let cacheKey = name;\n let resolved = null;\n const isRelative = name.startsWith(\"./\") || name.startsWith(\"../\");\n if (!isRelative && __internalModuleCache[name]) {\n _debugRequire(\"cache-hit\", name, name);\n return __internalModuleCache[name];\n }\n if (name === \"fs\") {\n if (__internalModuleCache[\"fs\"]) return __internalModuleCache[\"fs\"];\n const fsModule = globalThis.bridge?.fs || globalThis.bridge?.default || globalThis._fsModule || {};\n __internalModuleCache[\"fs\"] = fsModule;\n _debugRequire(\"loaded\", name, \"fs-special\");\n return fsModule;\n }\n if (name === \"fs/promises\") {\n if (__internalModuleCache[\"fs/promises\"]) return __internalModuleCache[\"fs/promises\"];\n const fsModule = _requireFrom(\"fs\", fromDir);\n __internalModuleCache[\"fs/promises\"] = fsModule.promises;\n _debugRequire(\"loaded\", name, \"fs-promises-special\");\n return fsModule.promises;\n }\n if (name === \"stream/promises\") {\n if (__internalModuleCache[\"stream/promises\"]) return __internalModuleCache[\"stream/promises\"];\n const streamModule = _requireFrom(\"stream\", fromDir);\n const promisesModule = {\n finished(stream, options) {\n return new Promise(function(resolve2, reject) {\n if (typeof streamModule.finished !== \"function\") {\n resolve2();\n return;\n }\n if (options && typeof options === \"object\" && !Array.isArray(options)) {\n streamModule.finished(stream, options, function(error) {\n if (error) {\n reject(error);\n return;\n }\n resolve2();\n });\n return;\n }\n streamModule.finished(stream, function(error) {\n if (error) {\n reject(error);\n return;\n }\n resolve2();\n });\n });\n },\n pipeline() {\n const args = Array.prototype.slice.call(arguments);\n return new Promise(function(resolve2, reject) {\n if (typeof streamModule.pipeline !== \"function\") {\n reject(new Error(\"stream.pipeline is not supported in sandbox\"));\n return;\n }\n args.push(function(error) {\n if (error) {\n reject(error);\n return;\n }\n resolve2();\n });\n streamModule.pipeline.apply(streamModule, args);\n });\n }\n };\n __internalModuleCache[\"stream/promises\"] = promisesModule;\n _debugRequire(\"loaded\", name, \"stream-promises-special\");\n return promisesModule;\n }\n if (name === \"child_process\") {\n if (__internalModuleCache[\"child_process\"]) return __internalModuleCache[\"child_process\"];\n __internalModuleCache[\"child_process\"] = _childProcessModule;\n _debugRequire(\"loaded\", name, \"child-process-special\");\n return _childProcessModule;\n }\n if (name === \"net\") {\n if (__internalModuleCache[\"net\"]) return __internalModuleCache[\"net\"];\n __internalModuleCache[\"net\"] = _netModule;\n _debugRequire(\"loaded\", name, \"net-special\");\n return _netModule;\n }\n if (name === \"tls\") {\n if (__internalModuleCache[\"tls\"]) return __internalModuleCache[\"tls\"];\n __internalModuleCache[\"tls\"] = _tlsModule;\n _debugRequire(\"loaded\", name, \"tls-special\");\n return _tlsModule;\n }\n if (name === \"http\") {\n if (__internalModuleCache[\"http\"]) return __internalModuleCache[\"http\"];\n __internalModuleCache[\"http\"] = _httpModule;\n _debugRequire(\"loaded\", name, \"http-special\");\n return _httpModule;\n }\n if (name === \"https\") {\n if (__internalModuleCache[\"https\"]) return __internalModuleCache[\"https\"];\n __internalModuleCache[\"https\"] = _httpsModule;\n _debugRequire(\"loaded\", name, \"https-special\");\n return _httpsModule;\n }\n if (name === \"http2\") {\n if (__internalModuleCache[\"http2\"]) return __internalModuleCache[\"http2\"];\n __internalModuleCache[\"http2\"] = _http2Module;\n _debugRequire(\"loaded\", name, \"http2-special\");\n return _http2Module;\n }\n if (name === \"dns\") {\n if (__internalModuleCache[\"dns\"]) return __internalModuleCache[\"dns\"];\n __internalModuleCache[\"dns\"] = _dnsModule;\n _debugRequire(\"loaded\", name, \"dns-special\");\n return _dnsModule;\n }\n if (name === \"os\") {\n if (__internalModuleCache[\"os\"]) return __internalModuleCache[\"os\"];\n __internalModuleCache[\"os\"] = _osModule;\n _debugRequire(\"loaded\", name, \"os-special\");\n return _osModule;\n }\n if (name === \"module\") {\n if (__internalModuleCache[\"module\"]) return __internalModuleCache[\"module\"];\n __internalModuleCache[\"module\"] = _moduleModule;\n _debugRequire(\"loaded\", name, \"module-special\");\n return _moduleModule;\n }\n if (name === \"process\") {\n _debugRequire(\"loaded\", name, \"process-special\");\n return globalThis.process;\n }\n if (name === \"async_hooks\") {\n if (__internalModuleCache[\"async_hooks\"]) return __internalModuleCache[\"async_hooks\"];\n class AsyncLocalStorage {\n constructor() {\n this._store = void 0;\n }\n run(store, callback) {\n const previousStore = this._store;\n this._store = store;\n try {\n const args = Array.prototype.slice.call(arguments, 2);\n return callback.apply(void 0, args);\n } finally {\n this._store = previousStore;\n }\n }\n enterWith(store) {\n this._store = store;\n }\n getStore() {\n return this._store;\n }\n disable() {\n this._store = void 0;\n }\n exit(callback) {\n const previousStore = this._store;\n this._store = void 0;\n try {\n const args = Array.prototype.slice.call(arguments, 1);\n return callback.apply(void 0, args);\n } finally {\n this._store = previousStore;\n }\n }\n }\n class AsyncResource {\n constructor(type) {\n this.type = type;\n }\n runInAsyncScope(callback, thisArg) {\n const args = Array.prototype.slice.call(arguments, 2);\n return callback.apply(thisArg, args);\n }\n emitDestroy() {\n }\n }\n const asyncHooksModule = {\n AsyncLocalStorage,\n AsyncResource,\n createHook() {\n return {\n enable() {\n return this;\n },\n disable() {\n return this;\n }\n };\n },\n executionAsyncId() {\n return 1;\n },\n triggerAsyncId() {\n return 0;\n },\n executionAsyncResource() {\n return null;\n }\n };\n __internalModuleCache[\"async_hooks\"] = asyncHooksModule;\n _debugRequire(\"loaded\", name, \"async-hooks-special\");\n return asyncHooksModule;\n }\n if (name === \"diagnostics_channel\") {\n let _createChannel2 = function() {\n return {\n hasSubscribers: false,\n publish: function() {\n },\n subscribe: function() {\n },\n unsubscribe: function() {\n }\n };\n };\n var _createChannel = _createChannel2;\n if (__internalModuleCache[name]) return __internalModuleCache[name];\n const dcModule = {\n channel: function() {\n return _createChannel2();\n },\n hasSubscribers: function() {\n return false;\n },\n tracingChannel: function() {\n return {\n start: _createChannel2(),\n end: _createChannel2(),\n asyncStart: _createChannel2(),\n asyncEnd: _createChannel2(),\n error: _createChannel2(),\n traceSync: function(fn, context, thisArg) {\n var args = Array.prototype.slice.call(arguments, 3);\n return fn.apply(thisArg, args);\n },\n tracePromise: function(fn, context, thisArg) {\n var args = Array.prototype.slice.call(arguments, 3);\n return fn.apply(thisArg, args);\n },\n traceCallback: function(fn, context, thisArg) {\n var args = Array.prototype.slice.call(arguments, 3);\n return fn.apply(thisArg, args);\n }\n };\n },\n Channel: function Channel(name2) {\n this.hasSubscribers = false;\n this.publish = function() {\n };\n this.subscribe = function() {\n };\n this.unsubscribe = function() {\n };\n }\n };\n __internalModuleCache[name] = dcModule;\n _debugRequire(\"loaded\", name, \"diagnostics-channel-special\");\n return dcModule;\n }\n if (_deferredCoreModules.has(name)) {\n if (__internalModuleCache[name]) return __internalModuleCache[name];\n const deferredStub = _createDeferredModuleStub(name);\n __internalModuleCache[name] = deferredStub;\n _debugRequire(\"loaded\", name, \"deferred-stub\");\n return deferredStub;\n }\n if (_unsupportedCoreModules.has(name)) {\n throw new Error(name + \" is not supported in sandbox\");\n }\n const polyfillCode = _loadPolyfill.applySyncPromise(void 0, [name]);\n if (polyfillCode !== null) {\n if (__internalModuleCache[name]) return __internalModuleCache[name];\n const moduleObj = { exports: {} };\n _pendingModules[name] = moduleObj;\n let result = eval(polyfillCode);\n result = _patchPolyfill(name, result);\n if (typeof result === \"object\" && result !== null) {\n Object.assign(moduleObj.exports, result);\n } else {\n moduleObj.exports = result;\n }\n __internalModuleCache[name] = moduleObj.exports;\n delete _pendingModules[name];\n _debugRequire(\"loaded\", name, \"polyfill\");\n return __internalModuleCache[name];\n }\n resolved = _resolveFrom(name, fromDir);\n cacheKey = resolved;\n if (__internalModuleCache[cacheKey]) {\n _debugRequire(\"cache-hit\", name, cacheKey);\n return __internalModuleCache[cacheKey];\n }\n if (_pendingModules[cacheKey]) {\n _debugRequire(\"pending-hit\", name, cacheKey);\n return _pendingModules[cacheKey].exports;\n }\n var source;\n if (typeof _loadFileSync !== \"undefined\") {\n source = _loadFileSync.applySync(void 0, [resolved]);\n }\n if (source === null || source === void 0) {\n source = _loadFile.applySyncPromise(void 0, [resolved, \"require\"]);\n }\n if (source === null) {\n const err = new Error(\"Cannot find module '\" + resolved + \"'\");\n err.code = \"MODULE_NOT_FOUND\";\n throw err;\n }\n if (resolved.endsWith(\".json\")) {\n const parsed = JSON.parse(source);\n __internalModuleCache[cacheKey] = parsed;\n return parsed;\n }\n const normalizedSource = typeof source === \"string\" ? source.replace(/import\\.meta\\.url/g, \"__filename\").replace(/fileURLToPath\\(__filename\\)/g, \"__filename\").replace(/url\\.fileURLToPath\\(__filename\\)/g, \"__filename\").replace(/fileURLToPath\\.call\\(void 0, __filename\\)/g, \"__filename\") : source;\n const module = {\n exports: {},\n filename: resolved,\n dirname: _dirname(resolved),\n id: resolved,\n loaded: false\n };\n _pendingModules[cacheKey] = module;\n const prevModule = _currentModule;\n _currentModule = module;\n try {\n let wrapper;\n try {\n wrapper = new Function(\n \"exports\",\n \"require\",\n \"module\",\n \"__filename\",\n \"__dirname\",\n \"__dynamicImport\",\n normalizedSource + \"\\n//# sourceURL=\" + resolved\n );\n } catch (error) {\n const details = error && error.stack ? error.stack : String(error);\n throw new Error(\"failed to compile module \" + resolved + \": \" + details);\n }\n const moduleRequire = function(request) {\n return _requireFrom(request, module.dirname);\n };\n moduleRequire.resolve = function(request) {\n return _resolveFrom(request, module.dirname);\n };\n const moduleDynamicImport = function(specifier) {\n if (typeof globalThis.__dynamicImport === \"function\") {\n return globalThis.__dynamicImport(specifier, module.dirname);\n }\n return Promise.reject(new Error(\"Dynamic import is not initialized\"));\n };\n wrapper(\n module.exports,\n moduleRequire,\n module,\n resolved,\n module.dirname,\n moduleDynamicImport\n );\n module.loaded = true;\n } catch (error) {\n const details = error && error.stack ? error.stack : String(error);\n throw new Error(\"failed to execute module \" + resolved + \": \" + details);\n } finally {\n _currentModule = prevModule;\n }\n __internalModuleCache[cacheKey] = module.exports;\n delete _pendingModules[cacheKey];\n _debugRequire(\"loaded\", name, cacheKey);\n return module.exports;\n }\n __requireExposeCustomGlobal(\"_requireFrom\", _requireFrom);\n var __moduleCacheProxy = new Proxy(__internalModuleCache, {\n get(target, prop, receiver) {\n return Reflect.get(target, prop, receiver);\n },\n set(_target, prop) {\n throw new TypeError(\"Cannot set require.cache['\" + String(prop) + \"']\");\n },\n deleteProperty(_target, prop) {\n throw new TypeError(\"Cannot delete require.cache['\" + String(prop) + \"']\");\n },\n defineProperty(_target, prop) {\n throw new TypeError(\"Cannot define property '\" + String(prop) + \"' on require.cache\");\n },\n has(target, prop) {\n return Reflect.has(target, prop);\n },\n ownKeys(target) {\n return Reflect.ownKeys(target);\n },\n getOwnPropertyDescriptor(target, prop) {\n return Reflect.getOwnPropertyDescriptor(target, prop);\n }\n });\n globalThis.require.cache = __moduleCacheProxy;\n Object.defineProperty(globalThis, \"_moduleCache\", {\n value: __moduleCacheProxy,\n writable: false,\n configurable: true,\n enumerable: false\n });\n if (typeof _moduleModule !== \"undefined\") {\n if (_moduleModule.Module) {\n _moduleModule.Module._cache = __moduleCacheProxy;\n }\n _moduleModule._cache = __moduleCacheProxy;\n }\n})();\n", + "requireSetup": "\"use strict\";\n(() => {\n // ../core/isolate-runtime/src/inject/require-setup.ts\n var __requireExposeCustomGlobal = typeof globalThis.__runtimeExposeCustomGlobal === \"function\" ? globalThis.__runtimeExposeCustomGlobal : function exposeCustomGlobal(name2, value) {\n Object.defineProperty(globalThis, name2, {\n value,\n writable: false,\n configurable: false,\n enumerable: true\n });\n };\n if (typeof globalThis.AbortController === \"undefined\" || typeof globalThis.AbortSignal === \"undefined\") {\n class AbortSignal {\n constructor() {\n this.aborted = false;\n this.reason = void 0;\n this.onabort = null;\n this._listeners = [];\n }\n addEventListener(type, listener) {\n if (type !== \"abort\" || typeof listener !== \"function\") return;\n this._listeners.push(listener);\n }\n removeEventListener(type, listener) {\n if (type !== \"abort\" || typeof listener !== \"function\") return;\n const index = this._listeners.indexOf(listener);\n if (index !== -1) {\n this._listeners.splice(index, 1);\n }\n }\n dispatchEvent(event) {\n if (!event || event.type !== \"abort\") return false;\n if (typeof this.onabort === \"function\") {\n try {\n this.onabort.call(this, event);\n } catch {\n }\n }\n const listeners = this._listeners.slice();\n for (const listener of listeners) {\n try {\n listener.call(this, event);\n } catch {\n }\n }\n return true;\n }\n }\n class AbortController {\n constructor() {\n this.signal = new AbortSignal();\n }\n abort(reason) {\n if (this.signal.aborted) return;\n this.signal.aborted = true;\n this.signal.reason = reason;\n this.signal.dispatchEvent({ type: \"abort\" });\n }\n }\n __requireExposeCustomGlobal(\"AbortSignal\", AbortSignal);\n __requireExposeCustomGlobal(\"AbortController\", AbortController);\n }\n if (typeof globalThis.structuredClone !== \"function\") {\n let structuredClonePolyfill = function(value) {\n if (value === null || typeof value !== \"object\") {\n return value;\n }\n if (value instanceof ArrayBuffer) {\n return value.slice(0);\n }\n if (ArrayBuffer.isView(value)) {\n if (value instanceof Uint8Array) {\n return new Uint8Array(value);\n }\n return new value.constructor(value);\n }\n return JSON.parse(JSON.stringify(value));\n };\n structuredClonePolyfill2 = structuredClonePolyfill;\n __requireExposeCustomGlobal(\"structuredClone\", structuredClonePolyfill);\n }\n var structuredClonePolyfill2;\n if (typeof globalThis.SharedArrayBuffer === \"undefined\") {\n globalThis.SharedArrayBuffer = ArrayBuffer;\n __requireExposeCustomGlobal(\"SharedArrayBuffer\", ArrayBuffer);\n }\n if (typeof globalThis.btoa !== \"function\") {\n __requireExposeCustomGlobal(\"btoa\", function btoa(input) {\n return Buffer.from(String(input), \"binary\").toString(\"base64\");\n });\n }\n if (typeof globalThis.atob !== \"function\") {\n __requireExposeCustomGlobal(\"atob\", function atob(input) {\n return Buffer.from(String(input), \"base64\").toString(\"binary\");\n });\n }\n function _dirname(p) {\n const lastSlash = p.lastIndexOf(\"/\");\n if (lastSlash === -1) return \".\";\n if (lastSlash === 0) return \"/\";\n return p.slice(0, lastSlash);\n }\n if (typeof globalThis.TextDecoder === \"function\") {\n _OrigTextDecoder = globalThis.TextDecoder;\n _utf8Aliases = {\n \"utf-8\": true,\n \"utf8\": true,\n \"unicode-1-1-utf-8\": true,\n \"ascii\": true,\n \"us-ascii\": true,\n \"iso-8859-1\": true,\n \"latin1\": true,\n \"binary\": true,\n \"windows-1252\": true,\n \"utf-16le\": true,\n \"utf-16\": true,\n \"ucs-2\": true,\n \"ucs2\": true\n };\n globalThis.TextDecoder = function TextDecoder(encoding, options) {\n var label = encoding !== void 0 ? String(encoding).toLowerCase().replace(/\\s/g, \"\") : \"utf-8\";\n if (_utf8Aliases[label]) {\n return new _OrigTextDecoder(\"utf-8\", options);\n }\n return new _OrigTextDecoder(encoding, options);\n };\n globalThis.TextDecoder.prototype = _OrigTextDecoder.prototype;\n }\n var _OrigTextDecoder;\n var _utf8Aliases;\n function _patchPolyfill(name2, result2) {\n if (typeof result2 !== \"object\" && typeof result2 !== \"function\" || result2 === null) {\n return result2;\n }\n if (name2 === \"buffer\") {\n const maxLength = typeof result2.kMaxLength === \"number\" ? result2.kMaxLength : 2147483647;\n const maxStringLength = typeof result2.kStringMaxLength === \"number\" ? result2.kStringMaxLength : 536870888;\n if (typeof result2.constants !== \"object\" || result2.constants === null) {\n result2.constants = {};\n }\n if (typeof result2.constants.MAX_LENGTH !== \"number\") {\n result2.constants.MAX_LENGTH = maxLength;\n }\n if (typeof result2.constants.MAX_STRING_LENGTH !== \"number\") {\n result2.constants.MAX_STRING_LENGTH = maxStringLength;\n }\n if (typeof result2.kMaxLength !== \"number\") {\n result2.kMaxLength = maxLength;\n }\n if (typeof result2.kStringMaxLength !== \"number\") {\n result2.kStringMaxLength = maxStringLength;\n }\n const BufferCtor = result2.Buffer;\n if ((typeof BufferCtor === \"function\" || typeof BufferCtor === \"object\") && BufferCtor !== null) {\n if (typeof BufferCtor.kMaxLength !== \"number\") {\n BufferCtor.kMaxLength = maxLength;\n }\n if (typeof BufferCtor.kStringMaxLength !== \"number\") {\n BufferCtor.kStringMaxLength = maxStringLength;\n }\n if (typeof BufferCtor.constants !== \"object\" || BufferCtor.constants === null) {\n BufferCtor.constants = result2.constants;\n }\n var proto = BufferCtor.prototype;\n if (proto && typeof proto.utf8Slice !== \"function\") {\n var encodings = [\"utf8\", \"latin1\", \"ascii\", \"hex\", \"base64\", \"ucs2\", \"utf16le\"];\n for (var ei = 0; ei < encodings.length; ei++) {\n var enc = encodings[ei];\n (function(e) {\n if (typeof proto[e + \"Slice\"] !== \"function\") {\n proto[e + \"Slice\"] = function(start, end) {\n return this.toString(e, start, end);\n };\n }\n if (typeof proto[e + \"Write\"] !== \"function\") {\n proto[e + \"Write\"] = function(string, offset, length) {\n return this.write(string, offset, length, e);\n };\n }\n })(enc);\n }\n }\n }\n return result2;\n }\n if (name2 === \"util\" && typeof result2.formatWithOptions === \"undefined\" && typeof result2.format === \"function\") {\n result2.formatWithOptions = function formatWithOptions(inspectOptions, ...args) {\n return result2.format.apply(null, args);\n };\n }\n if (name2 === \"util\") {\n return result2;\n }\n if (name2 === \"url\") {\n const OriginalURL = result2.URL;\n if (typeof OriginalURL !== \"function\" || OriginalURL._patched) {\n return result2;\n }\n const PatchedURL = function PatchedURL2(url, base) {\n if (typeof url === \"string\" && url.startsWith(\"file:\") && !url.startsWith(\"file://\") && base === void 0) {\n if (typeof process !== \"undefined\" && typeof process.cwd === \"function\") {\n const cwd = process.cwd();\n if (cwd) {\n try {\n return new OriginalURL(url, \"file://\" + cwd + \"/\");\n } catch (e) {\n }\n }\n }\n }\n return base !== void 0 ? new OriginalURL(url, base) : new OriginalURL(url);\n };\n Object.keys(OriginalURL).forEach(function(key) {\n try {\n PatchedURL[key] = OriginalURL[key];\n } catch {\n }\n });\n Object.setPrototypeOf(PatchedURL, OriginalURL);\n PatchedURL.prototype = OriginalURL.prototype;\n PatchedURL._patched = true;\n const descriptor = Object.getOwnPropertyDescriptor(result2, \"URL\");\n if (descriptor && descriptor.configurable !== true && descriptor.writable !== true && typeof descriptor.set !== \"function\") {\n return result2;\n }\n try {\n result2.URL = PatchedURL;\n } catch {\n try {\n Object.defineProperty(result2, \"URL\", {\n value: PatchedURL,\n writable: true,\n configurable: true,\n enumerable: descriptor?.enumerable ?? true\n });\n } catch {\n }\n }\n return result2;\n }\n if (name2 === \"zlib\") {\n if (typeof result2.constants !== \"object\" || result2.constants === null) {\n var zlibConstants = {};\n var constKeys = Object.keys(result2);\n for (var ci = 0; ci < constKeys.length; ci++) {\n var ck = constKeys[ci];\n if (ck.indexOf(\"Z_\") === 0 && typeof result2[ck] === \"number\") {\n zlibConstants[ck] = result2[ck];\n }\n }\n if (typeof zlibConstants.DEFLATE !== \"number\") zlibConstants.DEFLATE = 1;\n if (typeof zlibConstants.INFLATE !== \"number\") zlibConstants.INFLATE = 2;\n if (typeof zlibConstants.GZIP !== \"number\") zlibConstants.GZIP = 3;\n if (typeof zlibConstants.DEFLATERAW !== \"number\") zlibConstants.DEFLATERAW = 4;\n if (typeof zlibConstants.INFLATERAW !== \"number\") zlibConstants.INFLATERAW = 5;\n if (typeof zlibConstants.UNZIP !== \"number\") zlibConstants.UNZIP = 6;\n if (typeof zlibConstants.GUNZIP !== \"number\") zlibConstants.GUNZIP = 7;\n result2.constants = zlibConstants;\n }\n return result2;\n }\n if (name2 === \"crypto\") {\n let createCryptoRangeError2 = function(name3, message) {\n var error = new RangeError(message);\n error.code = \"ERR_OUT_OF_RANGE\";\n error.name = \"RangeError\";\n return error;\n }, createCryptoError2 = function(code, message) {\n var error = new Error(message);\n error.code = code;\n return error;\n }, encodeCryptoResult2 = function(buffer, encoding) {\n if (!encoding || encoding === \"buffer\") return buffer;\n return buffer.toString(encoding);\n }, isSharedArrayBufferInstance2 = function(value) {\n return typeof SharedArrayBuffer !== \"undefined\" && value instanceof SharedArrayBuffer;\n }, isBinaryLike2 = function(value) {\n return Buffer.isBuffer(value) || ArrayBuffer.isView(value) || value instanceof ArrayBuffer || isSharedArrayBufferInstance2(value);\n }, normalizeByteSource2 = function(value, name3, options) {\n var allowNull = options && options.allowNull;\n if (allowNull && value === null) {\n return null;\n }\n if (typeof value === \"string\") {\n return Buffer.from(value, \"utf8\");\n }\n if (Buffer.isBuffer(value)) {\n return Buffer.from(value);\n }\n if (ArrayBuffer.isView(value)) {\n return Buffer.from(value.buffer, value.byteOffset, value.byteLength);\n }\n if (value instanceof ArrayBuffer || isSharedArrayBufferInstance2(value)) {\n return Buffer.from(value);\n }\n throw createInvalidArgTypeError(\n name3,\n \"of type string or an instance of ArrayBuffer, Buffer, TypedArray, or DataView\",\n value\n );\n }, serializeCipherBridgeOptions2 = function(options) {\n if (!options) {\n return \"\";\n }\n var serialized = {};\n if (options.authTagLength !== void 0) {\n serialized.authTagLength = options.authTagLength;\n }\n if (options.authTag) {\n serialized.authTag = options.authTag.toString(\"base64\");\n }\n if (options.aad) {\n serialized.aad = options.aad.toString(\"base64\");\n }\n if (options.aadOptions !== void 0) {\n serialized.aadOptions = options.aadOptions;\n }\n if (options.autoPadding !== void 0) {\n serialized.autoPadding = options.autoPadding;\n }\n if (options.validateOnly !== void 0) {\n serialized.validateOnly = options.validateOnly;\n }\n return JSON.stringify(serialized);\n };\n var createCryptoRangeError = createCryptoRangeError2, createCryptoError = createCryptoError2, encodeCryptoResult = encodeCryptoResult2, isSharedArrayBufferInstance = isSharedArrayBufferInstance2, isBinaryLike = isBinaryLike2, normalizeByteSource = normalizeByteSource2, serializeCipherBridgeOptions = serializeCipherBridgeOptions2;\n var _runtimeRequire = globalThis.require;\n var _streamModule = _runtimeRequire && _runtimeRequire(\"stream\");\n var _utilModule = _runtimeRequire && _runtimeRequire(\"util\");\n var _Transform = _streamModule && _streamModule.Transform;\n var _inherits = _utilModule && _utilModule.inherits;\n if (typeof _cryptoHashDigest !== \"undefined\") {\n let SandboxHash2 = function(algorithm, options) {\n if (!(this instanceof SandboxHash2)) {\n return new SandboxHash2(algorithm, options);\n }\n if (!_Transform || !_inherits) {\n throw new Error(\"stream.Transform is required for crypto.Hash\");\n }\n if (typeof algorithm !== \"string\") {\n throw createInvalidArgTypeError(\"algorithm\", \"of type string\", algorithm);\n }\n _Transform.call(this, options);\n this._algorithm = algorithm;\n this._chunks = [];\n this._finalized = false;\n this._cachedDigest = null;\n this._allowCachedDigest = false;\n };\n var SandboxHash = SandboxHash2;\n _inherits(SandboxHash2, _Transform);\n SandboxHash2.prototype.update = function update(data, inputEncoding) {\n if (this._finalized) {\n throw createCryptoError2(\"ERR_CRYPTO_HASH_FINALIZED\", \"Digest already called\");\n }\n if (typeof data === \"string\") {\n this._chunks.push(Buffer.from(data, inputEncoding || \"utf8\"));\n } else if (isBinaryLike2(data)) {\n this._chunks.push(Buffer.from(data));\n } else {\n throw createInvalidArgTypeError(\n \"data\",\n \"one of type string, Buffer, TypedArray, or DataView\",\n data\n );\n }\n return this;\n };\n SandboxHash2.prototype._finishDigest = function _finishDigest() {\n if (this._cachedDigest) {\n return this._cachedDigest;\n }\n var combined = Buffer.concat(this._chunks);\n var resultBase64 = _cryptoHashDigest.applySync(void 0, [\n this._algorithm,\n combined.toString(\"base64\")\n ]);\n this._cachedDigest = Buffer.from(resultBase64, \"base64\");\n this._finalized = true;\n return this._cachedDigest;\n };\n SandboxHash2.prototype.digest = function digest(encoding) {\n if (this._finalized && !this._allowCachedDigest) {\n throw createCryptoError2(\"ERR_CRYPTO_HASH_FINALIZED\", \"Digest already called\");\n }\n var resultBuffer = this._finishDigest();\n this._allowCachedDigest = false;\n return encodeCryptoResult2(resultBuffer, encoding);\n };\n SandboxHash2.prototype.copy = function copy() {\n if (this._finalized) {\n throw createCryptoError2(\"ERR_CRYPTO_HASH_FINALIZED\", \"Digest already called\");\n }\n var c = new SandboxHash2(this._algorithm);\n c._chunks = this._chunks.slice();\n return c;\n };\n SandboxHash2.prototype._transform = function _transform(chunk, encoding, callback) {\n try {\n this.update(chunk, encoding === \"buffer\" ? void 0 : encoding);\n callback();\n } catch (error) {\n callback(normalizeCryptoBridgeError(error));\n }\n };\n SandboxHash2.prototype._flush = function _flush(callback) {\n try {\n var output = this._finishDigest();\n this._allowCachedDigest = true;\n this.push(output);\n callback();\n } catch (error) {\n callback(normalizeCryptoBridgeError(error));\n }\n };\n result2.createHash = function createHash(algorithm, options) {\n return new SandboxHash2(algorithm, options);\n };\n result2.Hash = SandboxHash2;\n }\n if (typeof _cryptoHmacDigest !== \"undefined\") {\n let SandboxHmac2 = function(algorithm, key) {\n this._algorithm = algorithm;\n if (typeof key === \"string\") {\n this._key = Buffer.from(key, \"utf8\");\n } else if (key && typeof key === \"object\" && key._raw !== void 0) {\n this._key = Buffer.from(key._raw, \"base64\");\n } else if (key && typeof key === \"object\" && key._pem !== void 0) {\n this._key = Buffer.from(key._pem, \"utf8\");\n } else {\n this._key = Buffer.from(key);\n }\n this._chunks = [];\n };\n var SandboxHmac = SandboxHmac2;\n SandboxHmac2.prototype.update = function update(data, inputEncoding) {\n if (typeof data === \"string\") {\n this._chunks.push(Buffer.from(data, inputEncoding || \"utf8\"));\n } else {\n this._chunks.push(Buffer.from(data));\n }\n return this;\n };\n SandboxHmac2.prototype.digest = function digest(encoding) {\n var combined = Buffer.concat(this._chunks);\n var resultBase64 = _cryptoHmacDigest.applySync(void 0, [\n this._algorithm,\n this._key.toString(\"base64\"),\n combined.toString(\"base64\")\n ]);\n var resultBuffer = Buffer.from(resultBase64, \"base64\");\n if (!encoding || encoding === \"buffer\") return resultBuffer;\n return resultBuffer.toString(encoding);\n };\n SandboxHmac2.prototype.copy = function copy() {\n var c = new SandboxHmac2(this._algorithm, this._key);\n c._chunks = this._chunks.slice();\n return c;\n };\n SandboxHmac2.prototype.write = function write(data, encoding) {\n this.update(data, encoding);\n return true;\n };\n SandboxHmac2.prototype.end = function end(data, encoding) {\n if (data) this.update(data, encoding);\n };\n result2.createHmac = function createHmac(algorithm, key) {\n return new SandboxHmac2(algorithm, key);\n };\n result2.Hmac = SandboxHmac2;\n }\n if (typeof _cryptoRandomFill !== \"undefined\") {\n result2.randomBytes = function randomBytes(size, callback) {\n if (typeof size !== \"number\" || size < 0 || size !== (size | 0)) {\n var err = new TypeError('The \"size\" argument must be of type number. Received type ' + typeof size);\n if (typeof callback === \"function\") {\n callback(err);\n return;\n }\n throw err;\n }\n if (size > 2147483647) {\n var rangeErr = new RangeError('The value of \"size\" is out of range. It must be >= 0 && <= 2147483647. Received ' + size);\n if (typeof callback === \"function\") {\n callback(rangeErr);\n return;\n }\n throw rangeErr;\n }\n var buf = Buffer.alloc(size);\n var offset = 0;\n while (offset < size) {\n var chunk = Math.min(size - offset, 65536);\n var base64 = _cryptoRandomFill.applySync(void 0, [chunk]);\n var hostBytes = Buffer.from(base64, \"base64\");\n hostBytes.copy(buf, offset);\n offset += chunk;\n }\n if (typeof callback === \"function\") {\n callback(null, buf);\n return;\n }\n return buf;\n };\n result2.randomFillSync = function randomFillSync(buffer, offset, size) {\n if (offset === void 0) offset = 0;\n var byteLength = buffer.byteLength !== void 0 ? buffer.byteLength : buffer.length;\n if (size === void 0) size = byteLength - offset;\n if (offset < 0 || size < 0 || offset + size > byteLength) {\n throw new RangeError('The value of \"offset + size\" is out of range.');\n }\n var bytes = new Uint8Array(buffer.buffer || buffer, buffer.byteOffset ? buffer.byteOffset + offset : offset, size);\n var filled = 0;\n while (filled < size) {\n var chunk = Math.min(size - filled, 65536);\n var base64 = _cryptoRandomFill.applySync(void 0, [chunk]);\n var hostBytes = Buffer.from(base64, \"base64\");\n bytes.set(hostBytes, filled);\n filled += chunk;\n }\n return buffer;\n };\n result2.randomFill = function randomFill(buffer, offsetOrCb, sizeOrCb, callback) {\n var offset = 0;\n var size;\n var cb;\n if (typeof offsetOrCb === \"function\") {\n cb = offsetOrCb;\n } else if (typeof sizeOrCb === \"function\") {\n offset = offsetOrCb || 0;\n cb = sizeOrCb;\n } else {\n offset = offsetOrCb || 0;\n size = sizeOrCb;\n cb = callback;\n }\n if (typeof cb !== \"function\") {\n throw new TypeError(\"Callback must be a function\");\n }\n try {\n result2.randomFillSync(buffer, offset, size);\n cb(null, buffer);\n } catch (e) {\n cb(e);\n }\n };\n result2.randomInt = function randomInt(minOrMax, maxOrCb, callback) {\n var min, max, cb;\n if (typeof maxOrCb === \"function\" || maxOrCb === void 0) {\n min = 0;\n max = minOrMax;\n cb = maxOrCb;\n } else {\n min = minOrMax;\n max = maxOrCb;\n cb = callback;\n }\n if (!Number.isSafeInteger(min)) {\n var minErr = new TypeError('The \"min\" argument must be a safe integer');\n if (typeof cb === \"function\") {\n cb(minErr);\n return;\n }\n throw minErr;\n }\n if (!Number.isSafeInteger(max)) {\n var maxErr = new TypeError('The \"max\" argument must be a safe integer');\n if (typeof cb === \"function\") {\n cb(maxErr);\n return;\n }\n throw maxErr;\n }\n if (max <= min) {\n var rangeErr2 = new RangeError('The value of \"max\" is out of range. It must be greater than the value of \"min\" (' + min + \")\");\n if (typeof cb === \"function\") {\n cb(rangeErr2);\n return;\n }\n throw rangeErr2;\n }\n var range = max - min;\n var bytes = 6;\n var maxValid = Math.pow(2, 48) - Math.pow(2, 48) % range;\n var val;\n do {\n var base64 = _cryptoRandomFill.applySync(void 0, [bytes]);\n var buf = Buffer.from(base64, \"base64\");\n val = buf.readUIntBE(0, bytes);\n } while (val >= maxValid);\n var result22 = min + val % range;\n if (typeof cb === \"function\") {\n cb(null, result22);\n return;\n }\n return result22;\n };\n }\n if (typeof _cryptoPbkdf2 !== \"undefined\") {\n let createPbkdf2ArgTypeError2 = function(name3, value) {\n var received;\n if (value == null) {\n received = \" Received \" + value;\n } else if (typeof value === \"object\") {\n received = value.constructor && value.constructor.name ? \" Received an instance of \" + value.constructor.name : \" Received [object Object]\";\n } else {\n var inspected = typeof value === \"string\" ? \"'\" + value + \"'\" : String(value);\n received = \" Received type \" + typeof value + \" (\" + inspected + \")\";\n }\n var error = new TypeError('The \"' + name3 + '\" argument must be of type number.' + received);\n error.code = \"ERR_INVALID_ARG_TYPE\";\n return error;\n }, validatePbkdf2Args2 = function(password, salt, iterations, keylen, digest) {\n var pwBuf = normalizeByteSource2(password, \"password\");\n var saltBuf = normalizeByteSource2(salt, \"salt\");\n if (typeof iterations !== \"number\") {\n throw createPbkdf2ArgTypeError2(\"iterations\", iterations);\n }\n if (!Number.isInteger(iterations)) {\n throw createCryptoRangeError2(\n \"iterations\",\n 'The value of \"iterations\" is out of range. It must be an integer. Received ' + iterations\n );\n }\n if (iterations < 1 || iterations > 2147483647) {\n throw createCryptoRangeError2(\n \"iterations\",\n 'The value of \"iterations\" is out of range. It must be >= 1 && <= 2147483647. Received ' + iterations\n );\n }\n if (typeof keylen !== \"number\") {\n throw createPbkdf2ArgTypeError2(\"keylen\", keylen);\n }\n if (!Number.isInteger(keylen)) {\n throw createCryptoRangeError2(\n \"keylen\",\n 'The value of \"keylen\" is out of range. It must be an integer. Received ' + keylen\n );\n }\n if (keylen < 0 || keylen > 2147483647) {\n throw createCryptoRangeError2(\n \"keylen\",\n 'The value of \"keylen\" is out of range. It must be >= 0 && <= 2147483647. Received ' + keylen\n );\n }\n if (typeof digest !== \"string\") {\n throw createInvalidArgTypeError(\"digest\", \"of type string\", digest);\n }\n return {\n password: pwBuf,\n salt: saltBuf\n };\n };\n var createPbkdf2ArgTypeError = createPbkdf2ArgTypeError2, validatePbkdf2Args = validatePbkdf2Args2;\n result2.pbkdf2Sync = function pbkdf2Sync(password, salt, iterations, keylen, digest) {\n var normalized = validatePbkdf2Args2(password, salt, iterations, keylen, digest);\n try {\n var resultBase64 = _cryptoPbkdf2.applySync(void 0, [\n normalized.password.toString(\"base64\"),\n normalized.salt.toString(\"base64\"),\n iterations,\n keylen,\n digest\n ]);\n return Buffer.from(resultBase64, \"base64\");\n } catch (error) {\n throw normalizeCryptoBridgeError(error);\n }\n };\n result2.pbkdf2 = function pbkdf2(password, salt, iterations, keylen, digest, callback) {\n if (typeof digest === \"function\" && callback === void 0) {\n callback = digest;\n digest = void 0;\n }\n if (typeof callback !== \"function\") {\n throw createInvalidArgTypeError(\"callback\", \"of type function\", callback);\n }\n try {\n var derived = result2.pbkdf2Sync(password, salt, iterations, keylen, digest);\n scheduleCryptoCallback(callback, [null, derived]);\n } catch (e) {\n throw normalizeCryptoBridgeError(e);\n }\n };\n }\n if (typeof _cryptoScrypt !== \"undefined\") {\n result2.scryptSync = function scryptSync(password, salt, keylen, options) {\n var pwBuf = typeof password === \"string\" ? Buffer.from(password, \"utf8\") : Buffer.from(password);\n var saltBuf = typeof salt === \"string\" ? Buffer.from(salt, \"utf8\") : Buffer.from(salt);\n var opts = {};\n if (options) {\n if (options.N !== void 0) opts.N = options.N;\n if (options.r !== void 0) opts.r = options.r;\n if (options.p !== void 0) opts.p = options.p;\n if (options.maxmem !== void 0) opts.maxmem = options.maxmem;\n if (options.cost !== void 0) opts.N = options.cost;\n if (options.blockSize !== void 0) opts.r = options.blockSize;\n if (options.parallelization !== void 0) opts.p = options.parallelization;\n }\n var resultBase64 = _cryptoScrypt.applySync(void 0, [\n pwBuf.toString(\"base64\"),\n saltBuf.toString(\"base64\"),\n keylen,\n JSON.stringify(opts)\n ]);\n return Buffer.from(resultBase64, \"base64\");\n };\n result2.scrypt = function scrypt(password, salt, keylen, optionsOrCb, callback) {\n var opts = optionsOrCb;\n var cb = callback;\n if (typeof optionsOrCb === \"function\") {\n opts = void 0;\n cb = optionsOrCb;\n }\n try {\n var derived = result2.scryptSync(password, salt, keylen, opts);\n cb(null, derived);\n } catch (e) {\n cb(e);\n }\n };\n }\n if (typeof _cryptoCipheriv !== \"undefined\") {\n let SandboxCipher2 = function(algorithm, key, iv, options) {\n if (!(this instanceof SandboxCipher2)) {\n return new SandboxCipher2(algorithm, key, iv, options);\n }\n if (typeof algorithm !== \"string\") {\n throw createInvalidArgTypeError(\"cipher\", \"of type string\", algorithm);\n }\n _Transform.call(this);\n this._algorithm = algorithm;\n this._key = normalizeByteSource2(key, \"key\");\n this._iv = normalizeByteSource2(iv, \"iv\", { allowNull: true });\n this._options = options || void 0;\n this._authTag = null;\n this._finalized = false;\n this._sessionCreated = false;\n this._sessionId = void 0;\n this._aad = null;\n this._aadOptions = void 0;\n this._autoPadding = void 0;\n this._chunks = [];\n this._bufferedMode = !_useSessionCipher || !!options;\n if (!this._bufferedMode) {\n this._ensureSession();\n } else if (!options) {\n _cryptoCipheriv.applySync(void 0, [\n this._algorithm,\n this._key.toString(\"base64\"),\n this._iv === null ? null : this._iv.toString(\"base64\"),\n \"\",\n serializeCipherBridgeOptions2({ validateOnly: true })\n ]);\n }\n };\n var SandboxCipher = SandboxCipher2;\n var _useSessionCipher = typeof _cryptoCipherivCreate !== \"undefined\";\n _inherits(SandboxCipher2, _Transform);\n SandboxCipher2.prototype._ensureSession = function _ensureSession() {\n if (this._bufferedMode || this._sessionCreated) {\n return;\n }\n this._sessionCreated = true;\n this._sessionId = _cryptoCipherivCreate.applySync(void 0, [\n \"cipher\",\n this._algorithm,\n this._key.toString(\"base64\"),\n this._iv === null ? null : this._iv.toString(\"base64\"),\n serializeCipherBridgeOptions2(this._getBridgeOptions())\n ]);\n };\n SandboxCipher2.prototype._getBridgeOptions = function _getBridgeOptions() {\n var options = {};\n if (this._options && this._options.authTagLength !== void 0) {\n options.authTagLength = this._options.authTagLength;\n }\n if (this._aad) {\n options.aad = this._aad;\n }\n if (this._aadOptions !== void 0) {\n options.aadOptions = this._aadOptions;\n }\n if (this._autoPadding !== void 0) {\n options.autoPadding = this._autoPadding;\n }\n return Object.keys(options).length === 0 ? null : options;\n };\n SandboxCipher2.prototype.update = function update(data, inputEncoding, outputEncoding) {\n if (this._finalized) {\n throw new Error(\"Attempting to call update() after final()\");\n }\n var buf;\n if (typeof data === \"string\") {\n buf = Buffer.from(data, inputEncoding || \"utf8\");\n } else {\n buf = normalizeByteSource2(data, \"data\");\n }\n if (!this._bufferedMode) {\n this._ensureSession();\n var resultBase64 = _cryptoCipherivUpdate.applySync(void 0, [this._sessionId, buf.toString(\"base64\")]);\n var resultBuffer = Buffer.from(resultBase64, \"base64\");\n return encodeCryptoResult2(resultBuffer, outputEncoding);\n }\n this._chunks.push(buf);\n return encodeCryptoResult2(Buffer.alloc(0), outputEncoding);\n };\n SandboxCipher2.prototype.final = function final(outputEncoding) {\n if (this._finalized) throw new Error(\"Attempting to call final() after already finalized\");\n this._finalized = true;\n var parsed;\n if (!this._bufferedMode) {\n this._ensureSession();\n var resultJson = _cryptoCipherivFinal.applySync(void 0, [this._sessionId]);\n parsed = JSON.parse(resultJson);\n } else {\n var combined = Buffer.concat(this._chunks);\n var resultJson2 = _cryptoCipheriv.applySync(void 0, [\n this._algorithm,\n this._key.toString(\"base64\"),\n this._iv === null ? null : this._iv.toString(\"base64\"),\n combined.toString(\"base64\"),\n serializeCipherBridgeOptions2(this._getBridgeOptions())\n ]);\n parsed = JSON.parse(resultJson2);\n }\n if (parsed.authTag) {\n this._authTag = Buffer.from(parsed.authTag, \"base64\");\n }\n var resultBuffer = Buffer.from(parsed.data, \"base64\");\n return encodeCryptoResult2(resultBuffer, outputEncoding);\n };\n SandboxCipher2.prototype.getAuthTag = function getAuthTag() {\n if (!this._finalized) throw new Error(\"Cannot call getAuthTag before final()\");\n if (!this._authTag) throw new Error(\"Auth tag is not available\");\n return this._authTag;\n };\n SandboxCipher2.prototype.setAAD = function setAAD(aad, options) {\n this._bufferedMode = true;\n this._aad = normalizeByteSource2(aad, \"buffer\");\n this._aadOptions = options;\n return this;\n };\n SandboxCipher2.prototype.setAutoPadding = function setAutoPadding(autoPadding) {\n this._bufferedMode = true;\n this._autoPadding = autoPadding !== false;\n return this;\n };\n SandboxCipher2.prototype._transform = function _transform(chunk, encoding, callback) {\n try {\n var output = this.update(chunk, encoding === \"buffer\" ? void 0 : encoding);\n if (output.length) {\n this.push(output);\n }\n callback();\n } catch (error) {\n callback(normalizeCryptoBridgeError(error));\n }\n };\n SandboxCipher2.prototype._flush = function _flush(callback) {\n try {\n var output = this.final();\n if (output.length) {\n this.push(output);\n }\n callback();\n } catch (error) {\n callback(normalizeCryptoBridgeError(error));\n }\n };\n result2.createCipheriv = function createCipheriv(algorithm, key, iv, options) {\n return new SandboxCipher2(algorithm, key, iv, options);\n };\n result2.Cipheriv = SandboxCipher2;\n }\n if (typeof _cryptoDecipheriv !== \"undefined\") {\n let SandboxDecipher2 = function(algorithm, key, iv, options) {\n if (!(this instanceof SandboxDecipher2)) {\n return new SandboxDecipher2(algorithm, key, iv, options);\n }\n if (typeof algorithm !== \"string\") {\n throw createInvalidArgTypeError(\"cipher\", \"of type string\", algorithm);\n }\n _Transform.call(this);\n this._algorithm = algorithm;\n this._key = normalizeByteSource2(key, \"key\");\n this._iv = normalizeByteSource2(iv, \"iv\", { allowNull: true });\n this._options = options || void 0;\n this._authTag = null;\n this._finalized = false;\n this._sessionCreated = false;\n this._aad = null;\n this._aadOptions = void 0;\n this._autoPadding = void 0;\n this._chunks = [];\n this._bufferedMode = !_useSessionCipher || !!options;\n if (!this._bufferedMode) {\n this._ensureSession();\n } else if (!options) {\n _cryptoDecipheriv.applySync(void 0, [\n this._algorithm,\n this._key.toString(\"base64\"),\n this._iv === null ? null : this._iv.toString(\"base64\"),\n \"\",\n serializeCipherBridgeOptions2({ validateOnly: true })\n ]);\n }\n };\n var SandboxDecipher = SandboxDecipher2;\n _inherits(SandboxDecipher2, _Transform);\n SandboxDecipher2.prototype._ensureSession = function _ensureSession() {\n if (!this._bufferedMode && !this._sessionCreated) {\n this._sessionCreated = true;\n this._sessionId = _cryptoCipherivCreate.applySync(void 0, [\n \"decipher\",\n this._algorithm,\n this._key.toString(\"base64\"),\n this._iv === null ? null : this._iv.toString(\"base64\"),\n serializeCipherBridgeOptions2(this._getBridgeOptions())\n ]);\n }\n };\n SandboxDecipher2.prototype._getBridgeOptions = function _getBridgeOptions() {\n var options = {};\n if (this._options && this._options.authTagLength !== void 0) {\n options.authTagLength = this._options.authTagLength;\n }\n if (this._authTag) {\n options.authTag = this._authTag;\n }\n if (this._aad) {\n options.aad = this._aad;\n }\n if (this._aadOptions !== void 0) {\n options.aadOptions = this._aadOptions;\n }\n if (this._autoPadding !== void 0) {\n options.autoPadding = this._autoPadding;\n }\n return Object.keys(options).length === 0 ? null : options;\n };\n SandboxDecipher2.prototype.update = function update(data, inputEncoding, outputEncoding) {\n if (this._finalized) {\n throw new Error(\"Attempting to call update() after final()\");\n }\n var buf;\n if (typeof data === \"string\") {\n buf = Buffer.from(data, inputEncoding || \"utf8\");\n } else {\n buf = normalizeByteSource2(data, \"data\");\n }\n if (!this._bufferedMode) {\n this._ensureSession();\n var resultBase64 = _cryptoCipherivUpdate.applySync(void 0, [this._sessionId, buf.toString(\"base64\")]);\n var resultBuffer = Buffer.from(resultBase64, \"base64\");\n return encodeCryptoResult2(resultBuffer, outputEncoding);\n }\n this._chunks.push(buf);\n return encodeCryptoResult2(Buffer.alloc(0), outputEncoding);\n };\n SandboxDecipher2.prototype.final = function final(outputEncoding) {\n if (this._finalized) throw new Error(\"Attempting to call final() after already finalized\");\n this._finalized = true;\n var resultBuffer;\n if (!this._bufferedMode) {\n this._ensureSession();\n var resultJson = _cryptoCipherivFinal.applySync(void 0, [this._sessionId]);\n var parsed = JSON.parse(resultJson);\n resultBuffer = Buffer.from(parsed.data, \"base64\");\n } else {\n var combined = Buffer.concat(this._chunks);\n var options = {};\n var resultBase64 = _cryptoDecipheriv.applySync(void 0, [\n this._algorithm,\n this._key.toString(\"base64\"),\n this._iv === null ? null : this._iv.toString(\"base64\"),\n combined.toString(\"base64\"),\n serializeCipherBridgeOptions2(this._getBridgeOptions())\n ]);\n resultBuffer = Buffer.from(resultBase64, \"base64\");\n }\n return encodeCryptoResult2(resultBuffer, outputEncoding);\n };\n SandboxDecipher2.prototype.setAuthTag = function setAuthTag(tag) {\n this._bufferedMode = true;\n this._authTag = typeof tag === \"string\" ? Buffer.from(tag, \"base64\") : normalizeByteSource2(tag, \"buffer\");\n return this;\n };\n SandboxDecipher2.prototype.setAAD = function setAAD(aad, options) {\n this._bufferedMode = true;\n this._aad = normalizeByteSource2(aad, \"buffer\");\n this._aadOptions = options;\n return this;\n };\n SandboxDecipher2.prototype.setAutoPadding = function setAutoPadding(autoPadding) {\n this._bufferedMode = true;\n this._autoPadding = autoPadding !== false;\n return this;\n };\n SandboxDecipher2.prototype._transform = function _transform(chunk, encoding, callback) {\n try {\n var output = this.update(chunk, encoding === \"buffer\" ? void 0 : encoding);\n if (output.length) {\n this.push(output);\n }\n callback();\n } catch (error) {\n callback(normalizeCryptoBridgeError(error));\n }\n };\n SandboxDecipher2.prototype._flush = function _flush(callback) {\n try {\n var output = this.final();\n if (output.length) {\n this.push(output);\n }\n callback();\n } catch (error) {\n callback(normalizeCryptoBridgeError(error));\n }\n };\n result2.createDecipheriv = function createDecipheriv(algorithm, key, iv, options) {\n return new SandboxDecipher2(algorithm, key, iv, options);\n };\n result2.Decipheriv = SandboxDecipher2;\n }\n if (typeof _cryptoSign !== \"undefined\") {\n result2.sign = function sign(algorithm, data, key) {\n var dataBuf = typeof data === \"string\" ? Buffer.from(data, \"utf8\") : Buffer.from(data);\n var sigBase64;\n try {\n sigBase64 = _cryptoSign.applySync(void 0, [\n algorithm === void 0 ? null : algorithm,\n dataBuf.toString(\"base64\"),\n JSON.stringify(serializeBridgeValue(key))\n ]);\n } catch (error) {\n throw normalizeCryptoBridgeError(error);\n }\n return Buffer.from(sigBase64, \"base64\");\n };\n }\n if (typeof _cryptoVerify !== \"undefined\") {\n result2.verify = function verify(algorithm, data, key, signature) {\n var dataBuf = typeof data === \"string\" ? Buffer.from(data, \"utf8\") : Buffer.from(data);\n var sigBuf = typeof signature === \"string\" ? Buffer.from(signature, \"base64\") : Buffer.from(signature);\n try {\n return _cryptoVerify.applySync(void 0, [\n algorithm === void 0 ? null : algorithm,\n dataBuf.toString(\"base64\"),\n JSON.stringify(serializeBridgeValue(key)),\n sigBuf.toString(\"base64\")\n ]);\n } catch (error) {\n throw normalizeCryptoBridgeError(error);\n }\n };\n }\n if (typeof _cryptoAsymmetricOp !== \"undefined\") {\n let asymmetricBridgeCall2 = function(operation, key, data) {\n var dataBuf = toRawBuffer(data);\n var resultBase64;\n try {\n resultBase64 = _cryptoAsymmetricOp.applySync(void 0, [\n operation,\n JSON.stringify(serializeBridgeValue(key)),\n dataBuf.toString(\"base64\")\n ]);\n } catch (error) {\n throw normalizeCryptoBridgeError(error);\n }\n return Buffer.from(resultBase64, \"base64\");\n };\n var asymmetricBridgeCall = asymmetricBridgeCall2;\n result2.publicEncrypt = function publicEncrypt(key, data) {\n return asymmetricBridgeCall2(\"publicEncrypt\", key, data);\n };\n result2.privateDecrypt = function privateDecrypt(key, data) {\n return asymmetricBridgeCall2(\"privateDecrypt\", key, data);\n };\n result2.privateEncrypt = function privateEncrypt(key, data) {\n return asymmetricBridgeCall2(\"privateEncrypt\", key, data);\n };\n result2.publicDecrypt = function publicDecrypt(key, data) {\n return asymmetricBridgeCall2(\"publicDecrypt\", key, data);\n };\n }\n if (typeof _cryptoDiffieHellmanSessionCreate !== \"undefined\" && typeof _cryptoDiffieHellmanSessionCall !== \"undefined\") {\n let serializeDhKeyObject2 = function(value) {\n if (value.type === \"secret\") {\n return {\n type: \"secret\",\n raw: Buffer.from(value.export()).toString(\"base64\")\n };\n }\n return {\n type: value.type,\n pem: value._pem || value.export({\n type: value.type === \"private\" ? \"pkcs8\" : \"spki\",\n format: \"pem\"\n })\n };\n }, serializeDhValue2 = function(value) {\n if (value === null || typeof value === \"string\" || typeof value === \"number\" || typeof value === \"boolean\") {\n return value;\n }\n if (Buffer.isBuffer(value)) {\n return {\n __type: \"buffer\",\n value: Buffer.from(value).toString(\"base64\")\n };\n }\n if (value instanceof ArrayBuffer) {\n return {\n __type: \"buffer\",\n value: Buffer.from(new Uint8Array(value)).toString(\"base64\")\n };\n }\n if (ArrayBuffer.isView(value)) {\n return {\n __type: \"buffer\",\n value: Buffer.from(value.buffer, value.byteOffset, value.byteLength).toString(\"base64\")\n };\n }\n if (typeof value === \"bigint\") {\n return {\n __type: \"bigint\",\n value: value.toString()\n };\n }\n if (value && typeof value === \"object\" && (value.type === \"public\" || value.type === \"private\" || value.type === \"secret\") && typeof value.export === \"function\") {\n return {\n __type: \"keyObject\",\n value: serializeDhKeyObject2(value)\n };\n }\n if (Array.isArray(value)) {\n return value.map(serializeDhValue2);\n }\n if (value && typeof value === \"object\") {\n var output = {};\n var keys = Object.keys(value);\n for (var i = 0; i < keys.length; i++) {\n if (value[keys[i]] !== void 0) {\n output[keys[i]] = serializeDhValue2(value[keys[i]]);\n }\n }\n return output;\n }\n return String(value);\n }, restoreDhValue2 = function(value) {\n if (!value || typeof value !== \"object\") {\n return value;\n }\n if (value.__type === \"buffer\") {\n return Buffer.from(value.value, \"base64\");\n }\n if (value.__type === \"bigint\") {\n return BigInt(value.value);\n }\n if (Array.isArray(value)) {\n return value.map(restoreDhValue2);\n }\n var output = {};\n var keys = Object.keys(value);\n for (var i = 0; i < keys.length; i++) {\n output[keys[i]] = restoreDhValue2(value[keys[i]]);\n }\n return output;\n }, createDhSession2 = function(type, name3, argsLike) {\n var args = [];\n for (var i = 0; i < argsLike.length; i++) {\n args.push(serializeDhValue2(argsLike[i]));\n }\n return _cryptoDiffieHellmanSessionCreate.applySync(void 0, [\n JSON.stringify({\n type,\n name: name3,\n args\n })\n ]);\n }, callDhSession2 = function(sessionId, method, argsLike) {\n var args = [];\n for (var i = 0; i < argsLike.length; i++) {\n args.push(serializeDhValue2(argsLike[i]));\n }\n var response = JSON.parse(_cryptoDiffieHellmanSessionCall.applySync(void 0, [\n sessionId,\n JSON.stringify({\n method,\n args\n })\n ]));\n if (response && response.hasResult === false) {\n return void 0;\n }\n return restoreDhValue2(response && response.result);\n }, SandboxDiffieHellman2 = function(sessionId) {\n this._sessionId = sessionId;\n }, SandboxECDH2 = function(sessionId) {\n SandboxDiffieHellman2.call(this, sessionId);\n };\n var serializeDhKeyObject = serializeDhKeyObject2, serializeDhValue = serializeDhValue2, restoreDhValue = restoreDhValue2, createDhSession = createDhSession2, callDhSession = callDhSession2, SandboxDiffieHellman = SandboxDiffieHellman2, SandboxECDH = SandboxECDH2;\n Object.defineProperty(SandboxDiffieHellman2.prototype, \"verifyError\", {\n get: function getVerifyError() {\n return callDhSession2(this._sessionId, \"verifyError\", []);\n }\n });\n SandboxDiffieHellman2.prototype.generateKeys = function generateKeys(encoding) {\n if (arguments.length === 0) return callDhSession2(this._sessionId, \"generateKeys\", []);\n return callDhSession2(this._sessionId, \"generateKeys\", [encoding]);\n };\n SandboxDiffieHellman2.prototype.computeSecret = function computeSecret(key, inputEncoding, outputEncoding) {\n return callDhSession2(this._sessionId, \"computeSecret\", Array.prototype.slice.call(arguments));\n };\n SandboxDiffieHellman2.prototype.getPrime = function getPrime(encoding) {\n if (arguments.length === 0) return callDhSession2(this._sessionId, \"getPrime\", []);\n return callDhSession2(this._sessionId, \"getPrime\", [encoding]);\n };\n SandboxDiffieHellman2.prototype.getGenerator = function getGenerator(encoding) {\n if (arguments.length === 0) return callDhSession2(this._sessionId, \"getGenerator\", []);\n return callDhSession2(this._sessionId, \"getGenerator\", [encoding]);\n };\n SandboxDiffieHellman2.prototype.getPublicKey = function getPublicKey(encoding) {\n if (arguments.length === 0) return callDhSession2(this._sessionId, \"getPublicKey\", []);\n return callDhSession2(this._sessionId, \"getPublicKey\", [encoding]);\n };\n SandboxDiffieHellman2.prototype.getPrivateKey = function getPrivateKey(encoding) {\n if (arguments.length === 0) return callDhSession2(this._sessionId, \"getPrivateKey\", []);\n return callDhSession2(this._sessionId, \"getPrivateKey\", [encoding]);\n };\n SandboxDiffieHellman2.prototype.setPublicKey = function setPublicKey(key, encoding) {\n return callDhSession2(this._sessionId, \"setPublicKey\", Array.prototype.slice.call(arguments));\n };\n SandboxDiffieHellman2.prototype.setPrivateKey = function setPrivateKey(key, encoding) {\n return callDhSession2(this._sessionId, \"setPrivateKey\", Array.prototype.slice.call(arguments));\n };\n SandboxECDH2.prototype = Object.create(SandboxDiffieHellman2.prototype);\n SandboxECDH2.prototype.constructor = SandboxECDH2;\n SandboxECDH2.prototype.getPublicKey = function getPublicKey(encoding, format) {\n return callDhSession2(this._sessionId, \"getPublicKey\", Array.prototype.slice.call(arguments));\n };\n result2.createDiffieHellman = function createDiffieHellman() {\n return new SandboxDiffieHellman2(createDhSession2(\"dh\", void 0, arguments));\n };\n result2.getDiffieHellman = function getDiffieHellman(name3) {\n return new SandboxDiffieHellman2(createDhSession2(\"group\", name3, []));\n };\n result2.createDiffieHellmanGroup = result2.getDiffieHellman;\n result2.createECDH = function createECDH(curve) {\n return new SandboxECDH2(createDhSession2(\"ecdh\", curve, []));\n };\n if (typeof _cryptoDiffieHellman !== \"undefined\") {\n result2.diffieHellman = function diffieHellman(options) {\n var resultJson = _cryptoDiffieHellman.applySync(void 0, [\n JSON.stringify(serializeDhValue2(options))\n ]);\n return restoreDhValue2(JSON.parse(resultJson));\n };\n }\n result2.DiffieHellman = SandboxDiffieHellman2;\n result2.DiffieHellmanGroup = SandboxDiffieHellman2;\n result2.ECDH = SandboxECDH2;\n }\n if (typeof _cryptoGenerateKeyPairSync !== \"undefined\") {\n let restoreBridgeValue2 = function(value) {\n if (!value || typeof value !== \"object\") {\n return value;\n }\n if (value.__type === \"buffer\") {\n return Buffer.from(value.value, \"base64\");\n }\n if (value.__type === \"bigint\") {\n return BigInt(value.value);\n }\n if (Array.isArray(value)) {\n return value.map(restoreBridgeValue2);\n }\n var output = {};\n var keys = Object.keys(value);\n for (var i = 0; i < keys.length; i++) {\n output[keys[i]] = restoreBridgeValue2(value[keys[i]]);\n }\n return output;\n }, cloneObject2 = function(value) {\n if (!value || typeof value !== \"object\") {\n return value;\n }\n if (Array.isArray(value)) {\n return value.map(cloneObject2);\n }\n var output = {};\n var keys = Object.keys(value);\n for (var i = 0; i < keys.length; i++) {\n output[keys[i]] = cloneObject2(value[keys[i]]);\n }\n return output;\n }, createDomException2 = function(message, name3) {\n if (typeof DOMException === \"function\") {\n return new DOMException(message, name3);\n }\n var error = new Error(message);\n error.name = name3;\n return error;\n }, toRawBuffer2 = function(data, encoding) {\n if (Buffer.isBuffer(data)) {\n return Buffer.from(data);\n }\n if (data instanceof ArrayBuffer) {\n return Buffer.from(new Uint8Array(data));\n }\n if (ArrayBuffer.isView(data)) {\n return Buffer.from(data.buffer, data.byteOffset, data.byteLength);\n }\n if (typeof data === \"string\") {\n return Buffer.from(data, encoding || \"utf8\");\n }\n return Buffer.from(data);\n }, serializeBridgeValue2 = function(value) {\n if (value === null) {\n return null;\n }\n if (typeof value === \"string\" || typeof value === \"number\" || typeof value === \"boolean\") {\n return value;\n }\n if (typeof value === \"bigint\") {\n return {\n __type: \"bigint\",\n value: value.toString()\n };\n }\n if (Buffer.isBuffer(value)) {\n return {\n __type: \"buffer\",\n value: Buffer.from(value).toString(\"base64\")\n };\n }\n if (value instanceof ArrayBuffer) {\n return {\n __type: \"buffer\",\n value: Buffer.from(new Uint8Array(value)).toString(\"base64\")\n };\n }\n if (ArrayBuffer.isView(value)) {\n return {\n __type: \"buffer\",\n value: Buffer.from(value.buffer, value.byteOffset, value.byteLength).toString(\"base64\")\n };\n }\n if (Array.isArray(value)) {\n return value.map(serializeBridgeValue2);\n }\n if (value && typeof value === \"object\" && (value.type === \"public\" || value.type === \"private\" || value.type === \"secret\") && typeof value.export === \"function\") {\n if (value.type === \"secret\") {\n return {\n __type: \"keyObject\",\n value: {\n type: \"secret\",\n raw: Buffer.from(value.export()).toString(\"base64\")\n }\n };\n }\n return {\n __type: \"keyObject\",\n value: {\n type: value.type,\n pem: value._pem\n }\n };\n }\n if (value && typeof value === \"object\") {\n var output = {};\n var keys = Object.keys(value);\n for (var i = 0; i < keys.length; i++) {\n var entry = value[keys[i]];\n if (entry !== void 0) {\n output[keys[i]] = serializeBridgeValue2(entry);\n }\n }\n return output;\n }\n return String(value);\n }, normalizeCryptoBridgeError2 = function(error) {\n if (!error || typeof error !== \"object\") {\n return error;\n }\n if (error.code === void 0 && error.message === \"error:07880109:common libcrypto routines::interrupted or cancelled\") {\n error.code = \"ERR_OSSL_CRYPTO_INTERRUPTED_OR_CANCELLED\";\n }\n return error;\n }, deserializeGeneratedKeyValue2 = function(value) {\n if (!value || typeof value !== \"object\") {\n return value;\n }\n if (value.kind === \"string\") {\n return value.value;\n }\n if (value.kind === \"buffer\") {\n return Buffer.from(value.value, \"base64\");\n }\n if (value.kind === \"keyObject\") {\n return createGeneratedKeyObject2(value.value);\n }\n if (value.kind === \"object\") {\n return value.value;\n }\n return value;\n }, serializeBridgeOptions2 = function(options) {\n return JSON.stringify({\n hasOptions: options !== void 0,\n options: options === void 0 ? null : serializeBridgeValue2(options)\n });\n }, createInvalidArgTypeError2 = function(name3, expected, value) {\n var received;\n if (value == null) {\n received = \" Received \" + value;\n } else if (typeof value === \"function\") {\n received = \" Received function \" + (value.name || \"anonymous\");\n } else if (typeof value === \"object\") {\n if (value.constructor && value.constructor.name) {\n received = \" Received an instance of \" + value.constructor.name;\n } else {\n received = \" Received [object Object]\";\n }\n } else {\n var inspected = typeof value === \"string\" ? \"'\" + value + \"'\" : String(value);\n if (inspected.length > 28) {\n inspected = inspected.slice(0, 25) + \"...\";\n }\n received = \" Received type \" + typeof value + \" (\" + inspected + \")\";\n }\n var error = new TypeError('The \"' + name3 + '\" argument must be ' + expected + \".\" + received);\n error.code = \"ERR_INVALID_ARG_TYPE\";\n return error;\n }, scheduleCryptoCallback2 = function(callback, args) {\n var invoke = function() {\n callback.apply(void 0, args);\n };\n if (typeof process !== \"undefined\" && process && typeof process.nextTick === \"function\") {\n process.nextTick(invoke);\n return;\n }\n if (typeof queueMicrotask === \"function\") {\n queueMicrotask(invoke);\n return;\n }\n Promise.resolve().then(invoke);\n }, shouldThrowCryptoValidationError2 = function(error) {\n if (!error || typeof error !== \"object\") {\n return false;\n }\n if (error.name === \"TypeError\" || error.name === \"RangeError\") {\n return true;\n }\n var code = error.code;\n return code === \"ERR_MISSING_OPTION\" || code === \"ERR_CRYPTO_UNKNOWN_DH_GROUP\" || code === \"ERR_OUT_OF_RANGE\" || typeof code === \"string\" && code.indexOf(\"ERR_INVALID_ARG_\") === 0;\n }, ensureCryptoCallback2 = function(callback, syncValidator) {\n if (typeof callback === \"function\") {\n return callback;\n }\n if (typeof syncValidator === \"function\") {\n syncValidator();\n }\n throw createInvalidArgTypeError2(\"callback\", \"of type function\", callback);\n }, SandboxKeyObject2 = function(type, handle) {\n this.type = type;\n this._pem = handle && handle.pem !== void 0 ? handle.pem : void 0;\n this._raw = handle && handle.raw !== void 0 ? handle.raw : void 0;\n this._jwk = handle && handle.jwk !== void 0 ? cloneObject2(handle.jwk) : void 0;\n this.asymmetricKeyType = handle && handle.asymmetricKeyType !== void 0 ? handle.asymmetricKeyType : void 0;\n this.asymmetricKeyDetails = handle && handle.asymmetricKeyDetails !== void 0 ? restoreBridgeValue2(handle.asymmetricKeyDetails) : void 0;\n this.symmetricKeySize = type === \"secret\" && handle && handle.raw !== void 0 ? Buffer.from(handle.raw, \"base64\").byteLength : void 0;\n }, normalizeNamedCurve2 = function(namedCurve) {\n if (!namedCurve) {\n return namedCurve;\n }\n var upper = String(namedCurve).toUpperCase();\n if (upper === \"PRIME256V1\" || upper === \"SECP256R1\") return \"P-256\";\n if (upper === \"SECP384R1\") return \"P-384\";\n if (upper === \"SECP521R1\") return \"P-521\";\n return namedCurve;\n }, normalizeAlgorithmInput2 = function(algorithm) {\n if (typeof algorithm === \"string\") {\n return { name: algorithm };\n }\n return Object.assign({}, algorithm);\n }, createCompatibleCryptoKey2 = function(keyData) {\n var key;\n if (globalThis.CryptoKey && globalThis.CryptoKey.prototype && globalThis.CryptoKey.prototype !== SandboxCryptoKey.prototype) {\n key = Object.create(globalThis.CryptoKey.prototype);\n key.type = keyData.type;\n key.extractable = keyData.extractable;\n key.algorithm = keyData.algorithm;\n key.usages = keyData.usages;\n key._keyData = keyData;\n key._pem = keyData._pem;\n key._jwk = keyData._jwk;\n key._raw = keyData._raw;\n key._sourceKeyObjectData = keyData._sourceKeyObjectData;\n return key;\n }\n return new SandboxCryptoKey(keyData);\n }, buildCryptoKeyFromKeyObject2 = function(keyObject, algorithm, extractable, usages) {\n var algo = normalizeAlgorithmInput2(algorithm);\n var name3 = algo.name;\n if (keyObject.type === \"secret\") {\n var secretBytes = Buffer.from(keyObject._raw || \"\", \"base64\");\n if (name3 === \"PBKDF2\") {\n if (extractable) {\n throw new SyntaxError(\"PBKDF2 keys are not extractable\");\n }\n if (usages.some(function(usage) {\n return usage !== \"deriveBits\" && usage !== \"deriveKey\";\n })) {\n throw new SyntaxError(\"Unsupported key usage for a PBKDF2 key\");\n }\n return createCompatibleCryptoKey2({\n type: \"secret\",\n extractable,\n algorithm: { name: name3 },\n usages: Array.from(usages),\n _raw: keyObject._raw,\n _sourceKeyObjectData: {\n type: \"secret\",\n raw: keyObject._raw\n }\n });\n }\n if (name3 === \"HMAC\") {\n if (!secretBytes.byteLength || algo.length === 0) {\n throw createDomException2(\"Zero-length key is not supported\", \"DataError\");\n }\n if (!usages.length) {\n throw new SyntaxError(\"Usages cannot be empty when importing a secret key.\");\n }\n return createCompatibleCryptoKey2({\n type: \"secret\",\n extractable,\n algorithm: {\n name: name3,\n hash: typeof algo.hash === \"string\" ? { name: algo.hash } : cloneObject2(algo.hash),\n length: secretBytes.byteLength * 8\n },\n usages: Array.from(usages),\n _raw: keyObject._raw,\n _sourceKeyObjectData: {\n type: \"secret\",\n raw: keyObject._raw\n }\n });\n }\n return createCompatibleCryptoKey2({\n type: \"secret\",\n extractable,\n algorithm: {\n name: name3,\n length: secretBytes.byteLength * 8\n },\n usages: Array.from(usages),\n _raw: keyObject._raw,\n _sourceKeyObjectData: {\n type: \"secret\",\n raw: keyObject._raw\n }\n });\n }\n var keyType = String(keyObject.asymmetricKeyType || \"\").toLowerCase();\n var algorithmName = String(name3 || \"\");\n if ((keyType === \"ed25519\" || keyType === \"ed448\" || keyType === \"x25519\" || keyType === \"x448\") && keyType !== algorithmName.toLowerCase()) {\n throw createDomException2(\"Invalid key type\", \"DataError\");\n }\n if (algorithmName === \"ECDH\") {\n if (keyObject.type === \"private\" && !usages.length) {\n throw new SyntaxError(\"Usages cannot be empty when importing a private key.\");\n }\n var actualCurve = normalizeNamedCurve2(\n keyObject.asymmetricKeyDetails && keyObject.asymmetricKeyDetails.namedCurve\n );\n if (algo.namedCurve && actualCurve && normalizeNamedCurve2(algo.namedCurve) !== actualCurve) {\n throw createDomException2(\"Named curve mismatch\", \"DataError\");\n }\n }\n var normalizedAlgo = cloneObject2(algo);\n if (typeof normalizedAlgo.hash === \"string\") {\n normalizedAlgo.hash = { name: normalizedAlgo.hash };\n }\n return createCompatibleCryptoKey2({\n type: keyObject.type,\n extractable,\n algorithm: normalizedAlgo,\n usages: Array.from(usages),\n _pem: keyObject._pem,\n _jwk: cloneObject2(keyObject._jwk),\n _sourceKeyObjectData: {\n type: keyObject.type,\n pem: keyObject._pem,\n jwk: cloneObject2(keyObject._jwk),\n asymmetricKeyType: keyObject.asymmetricKeyType,\n asymmetricKeyDetails: cloneObject2(keyObject.asymmetricKeyDetails)\n }\n });\n }, createAsymmetricKeyObject2 = function(type, key) {\n if (typeof key === \"string\") {\n if (key.indexOf(\"-----BEGIN\") === -1) {\n throw new TypeError(\"error:0900006e:PEM routines:OPENSSL_internal:NO_START_LINE\");\n }\n return new SandboxKeyObject2(type, { pem: key });\n }\n if (key && typeof key === \"object\" && key._pem) {\n return new SandboxKeyObject2(type, {\n pem: key._pem,\n jwk: key._jwk,\n asymmetricKeyType: key.asymmetricKeyType,\n asymmetricKeyDetails: key.asymmetricKeyDetails\n });\n }\n if (key && typeof key === \"object\" && key.key) {\n var keyData = typeof key.key === \"string\" ? key.key : key.key.toString(\"utf8\");\n return new SandboxKeyObject2(type, { pem: keyData });\n }\n if (Buffer.isBuffer(key)) {\n var keyStr = key.toString(\"utf8\");\n if (keyStr.indexOf(\"-----BEGIN\") === -1) {\n throw new TypeError(\"error:0900006e:PEM routines:OPENSSL_internal:NO_START_LINE\");\n }\n return new SandboxKeyObject2(type, { pem: keyStr });\n }\n return new SandboxKeyObject2(type, { pem: String(key) });\n }, createGeneratedKeyObject2 = function(value) {\n return new SandboxKeyObject2(value.type, {\n pem: value.pem,\n raw: value.raw,\n jwk: value.jwk,\n asymmetricKeyType: value.asymmetricKeyType,\n asymmetricKeyDetails: value.asymmetricKeyDetails\n });\n };\n var restoreBridgeValue = restoreBridgeValue2, cloneObject = cloneObject2, createDomException = createDomException2, toRawBuffer = toRawBuffer2, serializeBridgeValue = serializeBridgeValue2, normalizeCryptoBridgeError = normalizeCryptoBridgeError2, deserializeGeneratedKeyValue = deserializeGeneratedKeyValue2, serializeBridgeOptions = serializeBridgeOptions2, createInvalidArgTypeError = createInvalidArgTypeError2, scheduleCryptoCallback = scheduleCryptoCallback2, shouldThrowCryptoValidationError = shouldThrowCryptoValidationError2, ensureCryptoCallback = ensureCryptoCallback2, SandboxKeyObject = SandboxKeyObject2, normalizeNamedCurve = normalizeNamedCurve2, normalizeAlgorithmInput = normalizeAlgorithmInput2, createCompatibleCryptoKey = createCompatibleCryptoKey2, buildCryptoKeyFromKeyObject = buildCryptoKeyFromKeyObject2, createAsymmetricKeyObject = createAsymmetricKeyObject2, createGeneratedKeyObject = createGeneratedKeyObject2;\n Object.defineProperty(SandboxKeyObject2.prototype, Symbol.toStringTag, {\n value: \"KeyObject\",\n configurable: true\n });\n SandboxKeyObject2.prototype.export = function exportKey(options) {\n if (this.type === \"secret\") {\n return Buffer.from(this._raw || \"\", \"base64\");\n }\n if (!options || typeof options !== \"object\") {\n throw new TypeError('The \"options\" argument must be of type object.');\n }\n if (options.format === \"jwk\") {\n return cloneObject2(this._jwk);\n }\n if (options.format === \"der\") {\n var lines = String(this._pem || \"\").split(\"\\n\").filter(function(l) {\n return l && l.indexOf(\"-----\") !== 0;\n });\n return Buffer.from(lines.join(\"\"), \"base64\");\n }\n return this._pem;\n };\n SandboxKeyObject2.prototype.toString = function() {\n return \"[object KeyObject]\";\n };\n SandboxKeyObject2.prototype.equals = function equals(other) {\n if (!(other instanceof SandboxKeyObject2)) {\n return false;\n }\n if (this.type !== other.type) {\n return false;\n }\n if (this.type === \"secret\") {\n return (this._raw || \"\") === (other._raw || \"\");\n }\n return (this._pem || \"\") === (other._pem || \"\") && this.asymmetricKeyType === other.asymmetricKeyType;\n };\n SandboxKeyObject2.prototype.toCryptoKey = function toCryptoKey(algorithm, extractable, usages) {\n return buildCryptoKeyFromKeyObject2(this, algorithm, extractable, Array.from(usages || []));\n };\n result2.generateKeyPairSync = function generateKeyPairSync(type, options) {\n var resultJson = _cryptoGenerateKeyPairSync.applySync(void 0, [\n type,\n serializeBridgeOptions2(options)\n ]);\n var parsed = JSON.parse(resultJson);\n if (parsed.publicKey && parsed.publicKey.kind) {\n return {\n publicKey: deserializeGeneratedKeyValue2(parsed.publicKey),\n privateKey: deserializeGeneratedKeyValue2(parsed.privateKey)\n };\n }\n return {\n publicKey: createGeneratedKeyObject2(parsed.publicKey),\n privateKey: createGeneratedKeyObject2(parsed.privateKey)\n };\n };\n result2.generateKeyPair = function generateKeyPair(type, options, callback) {\n if (typeof options === \"function\") {\n callback = options;\n options = void 0;\n }\n callback = ensureCryptoCallback2(callback, function() {\n result2.generateKeyPairSync(type, options);\n });\n try {\n var pair = result2.generateKeyPairSync(type, options);\n scheduleCryptoCallback2(callback, [null, pair.publicKey, pair.privateKey]);\n } catch (e) {\n if (shouldThrowCryptoValidationError2(e)) {\n throw e;\n }\n scheduleCryptoCallback2(callback, [e]);\n }\n };\n if (typeof _cryptoGenerateKeySync !== \"undefined\") {\n result2.generateKeySync = function generateKeySync(type, options) {\n var resultJson;\n try {\n resultJson = _cryptoGenerateKeySync.applySync(void 0, [\n type,\n serializeBridgeOptions2(options)\n ]);\n } catch (error) {\n throw normalizeCryptoBridgeError2(error);\n }\n return createGeneratedKeyObject2(JSON.parse(resultJson));\n };\n result2.generateKey = function generateKey(type, options, callback) {\n callback = ensureCryptoCallback2(callback, function() {\n result2.generateKeySync(type, options);\n });\n try {\n var key = result2.generateKeySync(type, options);\n scheduleCryptoCallback2(callback, [null, key]);\n } catch (e) {\n if (shouldThrowCryptoValidationError2(e)) {\n throw e;\n }\n scheduleCryptoCallback2(callback, [e]);\n }\n };\n }\n if (typeof _cryptoGeneratePrimeSync !== \"undefined\") {\n result2.generatePrimeSync = function generatePrimeSync(size, options) {\n var resultJson;\n try {\n resultJson = _cryptoGeneratePrimeSync.applySync(void 0, [\n size,\n serializeBridgeOptions2(options)\n ]);\n } catch (error) {\n throw normalizeCryptoBridgeError2(error);\n }\n return restoreBridgeValue2(JSON.parse(resultJson));\n };\n result2.generatePrime = function generatePrime(size, options, callback) {\n if (typeof options === \"function\") {\n callback = options;\n options = void 0;\n }\n callback = ensureCryptoCallback2(callback, function() {\n result2.generatePrimeSync(size, options);\n });\n try {\n var prime = result2.generatePrimeSync(size, options);\n scheduleCryptoCallback2(callback, [null, prime]);\n } catch (e) {\n if (shouldThrowCryptoValidationError2(e)) {\n throw e;\n }\n scheduleCryptoCallback2(callback, [e]);\n }\n };\n }\n result2.createPublicKey = function createPublicKey(key) {\n if (typeof _cryptoCreateKeyObject !== \"undefined\") {\n var resultJson;\n try {\n resultJson = _cryptoCreateKeyObject.applySync(void 0, [\n \"createPublicKey\",\n JSON.stringify(serializeBridgeValue2(key))\n ]);\n } catch (error) {\n throw normalizeCryptoBridgeError2(error);\n }\n return createGeneratedKeyObject2(JSON.parse(resultJson));\n }\n return createAsymmetricKeyObject2(\"public\", key);\n };\n result2.createPrivateKey = function createPrivateKey(key) {\n if (typeof _cryptoCreateKeyObject !== \"undefined\") {\n var resultJson;\n try {\n resultJson = _cryptoCreateKeyObject.applySync(void 0, [\n \"createPrivateKey\",\n JSON.stringify(serializeBridgeValue2(key))\n ]);\n } catch (error) {\n throw normalizeCryptoBridgeError2(error);\n }\n return createGeneratedKeyObject2(JSON.parse(resultJson));\n }\n return createAsymmetricKeyObject2(\"private\", key);\n };\n result2.createSecretKey = function createSecretKey(key, encoding) {\n return new SandboxKeyObject2(\"secret\", {\n raw: toRawBuffer2(key, encoding).toString(\"base64\")\n });\n };\n SandboxKeyObject2.from = function from(key) {\n if (!key || typeof key !== \"object\" || key[Symbol.toStringTag] !== \"CryptoKey\") {\n throw new TypeError('The \"key\" argument must be an instance of CryptoKey.');\n }\n if (key._sourceKeyObjectData && key._sourceKeyObjectData.type === \"secret\") {\n return new SandboxKeyObject2(\"secret\", {\n raw: key._sourceKeyObjectData.raw\n });\n }\n return new SandboxKeyObject2(key.type, {\n pem: key._pem,\n jwk: key._jwk,\n asymmetricKeyType: key._sourceKeyObjectData && key._sourceKeyObjectData.asymmetricKeyType,\n asymmetricKeyDetails: key._sourceKeyObjectData && key._sourceKeyObjectData.asymmetricKeyDetails\n });\n };\n result2.KeyObject = SandboxKeyObject2;\n }\n if (typeof _cryptoSubtle !== \"undefined\") {\n let SandboxCryptoKey2 = function(keyData) {\n this.type = keyData.type;\n this.extractable = keyData.extractable;\n this.algorithm = keyData.algorithm;\n this.usages = keyData.usages;\n this._keyData = keyData;\n this._pem = keyData._pem;\n this._jwk = keyData._jwk;\n this._raw = keyData._raw;\n this._sourceKeyObjectData = keyData._sourceKeyObjectData;\n }, toBase642 = function(data) {\n if (typeof data === \"string\") return Buffer.from(data).toString(\"base64\");\n if (data instanceof ArrayBuffer) return Buffer.from(new Uint8Array(data)).toString(\"base64\");\n if (ArrayBuffer.isView(data)) return Buffer.from(new Uint8Array(data.buffer, data.byteOffset, data.byteLength)).toString(\"base64\");\n return Buffer.from(data).toString(\"base64\");\n }, subtleCall2 = function(reqObj) {\n return _cryptoSubtle.applySync(void 0, [JSON.stringify(reqObj)]);\n }, normalizeAlgo2 = function(algorithm) {\n if (typeof algorithm === \"string\") return { name: algorithm };\n return algorithm;\n };\n var SandboxCryptoKey = SandboxCryptoKey2, toBase64 = toBase642, subtleCall = subtleCall2, normalizeAlgo = normalizeAlgo2;\n Object.defineProperty(SandboxCryptoKey2.prototype, Symbol.toStringTag, {\n value: \"CryptoKey\",\n configurable: true\n });\n Object.defineProperty(SandboxCryptoKey2, Symbol.hasInstance, {\n value: function(candidate) {\n return !!(candidate && typeof candidate === \"object\" && (candidate._keyData || candidate[Symbol.toStringTag] === \"CryptoKey\"));\n },\n configurable: true\n });\n if (globalThis.CryptoKey && globalThis.CryptoKey.prototype && globalThis.CryptoKey.prototype !== SandboxCryptoKey2.prototype) {\n Object.setPrototypeOf(SandboxCryptoKey2.prototype, globalThis.CryptoKey.prototype);\n }\n if (typeof globalThis.CryptoKey === \"undefined\") {\n __requireExposeCustomGlobal(\"CryptoKey\", SandboxCryptoKey2);\n } else if (globalThis.CryptoKey !== SandboxCryptoKey2) {\n globalThis.CryptoKey = SandboxCryptoKey2;\n }\n var SandboxSubtle = {};\n SandboxSubtle.digest = function digest(algorithm, data) {\n return Promise.resolve().then(function() {\n var algo = normalizeAlgo2(algorithm);\n var result22 = JSON.parse(subtleCall2({\n op: \"digest\",\n algorithm: algo.name,\n data: toBase642(data)\n }));\n var buf = Buffer.from(result22.data, \"base64\");\n return buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength);\n });\n };\n SandboxSubtle.generateKey = function generateKey(algorithm, extractable, keyUsages) {\n return Promise.resolve().then(function() {\n var algo = normalizeAlgo2(algorithm);\n var reqAlgo = Object.assign({}, algo);\n if (reqAlgo.hash) reqAlgo.hash = normalizeAlgo2(reqAlgo.hash);\n if (reqAlgo.publicExponent) {\n reqAlgo.publicExponent = Buffer.from(new Uint8Array(reqAlgo.publicExponent.buffer || reqAlgo.publicExponent)).toString(\"base64\");\n }\n var result22 = JSON.parse(subtleCall2({\n op: \"generateKey\",\n algorithm: reqAlgo,\n extractable,\n usages: Array.from(keyUsages)\n }));\n if (result22.publicKey && result22.privateKey) {\n return {\n publicKey: new SandboxCryptoKey2(result22.publicKey),\n privateKey: new SandboxCryptoKey2(result22.privateKey)\n };\n }\n return new SandboxCryptoKey2(result22.key);\n });\n };\n SandboxSubtle.importKey = function importKey(format, keyData, algorithm, extractable, keyUsages) {\n return Promise.resolve().then(function() {\n var algo = normalizeAlgo2(algorithm);\n var reqAlgo = Object.assign({}, algo);\n if (reqAlgo.hash) reqAlgo.hash = normalizeAlgo2(reqAlgo.hash);\n var serializedKeyData;\n if (format === \"jwk\") {\n serializedKeyData = keyData;\n } else if (format === \"raw\") {\n serializedKeyData = toBase642(keyData);\n } else {\n serializedKeyData = toBase642(keyData);\n }\n var result22 = JSON.parse(subtleCall2({\n op: \"importKey\",\n format,\n keyData: serializedKeyData,\n algorithm: reqAlgo,\n extractable,\n usages: Array.from(keyUsages)\n }));\n return new SandboxCryptoKey2(result22.key);\n });\n };\n SandboxSubtle.exportKey = function exportKey(format, key) {\n return Promise.resolve().then(function() {\n var result22 = JSON.parse(subtleCall2({\n op: \"exportKey\",\n format,\n key: key._keyData\n }));\n if (format === \"jwk\") return result22.jwk;\n var buf = Buffer.from(result22.data, \"base64\");\n return buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength);\n });\n };\n SandboxSubtle.encrypt = function encrypt(algorithm, key, data) {\n return Promise.resolve().then(function() {\n var algo = normalizeAlgo2(algorithm);\n var reqAlgo = Object.assign({}, algo);\n if (reqAlgo.iv) reqAlgo.iv = toBase642(reqAlgo.iv);\n if (reqAlgo.additionalData) reqAlgo.additionalData = toBase642(reqAlgo.additionalData);\n var result22 = JSON.parse(subtleCall2({\n op: \"encrypt\",\n algorithm: reqAlgo,\n key: key._keyData,\n data: toBase642(data)\n }));\n var buf = Buffer.from(result22.data, \"base64\");\n return buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength);\n });\n };\n SandboxSubtle.decrypt = function decrypt(algorithm, key, data) {\n return Promise.resolve().then(function() {\n var algo = normalizeAlgo2(algorithm);\n var reqAlgo = Object.assign({}, algo);\n if (reqAlgo.iv) reqAlgo.iv = toBase642(reqAlgo.iv);\n if (reqAlgo.additionalData) reqAlgo.additionalData = toBase642(reqAlgo.additionalData);\n var result22 = JSON.parse(subtleCall2({\n op: \"decrypt\",\n algorithm: reqAlgo,\n key: key._keyData,\n data: toBase642(data)\n }));\n var buf = Buffer.from(result22.data, \"base64\");\n return buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength);\n });\n };\n SandboxSubtle.sign = function sign(algorithm, key, data) {\n return Promise.resolve().then(function() {\n var result22 = JSON.parse(subtleCall2({\n op: \"sign\",\n algorithm: normalizeAlgo2(algorithm),\n key: key._keyData,\n data: toBase642(data)\n }));\n var buf = Buffer.from(result22.data, \"base64\");\n return buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength);\n });\n };\n SandboxSubtle.verify = function verify(algorithm, key, signature, data) {\n return Promise.resolve().then(function() {\n var result22 = JSON.parse(subtleCall2({\n op: \"verify\",\n algorithm: normalizeAlgo2(algorithm),\n key: key._keyData,\n signature: toBase642(signature),\n data: toBase642(data)\n }));\n return result22.result;\n });\n };\n SandboxSubtle.deriveBits = function deriveBits(algorithm, baseKey, length) {\n return Promise.resolve().then(function() {\n var algo = normalizeAlgo2(algorithm);\n var reqAlgo = Object.assign({}, algo);\n if (reqAlgo.salt) reqAlgo.salt = toBase642(reqAlgo.salt);\n if (reqAlgo.info) reqAlgo.info = toBase642(reqAlgo.info);\n var result22 = JSON.parse(subtleCall2({\n op: \"deriveBits\",\n algorithm: reqAlgo,\n baseKey: baseKey._keyData,\n length\n }));\n return Buffer.from(result22.data, \"base64\").buffer;\n });\n };\n SandboxSubtle.deriveKey = function deriveKey(algorithm, baseKey, derivedKeyAlgorithm, extractable, keyUsages) {\n return Promise.resolve().then(function() {\n var algo = normalizeAlgo2(algorithm);\n var reqAlgo = Object.assign({}, algo);\n if (reqAlgo.salt) reqAlgo.salt = toBase642(reqAlgo.salt);\n if (reqAlgo.info) reqAlgo.info = toBase642(reqAlgo.info);\n var result22 = JSON.parse(subtleCall2({\n op: \"deriveKey\",\n algorithm: reqAlgo,\n baseKey: baseKey._keyData,\n derivedKeyAlgorithm: normalizeAlgo2(derivedKeyAlgorithm),\n extractable,\n usages: keyUsages\n }));\n return new SandboxCryptoKey2(result22.key);\n });\n };\n if (globalThis.crypto && globalThis.crypto.subtle && typeof globalThis.crypto.subtle.importKey === \"function\") {\n result2.subtle = globalThis.crypto.subtle;\n result2.webcrypto = globalThis.crypto;\n } else {\n result2.subtle = SandboxSubtle;\n result2.webcrypto = { subtle: SandboxSubtle, getRandomValues: result2.randomFillSync };\n }\n }\n if (typeof result2.getCurves !== \"function\") {\n result2.getCurves = function getCurves() {\n return [\n \"prime256v1\",\n \"secp256r1\",\n \"secp384r1\",\n \"secp521r1\",\n \"secp256k1\",\n \"secp224r1\",\n \"secp192k1\"\n ];\n };\n }\n if (typeof result2.getCiphers !== \"function\") {\n result2.getCiphers = function getCiphers() {\n return [\n \"aes-128-cbc\",\n \"aes-128-gcm\",\n \"aes-192-cbc\",\n \"aes-192-gcm\",\n \"aes-256-cbc\",\n \"aes-256-gcm\",\n \"aes-128-ctr\",\n \"aes-192-ctr\",\n \"aes-256-ctr\"\n ];\n };\n }\n if (typeof result2.getHashes !== \"function\") {\n result2.getHashes = function getHashes() {\n return [\"md5\", \"sha1\", \"sha256\", \"sha384\", \"sha512\"];\n };\n }\n if (typeof result2.timingSafeEqual !== \"function\") {\n result2.timingSafeEqual = function timingSafeEqual(a, b) {\n if (a.length !== b.length) {\n throw new RangeError(\"Input buffers must have the same byte length\");\n }\n var out = 0;\n for (var i = 0; i < a.length; i++) {\n out |= a[i] ^ b[i];\n }\n return out === 0;\n };\n }\n if (typeof result2.getFips !== \"function\") {\n result2.getFips = function getFips() {\n return 0;\n };\n }\n if (typeof result2.setFips !== \"function\") {\n result2.setFips = function setFips() {\n throw new Error(\"FIPS mode is not supported in sandbox\");\n };\n }\n return result2;\n }\n if (name2 === \"stream\") {\n if (typeof result2 === \"function\" && result2.prototype && typeof result2.Readable === \"function\") {\n var readableProto = result2.Readable.prototype;\n var streamProto = result2.prototype;\n if (readableProto && streamProto && !(readableProto instanceof result2)) {\n var currentParent = Object.getPrototypeOf(readableProto);\n Object.setPrototypeOf(streamProto, currentParent);\n Object.setPrototypeOf(readableProto, streamProto);\n }\n }\n return result2;\n }\n if (name2 === \"path\") {\n if (result2.win32 === null || result2.win32 === void 0) {\n result2.win32 = result2.posix || result2;\n }\n if (result2.posix === null || result2.posix === void 0) {\n result2.posix = result2;\n }\n const hasAbsoluteSegment = function(args) {\n return args.some(function(arg) {\n return typeof arg === \"string\" && arg.length > 0 && arg.charAt(0) === \"/\";\n });\n };\n const prependCwd = function(args) {\n if (hasAbsoluteSegment(args)) return;\n if (typeof process !== \"undefined\" && typeof process.cwd === \"function\") {\n const cwd = process.cwd();\n if (cwd && cwd.charAt(0) === \"/\") {\n args.unshift(cwd);\n }\n }\n };\n const originalResolve = result2.resolve;\n if (typeof originalResolve === \"function\" && !originalResolve._patchedForCwd) {\n const patchedResolve = function resolve2() {\n const args = Array.from(arguments);\n prependCwd(args);\n return originalResolve.apply(this, args);\n };\n patchedResolve._patchedForCwd = true;\n result2.resolve = patchedResolve;\n }\n if (result2.posix && typeof result2.posix.resolve === \"function\" && !result2.posix.resolve._patchedForCwd) {\n const originalPosixResolve = result2.posix.resolve;\n const patchedPosixResolve = function resolve2() {\n const args = Array.from(arguments);\n prependCwd(args);\n return originalPosixResolve.apply(this, args);\n };\n patchedPosixResolve._patchedForCwd = true;\n result2.posix.resolve = patchedPosixResolve;\n }\n }\n return result2;\n }\n var _deferredCoreModules = /* @__PURE__ */ new Set([\n \"readline\",\n \"perf_hooks\",\n \"async_hooks\",\n \"worker_threads\",\n \"diagnostics_channel\"\n ]);\n var _unsupportedCoreModules = /* @__PURE__ */ new Set([\n \"dgram\",\n \"cluster\",\n \"wasi\",\n \"inspector\",\n \"repl\",\n \"trace_events\",\n \"domain\"\n ]);\n function _unsupportedApiError(moduleName2, apiName) {\n return new Error(moduleName2 + \".\" + apiName + \" is not supported in sandbox\");\n }\n function _createDeferredModuleStub(moduleName2) {\n const methodCache = {};\n let stub = null;\n stub = new Proxy({}, {\n get(_target, prop) {\n if (prop === \"__esModule\") return false;\n if (prop === \"default\") return stub;\n if (prop === Symbol.toStringTag) return \"Module\";\n if (prop === \"then\") return void 0;\n if (typeof prop !== \"string\") return void 0;\n if (!methodCache[prop]) {\n methodCache[prop] = function deferredApiStub() {\n throw _unsupportedApiError(moduleName2, prop);\n };\n }\n return methodCache[prop];\n }\n });\n return stub;\n }\n var __internalModuleCache = _moduleCache;\n var __require = function require2(moduleName2) {\n return _requireFrom(moduleName2, _currentModule.dirname);\n };\n __requireExposeCustomGlobal(\"require\", __require);\n function _resolveFrom(moduleName2, fromDir2) {\n var resolved2;\n if (typeof _resolveModuleSync !== \"undefined\") {\n resolved2 = _resolveModuleSync.applySync(void 0, [moduleName2, fromDir2]);\n }\n if (resolved2 === null || resolved2 === void 0) {\n resolved2 = _resolveModule.applySyncPromise(void 0, [moduleName2, fromDir2, \"require\"]);\n }\n if (resolved2 === null) {\n const err = new Error(\"Cannot find module '\" + moduleName2 + \"'\");\n err.code = \"MODULE_NOT_FOUND\";\n throw err;\n }\n return resolved2;\n }\n globalThis.require.resolve = function resolve(moduleName2) {\n return _resolveFrom(moduleName2, _currentModule.dirname);\n };\n function _debugRequire(phase, moduleName2, extra) {\n if (globalThis.__sandboxRequireDebug !== true) {\n return;\n }\n if (moduleName2 !== \"rivetkit\" && moduleName2 !== \"@rivetkit/traces\" && moduleName2 !== \"@rivetkit/on-change\" && moduleName2 !== \"async_hooks\" && !moduleName2.startsWith(\"rivetkit/\") && !moduleName2.startsWith(\"@rivetkit/\")) {\n return;\n }\n if (typeof console !== \"undefined\" && typeof console.log === \"function\") {\n console.log(\n \"[sandbox.require] \" + phase + \" \" + moduleName2 + (extra ? \" \" + extra : \"\")\n );\n }\n }\n function _requireFrom(moduleName, fromDir) {\n _debugRequire(\"start\", moduleName, fromDir);\n const name = moduleName.replace(/^node:/, \"\");\n let cacheKey = name;\n let resolved = null;\n const isRelative = name.startsWith(\"./\") || name.startsWith(\"../\");\n if (!isRelative && __internalModuleCache[name]) {\n _debugRequire(\"cache-hit\", name, name);\n return __internalModuleCache[name];\n }\n if (name === \"fs\") {\n if (__internalModuleCache[\"fs\"]) return __internalModuleCache[\"fs\"];\n const fsModule = globalThis.bridge?.fs || globalThis.bridge?.default || globalThis._fsModule || {};\n __internalModuleCache[\"fs\"] = fsModule;\n _debugRequire(\"loaded\", name, \"fs-special\");\n return fsModule;\n }\n if (name === \"fs/promises\") {\n if (__internalModuleCache[\"fs/promises\"]) return __internalModuleCache[\"fs/promises\"];\n const fsModule = _requireFrom(\"fs\", fromDir);\n __internalModuleCache[\"fs/promises\"] = fsModule.promises;\n _debugRequire(\"loaded\", name, \"fs-promises-special\");\n return fsModule.promises;\n }\n if (name === \"stream/promises\") {\n if (__internalModuleCache[\"stream/promises\"]) return __internalModuleCache[\"stream/promises\"];\n const streamModule = _requireFrom(\"stream\", fromDir);\n const promisesModule = {\n finished(stream, options) {\n return new Promise(function(resolve2, reject) {\n if (typeof streamModule.finished !== \"function\") {\n resolve2();\n return;\n }\n if (options && typeof options === \"object\" && !Array.isArray(options)) {\n streamModule.finished(stream, options, function(error) {\n if (error) {\n reject(error);\n return;\n }\n resolve2();\n });\n return;\n }\n streamModule.finished(stream, function(error) {\n if (error) {\n reject(error);\n return;\n }\n resolve2();\n });\n });\n },\n pipeline() {\n const args = Array.prototype.slice.call(arguments);\n return new Promise(function(resolve2, reject) {\n if (typeof streamModule.pipeline !== \"function\") {\n reject(new Error(\"stream.pipeline is not supported in sandbox\"));\n return;\n }\n args.push(function(error) {\n if (error) {\n reject(error);\n return;\n }\n resolve2();\n });\n streamModule.pipeline.apply(streamModule, args);\n });\n }\n };\n __internalModuleCache[\"stream/promises\"] = promisesModule;\n _debugRequire(\"loaded\", name, \"stream-promises-special\");\n return promisesModule;\n }\n if (name === \"child_process\") {\n if (__internalModuleCache[\"child_process\"]) return __internalModuleCache[\"child_process\"];\n __internalModuleCache[\"child_process\"] = _childProcessModule;\n _debugRequire(\"loaded\", name, \"child-process-special\");\n return _childProcessModule;\n }\n if (name === \"net\") {\n if (__internalModuleCache[\"net\"]) return __internalModuleCache[\"net\"];\n __internalModuleCache[\"net\"] = _netModule;\n _debugRequire(\"loaded\", name, \"net-special\");\n return _netModule;\n }\n if (name === \"tls\") {\n if (__internalModuleCache[\"tls\"]) return __internalModuleCache[\"tls\"];\n __internalModuleCache[\"tls\"] = _tlsModule;\n _debugRequire(\"loaded\", name, \"tls-special\");\n return _tlsModule;\n }\n if (name === \"http\") {\n if (__internalModuleCache[\"http\"]) return __internalModuleCache[\"http\"];\n __internalModuleCache[\"http\"] = _httpModule;\n _debugRequire(\"loaded\", name, \"http-special\");\n return _httpModule;\n }\n if (name === \"_http_agent\") {\n if (__internalModuleCache[\"_http_agent\"]) return __internalModuleCache[\"_http_agent\"];\n const httpAgentModule = {\n Agent: _httpModule.Agent,\n globalAgent: _httpModule.globalAgent\n };\n __internalModuleCache[\"_http_agent\"] = httpAgentModule;\n _debugRequire(\"loaded\", name, \"http-agent-special\");\n return httpAgentModule;\n }\n if (name === \"https\") {\n if (__internalModuleCache[\"https\"]) return __internalModuleCache[\"https\"];\n __internalModuleCache[\"https\"] = _httpsModule;\n _debugRequire(\"loaded\", name, \"https-special\");\n return _httpsModule;\n }\n if (name === \"http2\") {\n if (__internalModuleCache[\"http2\"]) return __internalModuleCache[\"http2\"];\n __internalModuleCache[\"http2\"] = _http2Module;\n _debugRequire(\"loaded\", name, \"http2-special\");\n return _http2Module;\n }\n if (name === \"dns\") {\n if (__internalModuleCache[\"dns\"]) return __internalModuleCache[\"dns\"];\n __internalModuleCache[\"dns\"] = _dnsModule;\n _debugRequire(\"loaded\", name, \"dns-special\");\n return _dnsModule;\n }\n if (name === \"os\") {\n if (__internalModuleCache[\"os\"]) return __internalModuleCache[\"os\"];\n __internalModuleCache[\"os\"] = _osModule;\n _debugRequire(\"loaded\", name, \"os-special\");\n return _osModule;\n }\n if (name === \"module\") {\n if (__internalModuleCache[\"module\"]) return __internalModuleCache[\"module\"];\n __internalModuleCache[\"module\"] = _moduleModule;\n _debugRequire(\"loaded\", name, \"module-special\");\n return _moduleModule;\n }\n if (name === \"process\") {\n _debugRequire(\"loaded\", name, \"process-special\");\n return globalThis.process;\n }\n if (name === \"async_hooks\") {\n if (__internalModuleCache[\"async_hooks\"]) return __internalModuleCache[\"async_hooks\"];\n class AsyncLocalStorage {\n constructor() {\n this._store = void 0;\n }\n run(store, callback) {\n const previousStore = this._store;\n this._store = store;\n try {\n const args = Array.prototype.slice.call(arguments, 2);\n return callback.apply(void 0, args);\n } finally {\n this._store = previousStore;\n }\n }\n enterWith(store) {\n this._store = store;\n }\n getStore() {\n return this._store;\n }\n disable() {\n this._store = void 0;\n }\n exit(callback) {\n const previousStore = this._store;\n this._store = void 0;\n try {\n const args = Array.prototype.slice.call(arguments, 1);\n return callback.apply(void 0, args);\n } finally {\n this._store = previousStore;\n }\n }\n }\n class AsyncResource {\n constructor(type) {\n this.type = type;\n }\n runInAsyncScope(callback, thisArg) {\n const args = Array.prototype.slice.call(arguments, 2);\n return callback.apply(thisArg, args);\n }\n emitDestroy() {\n }\n }\n const asyncHooksModule = {\n AsyncLocalStorage,\n AsyncResource,\n createHook() {\n return {\n enable() {\n return this;\n },\n disable() {\n return this;\n }\n };\n },\n executionAsyncId() {\n return 1;\n },\n triggerAsyncId() {\n return 0;\n },\n executionAsyncResource() {\n return null;\n }\n };\n __internalModuleCache[\"async_hooks\"] = asyncHooksModule;\n _debugRequire(\"loaded\", name, \"async-hooks-special\");\n return asyncHooksModule;\n }\n if (name === \"diagnostics_channel\") {\n let _createChannel2 = function() {\n return {\n hasSubscribers: false,\n publish: function() {\n },\n subscribe: function() {\n },\n unsubscribe: function() {\n }\n };\n };\n var _createChannel = _createChannel2;\n if (__internalModuleCache[name]) return __internalModuleCache[name];\n const dcModule = {\n channel: function() {\n return _createChannel2();\n },\n hasSubscribers: function() {\n return false;\n },\n tracingChannel: function() {\n return {\n start: _createChannel2(),\n end: _createChannel2(),\n asyncStart: _createChannel2(),\n asyncEnd: _createChannel2(),\n error: _createChannel2(),\n traceSync: function(fn, context, thisArg) {\n var args = Array.prototype.slice.call(arguments, 3);\n return fn.apply(thisArg, args);\n },\n tracePromise: function(fn, context, thisArg) {\n var args = Array.prototype.slice.call(arguments, 3);\n return fn.apply(thisArg, args);\n },\n traceCallback: function(fn, context, thisArg) {\n var args = Array.prototype.slice.call(arguments, 3);\n return fn.apply(thisArg, args);\n }\n };\n },\n Channel: function Channel(name2) {\n this.hasSubscribers = false;\n this.publish = function() {\n };\n this.subscribe = function() {\n };\n this.unsubscribe = function() {\n };\n }\n };\n __internalModuleCache[name] = dcModule;\n _debugRequire(\"loaded\", name, \"diagnostics-channel-special\");\n return dcModule;\n }\n if (_deferredCoreModules.has(name)) {\n if (__internalModuleCache[name]) return __internalModuleCache[name];\n const deferredStub = _createDeferredModuleStub(name);\n __internalModuleCache[name] = deferredStub;\n _debugRequire(\"loaded\", name, \"deferred-stub\");\n return deferredStub;\n }\n if (_unsupportedCoreModules.has(name)) {\n throw new Error(name + \" is not supported in sandbox\");\n }\n const polyfillCode = _loadPolyfill.applySyncPromise(void 0, [name]);\n if (polyfillCode !== null) {\n if (__internalModuleCache[name]) return __internalModuleCache[name];\n const moduleObj = { exports: {} };\n _pendingModules[name] = moduleObj;\n let result = eval(polyfillCode);\n result = _patchPolyfill(name, result);\n if (typeof result === \"object\" && result !== null) {\n Object.assign(moduleObj.exports, result);\n } else {\n moduleObj.exports = result;\n }\n __internalModuleCache[name] = moduleObj.exports;\n delete _pendingModules[name];\n _debugRequire(\"loaded\", name, \"polyfill\");\n return __internalModuleCache[name];\n }\n resolved = _resolveFrom(name, fromDir);\n cacheKey = resolved;\n if (__internalModuleCache[cacheKey]) {\n _debugRequire(\"cache-hit\", name, cacheKey);\n return __internalModuleCache[cacheKey];\n }\n if (_pendingModules[cacheKey]) {\n _debugRequire(\"pending-hit\", name, cacheKey);\n return _pendingModules[cacheKey].exports;\n }\n var source;\n if (typeof _loadFileSync !== \"undefined\") {\n source = _loadFileSync.applySync(void 0, [resolved]);\n }\n if (source === null || source === void 0) {\n source = _loadFile.applySyncPromise(void 0, [resolved, \"require\"]);\n }\n if (source === null) {\n const err = new Error(\"Cannot find module '\" + resolved + \"'\");\n err.code = \"MODULE_NOT_FOUND\";\n throw err;\n }\n if (resolved.endsWith(\".json\")) {\n const parsed = JSON.parse(source);\n __internalModuleCache[cacheKey] = parsed;\n return parsed;\n }\n const normalizedSource = typeof source === \"string\" ? source.replace(/import\\.meta\\.url/g, \"__filename\").replace(/fileURLToPath\\(__filename\\)/g, \"__filename\").replace(/url\\.fileURLToPath\\(__filename\\)/g, \"__filename\").replace(/fileURLToPath\\.call\\(void 0, __filename\\)/g, \"__filename\") : source;\n const module = {\n exports: {},\n filename: resolved,\n dirname: _dirname(resolved),\n id: resolved,\n loaded: false\n };\n _pendingModules[cacheKey] = module;\n const prevModule = _currentModule;\n _currentModule = module;\n try {\n let wrapper;\n try {\n wrapper = new Function(\n \"exports\",\n \"require\",\n \"module\",\n \"__filename\",\n \"__dirname\",\n \"__dynamicImport\",\n normalizedSource + \"\\n//# sourceURL=\" + resolved\n );\n } catch (error) {\n const details = error && error.stack ? error.stack : String(error);\n throw new Error(\"failed to compile module \" + resolved + \": \" + details);\n }\n const moduleRequire = function(request) {\n return _requireFrom(request, module.dirname);\n };\n moduleRequire.resolve = function(request) {\n return _resolveFrom(request, module.dirname);\n };\n const moduleDynamicImport = function(specifier) {\n if (typeof globalThis.__dynamicImport === \"function\") {\n return globalThis.__dynamicImport(specifier, module.dirname);\n }\n return Promise.reject(new Error(\"Dynamic import is not initialized\"));\n };\n wrapper(\n module.exports,\n moduleRequire,\n module,\n resolved,\n module.dirname,\n moduleDynamicImport\n );\n module.loaded = true;\n } catch (error) {\n const details = error && error.stack ? error.stack : String(error);\n throw new Error(\"failed to execute module \" + resolved + \": \" + details);\n } finally {\n _currentModule = prevModule;\n }\n __internalModuleCache[cacheKey] = module.exports;\n delete _pendingModules[cacheKey];\n _debugRequire(\"loaded\", name, cacheKey);\n return module.exports;\n }\n __requireExposeCustomGlobal(\"_requireFrom\", _requireFrom);\n var __moduleCacheProxy = new Proxy(__internalModuleCache, {\n get(target, prop, receiver) {\n return Reflect.get(target, prop, receiver);\n },\n set(_target, prop) {\n throw new TypeError(\"Cannot set require.cache['\" + String(prop) + \"']\");\n },\n deleteProperty(_target, prop) {\n throw new TypeError(\"Cannot delete require.cache['\" + String(prop) + \"']\");\n },\n defineProperty(_target, prop) {\n throw new TypeError(\"Cannot define property '\" + String(prop) + \"' on require.cache\");\n },\n has(target, prop) {\n return Reflect.has(target, prop);\n },\n ownKeys(target) {\n return Reflect.ownKeys(target);\n },\n getOwnPropertyDescriptor(target, prop) {\n return Reflect.getOwnPropertyDescriptor(target, prop);\n }\n });\n globalThis.require.cache = __moduleCacheProxy;\n Object.defineProperty(globalThis, \"_moduleCache\", {\n value: __moduleCacheProxy,\n writable: false,\n configurable: true,\n enumerable: false\n });\n if (typeof _moduleModule !== \"undefined\") {\n if (_moduleModule.Module) {\n _moduleModule.Module._cache = __moduleCacheProxy;\n }\n _moduleModule._cache = __moduleCacheProxy;\n }\n})();\n", "setCommonjsFileGlobals": "\"use strict\";\n(() => {\n // ../core/isolate-runtime/src/common/global-exposure.ts\n function defineRuntimeGlobalBinding(name, value, mutable) {\n Object.defineProperty(globalThis, name, {\n value,\n writable: mutable,\n configurable: mutable,\n enumerable: true\n });\n }\n function createRuntimeGlobalExposer(mutable) {\n return (name, value) => {\n defineRuntimeGlobalBinding(name, value, mutable);\n };\n }\n function getRuntimeExposeMutableGlobal() {\n if (typeof globalThis.__runtimeExposeMutableGlobal === \"function\") {\n return globalThis.__runtimeExposeMutableGlobal;\n }\n return createRuntimeGlobalExposer(true);\n }\n\n // ../core/isolate-runtime/src/inject/set-commonjs-file-globals.ts\n var __runtimeExposeMutableGlobal = getRuntimeExposeMutableGlobal();\n var __commonJsFileConfig = globalThis.__runtimeCommonJsFileConfig ?? {};\n var __filePath = typeof __commonJsFileConfig.filePath === \"string\" ? __commonJsFileConfig.filePath : \"/.js\";\n var __dirname = typeof __commonJsFileConfig.dirname === \"string\" ? __commonJsFileConfig.dirname : \"/\";\n __runtimeExposeMutableGlobal(\"__filename\", __filePath);\n __runtimeExposeMutableGlobal(\"__dirname\", __dirname);\n var __currentModule = globalThis._currentModule;\n if (__currentModule) {\n __currentModule.dirname = __dirname;\n __currentModule.filename = __filePath;\n }\n})();\n", "setStdinData": "\"use strict\";\n(() => {\n // ../core/isolate-runtime/src/inject/set-stdin-data.ts\n if (typeof globalThis._stdinData !== \"undefined\") {\n globalThis._stdinData = globalThis.__runtimeStdinData;\n globalThis._stdinPosition = 0;\n globalThis._stdinEnded = false;\n globalThis._stdinFlowMode = false;\n }\n})();\n", "setupDynamicImport": "\"use strict\";\n(() => {\n // ../core/isolate-runtime/src/common/global-access.ts\n function isObjectLike(value) {\n return value !== null && (typeof value === \"object\" || typeof value === \"function\");\n }\n\n // ../core/isolate-runtime/src/common/global-exposure.ts\n function defineRuntimeGlobalBinding(name, value, mutable) {\n Object.defineProperty(globalThis, name, {\n value,\n writable: mutable,\n configurable: mutable,\n enumerable: true\n });\n }\n function createRuntimeGlobalExposer(mutable) {\n return (name, value) => {\n defineRuntimeGlobalBinding(name, value, mutable);\n };\n }\n function getRuntimeExposeCustomGlobal() {\n if (typeof globalThis.__runtimeExposeCustomGlobal === \"function\") {\n return globalThis.__runtimeExposeCustomGlobal;\n }\n return createRuntimeGlobalExposer(false);\n }\n\n // ../core/isolate-runtime/src/inject/setup-dynamic-import.ts\n var __runtimeExposeCustomGlobal = getRuntimeExposeCustomGlobal();\n var __dynamicImportConfig = globalThis.__runtimeDynamicImportConfig ?? {};\n var __fallbackReferrer = typeof __dynamicImportConfig.referrerPath === \"string\" && __dynamicImportConfig.referrerPath.length > 0 ? __dynamicImportConfig.referrerPath : \"/\";\n var __dynamicImportCache = /* @__PURE__ */ new Map();\n var __resolveDynamicImportPath = function(request, referrer) {\n if (!request.startsWith(\"./\") && !request.startsWith(\"../\") && !request.startsWith(\"/\")) {\n return request;\n }\n const baseDir = referrer.endsWith(\"/\") ? referrer : referrer.slice(0, referrer.lastIndexOf(\"/\")) || \"/\";\n const segments = baseDir.split(\"/\").filter(Boolean);\n for (const part of request.split(\"/\")) {\n if (part === \".\" || part.length === 0) continue;\n if (part === \"..\") {\n segments.pop();\n continue;\n }\n segments.push(part);\n }\n return `/${segments.join(\"/\")}`;\n };\n var __dynamicImportHandler = function(specifier, fromPath) {\n const request = String(specifier);\n const referrer = typeof fromPath === \"string\" && fromPath.length > 0 ? fromPath : __fallbackReferrer;\n let resolved = null;\n if (typeof globalThis._resolveModuleSync !== \"undefined\") {\n resolved = globalThis._resolveModuleSync.applySync(\n void 0,\n [request, referrer, \"import\"]\n );\n }\n const resolvedPath = typeof resolved === \"string\" && resolved.length > 0 ? resolved : __resolveDynamicImportPath(request, referrer);\n const cacheKey = typeof resolved === \"string\" && resolved.length > 0 ? resolved : `${referrer}\\0${request}`;\n const cached = __dynamicImportCache.get(cacheKey);\n if (cached) return Promise.resolve(cached);\n if (typeof globalThis._requireFrom !== \"function\") {\n throw new Error(\"Cannot load module: \" + resolvedPath);\n }\n let mod;\n try {\n mod = globalThis._requireFrom(resolved ?? request, referrer);\n } catch (error) {\n const message = error instanceof Error ? error.message : String(error);\n if (error && typeof error === \"object\" && \"code\" in error && error.code === \"MODULE_NOT_FOUND\") {\n throw new Error(\"Cannot load module: \" + resolvedPath);\n }\n if (message.startsWith(\"Cannot find module \")) {\n throw new Error(\"Cannot load module: \" + resolvedPath);\n }\n throw error;\n }\n const namespaceFallback = { default: mod };\n if (isObjectLike(mod)) {\n for (const key of Object.keys(mod)) {\n if (!(key in namespaceFallback)) {\n namespaceFallback[key] = mod[key];\n }\n }\n }\n __dynamicImportCache.set(cacheKey, namespaceFallback);\n return Promise.resolve(namespaceFallback);\n };\n __runtimeExposeCustomGlobal(\"__dynamicImport\", __dynamicImportHandler);\n})();\n", diff --git a/packages/core/src/shared/bridge-contract.ts b/packages/core/src/shared/bridge-contract.ts index c85b3e77..31684146 100644 --- a/packages/core/src/shared/bridge-contract.ts +++ b/packages/core/src/shared/bridge-contract.ts @@ -39,7 +39,15 @@ export const HOST_BRIDGE_GLOBAL_KEYS = { cryptoCipherivFinal: "_cryptoCipherivFinal", cryptoSign: "_cryptoSign", cryptoVerify: "_cryptoVerify", + cryptoAsymmetricOp: "_cryptoAsymmetricOp", + cryptoCreateKeyObject: "_cryptoCreateKeyObject", cryptoGenerateKeyPairSync: "_cryptoGenerateKeyPairSync", + cryptoGenerateKeySync: "_cryptoGenerateKeySync", + cryptoGeneratePrimeSync: "_cryptoGeneratePrimeSync", + cryptoDiffieHellman: "_cryptoDiffieHellman", + cryptoDiffieHellmanGroup: "_cryptoDiffieHellmanGroup", + cryptoDiffieHellmanSessionCreate: "_cryptoDiffieHellmanSessionCreate", + cryptoDiffieHellmanSessionCall: "_cryptoDiffieHellmanSessionCall", cryptoSubtle: "_cryptoSubtle", fsReadFile: "_fsReadFile", fsWriteFile: "_fsWriteFile", @@ -180,15 +188,15 @@ export type CryptoScryptBridgeRef = BridgeApplySyncRef< string >; export type CryptoCipherivBridgeRef = BridgeApplySyncRef< - [string, string, string, string], + [string, string, string | null, string, string?], string >; export type CryptoDecipherivBridgeRef = BridgeApplySyncRef< - [string, string, string, string, string], + [string, string, string | null, string, string], string >; export type CryptoCipherivCreateBridgeRef = BridgeApplySyncRef< - [string, string, string, string, string], + [string, string, string, string | null, string], number >; export type CryptoCipherivUpdateBridgeRef = BridgeApplySyncRef< @@ -200,17 +208,40 @@ export type CryptoCipherivFinalBridgeRef = BridgeApplySyncRef< string >; export type CryptoSignBridgeRef = BridgeApplySyncRef< - [string, string, string], + [string | null, string, string], string >; export type CryptoVerifyBridgeRef = BridgeApplySyncRef< - [string, string, string, string], + [string | null, string, string, string], boolean >; +export type CryptoAsymmetricOpBridgeRef = BridgeApplySyncRef< + [string, string, string], + string +>; +export type CryptoCreateKeyObjectBridgeRef = BridgeApplySyncRef< + [string, string], + string +>; export type CryptoGenerateKeyPairSyncBridgeRef = BridgeApplySyncRef< [string, string], string >; +export type CryptoGenerateKeySyncBridgeRef = BridgeApplySyncRef< + [string, string], + string +>; +export type CryptoGeneratePrimeSyncBridgeRef = BridgeApplySyncRef< + [number, string], + string +>; +export type CryptoDiffieHellmanBridgeRef = BridgeApplySyncRef<[string], string>; +export type CryptoDiffieHellmanGroupBridgeRef = BridgeApplySyncRef<[string], string>; +export type CryptoDiffieHellmanSessionCreateBridgeRef = BridgeApplySyncRef<[string], number>; +export type CryptoDiffieHellmanSessionCallBridgeRef = BridgeApplySyncRef< + [number, string], + string +>; export type CryptoSubtleBridgeRef = BridgeApplySyncRef<[string], string>; // Filesystem boundary contracts. diff --git a/packages/core/src/shared/global-exposure.ts b/packages/core/src/shared/global-exposure.ts index 9148bf36..a85689c1 100644 --- a/packages/core/src/shared/global-exposure.ts +++ b/packages/core/src/shared/global-exposure.ts @@ -243,11 +243,51 @@ export const NODE_CUSTOM_GLOBAL_INVENTORY: readonly CustomGlobalInventoryEntry[] classification: "hardened", rationale: "Host crypto verify bridge reference.", }, + { + name: "_cryptoAsymmetricOp", + classification: "hardened", + rationale: "Host asymmetric crypto operation bridge reference.", + }, + { + name: "_cryptoCreateKeyObject", + classification: "hardened", + rationale: "Host asymmetric key import bridge reference.", + }, { name: "_cryptoGenerateKeyPairSync", classification: "hardened", rationale: "Host crypto key-pair generation bridge reference.", }, + { + name: "_cryptoGenerateKeySync", + classification: "hardened", + rationale: "Host symmetric crypto key generation bridge reference.", + }, + { + name: "_cryptoGeneratePrimeSync", + classification: "hardened", + rationale: "Host prime generation bridge reference.", + }, + { + name: "_cryptoDiffieHellman", + classification: "hardened", + rationale: "Host stateless Diffie-Hellman bridge reference.", + }, + { + name: "_cryptoDiffieHellmanGroup", + classification: "hardened", + rationale: "Host Diffie-Hellman group bridge reference.", + }, + { + name: "_cryptoDiffieHellmanSessionCreate", + classification: "hardened", + rationale: "Host Diffie-Hellman/ECDH session creation bridge reference.", + }, + { + name: "_cryptoDiffieHellmanSessionCall", + classification: "hardened", + rationale: "Host Diffie-Hellman/ECDH session method bridge reference.", + }, { name: "_cryptoSubtle", classification: "hardened", diff --git a/packages/nodejs/src/bridge-contract.ts b/packages/nodejs/src/bridge-contract.ts index 5a7e39e5..6499fdcd 100644 --- a/packages/nodejs/src/bridge-contract.ts +++ b/packages/nodejs/src/bridge-contract.ts @@ -35,7 +35,15 @@ export const HOST_BRIDGE_GLOBAL_KEYS = { cryptoCipherivFinal: "_cryptoCipherivFinal", cryptoSign: "_cryptoSign", cryptoVerify: "_cryptoVerify", + cryptoAsymmetricOp: "_cryptoAsymmetricOp", + cryptoCreateKeyObject: "_cryptoCreateKeyObject", cryptoGenerateKeyPairSync: "_cryptoGenerateKeyPairSync", + cryptoGenerateKeySync: "_cryptoGenerateKeySync", + cryptoGeneratePrimeSync: "_cryptoGeneratePrimeSync", + cryptoDiffieHellman: "_cryptoDiffieHellman", + cryptoDiffieHellmanGroup: "_cryptoDiffieHellmanGroup", + cryptoDiffieHellmanSessionCreate: "_cryptoDiffieHellmanSessionCreate", + cryptoDiffieHellmanSessionCall: "_cryptoDiffieHellmanSessionCall", cryptoSubtle: "_cryptoSubtle", fsReadFile: "_fsReadFile", fsWriteFile: "_fsWriteFile", @@ -185,15 +193,15 @@ export type CryptoScryptBridgeRef = BridgeApplySyncRef< string >; export type CryptoCipherivBridgeRef = BridgeApplySyncRef< - [string, string, string, string], + [string, string, string | null, string, string?], string >; export type CryptoDecipherivBridgeRef = BridgeApplySyncRef< - [string, string, string, string, string], + [string, string, string | null, string, string], string >; export type CryptoCipherivCreateBridgeRef = BridgeApplySyncRef< - [string, string, string, string, string], + [string, string, string, string | null, string], number >; export type CryptoCipherivUpdateBridgeRef = BridgeApplySyncRef< @@ -205,17 +213,40 @@ export type CryptoCipherivFinalBridgeRef = BridgeApplySyncRef< string >; export type CryptoSignBridgeRef = BridgeApplySyncRef< - [string, string, string], + [string | null, string, string], string >; export type CryptoVerifyBridgeRef = BridgeApplySyncRef< - [string, string, string, string], + [string | null, string, string, string], boolean >; +export type CryptoAsymmetricOpBridgeRef = BridgeApplySyncRef< + [string, string, string], + string +>; +export type CryptoCreateKeyObjectBridgeRef = BridgeApplySyncRef< + [string, string], + string +>; export type CryptoGenerateKeyPairSyncBridgeRef = BridgeApplySyncRef< [string, string], string >; +export type CryptoGenerateKeySyncBridgeRef = BridgeApplySyncRef< + [string, string], + string +>; +export type CryptoGeneratePrimeSyncBridgeRef = BridgeApplySyncRef< + [number, string], + string +>; +export type CryptoDiffieHellmanBridgeRef = BridgeApplySyncRef<[string], string>; +export type CryptoDiffieHellmanGroupBridgeRef = BridgeApplySyncRef<[string], string>; +export type CryptoDiffieHellmanSessionCreateBridgeRef = BridgeApplySyncRef<[string], number>; +export type CryptoDiffieHellmanSessionCallBridgeRef = BridgeApplySyncRef< + [number, string], + string +>; export type CryptoSubtleBridgeRef = BridgeApplySyncRef<[string], string>; // Filesystem boundary contracts. diff --git a/packages/nodejs/src/bridge-handlers.ts b/packages/nodejs/src/bridge-handlers.ts index 9142f59e..b434b7a1 100644 --- a/packages/nodejs/src/bridge-handlers.ts +++ b/packages/nodejs/src/bridge-handlers.ts @@ -26,7 +26,20 @@ import { generateKeyPairSync, createPrivateKey, createPublicKey, + createSecretKey, + createDiffieHellman, + getDiffieHellman, + createECDH, + diffieHellman, + generateKeySync, + generatePrimeSync, + publicEncrypt, + privateDecrypt, + privateEncrypt, + publicDecrypt, timingSafeEqual, + constants as cryptoConstants, + KeyObject, type Cipher, type Decipher, } from "node:crypto"; @@ -34,6 +47,8 @@ import { HOST_BRIDGE_GLOBAL_KEYS, } from "./bridge-contract.js"; import { + AF_INET, + SOCK_STREAM, mkdir, FDTableManager, O_RDONLY, @@ -89,12 +104,509 @@ export interface CryptoBridgeResult { dispose: () => void; } +type SerializedKeyValue = + | { + kind: "string"; + value: string; + } + | { + kind: "buffer"; + value: string; + } + | { + kind: "keyObject"; + value: SerializedSandboxKeyObject; + } + | { + kind: "object"; + value: Record; + }; + +interface SerializedSandboxKeyObject { + type: "public" | "private" | "secret"; + pem?: string; + raw?: string; + asymmetricKeyType?: string; + asymmetricKeyDetails?: Record; + jwk?: Record; +} + +type SerializedBridgeValue = + | null + | boolean + | number + | string + | { + __type: "buffer"; + value: string; + } + | { + __type: "bigint"; + value: string; + } + | { + __type: "keyObject"; + value: SerializedSandboxKeyObject; + } + | SerializedBridgeValue[] + | { + [key: string]: SerializedBridgeValue; + }; + /** Stateful cipher/decipher session stored between bridge calls. */ interface CipherSession { cipher: Cipher | Decipher; algorithm: string; } +interface SerializedDispatchError { + message: string; + name?: string; + code?: string; + stack?: string; +} + +type DiffieHellmanSession = + | ReturnType + | ReturnType + | ReturnType; + +function serializeKeyDetails(details: unknown): Record | undefined { + if (!details || typeof details !== "object") { + return undefined; + } + + return Object.fromEntries( + Object.entries(details).map(([key, value]) => [ + key, + typeof value === "bigint" + ? { __type: "bigint", value: value.toString() } + : value, + ]), + ); +} + +function serializeKeyValue(value: unknown): SerializedKeyValue { + if (Buffer.isBuffer(value)) { + return { + kind: "buffer", + value: value.toString("base64"), + }; + } + + if (typeof value === "string") { + return { + kind: "string", + value, + }; + } + + if ( + value && + typeof value === "object" && + "type" in value && + ((value as { type?: unknown }).type === "public" || + (value as { type?: unknown }).type === "private") && + typeof (value as { export?: unknown }).export === "function" + ) { + return { + kind: "keyObject", + value: serializeSandboxKeyObject(value as any), + }; + } + + return { + kind: "object", + value: value as Record, + }; +} + +function exportAsPem(keyObject: ReturnType | ReturnType): string { + return keyObject.type === "private" + ? (keyObject.export({ type: "pkcs8", format: "pem" }) as string) + : (keyObject.export({ type: "spki", format: "pem" }) as string); +} + +function serializeSandboxKeyObject( + keyObject: ReturnType | ReturnType, +): SerializedSandboxKeyObject { + let jwk: Record | undefined; + try { + jwk = keyObject.export({ format: "jwk" }) as Record; + } catch { + jwk = undefined; + } + + return { + type: keyObject.type, + pem: exportAsPem(keyObject), + asymmetricKeyType: keyObject.asymmetricKeyType ?? undefined, + asymmetricKeyDetails: serializeKeyDetails(keyObject.asymmetricKeyDetails), + jwk, + }; +} + +function serializeAnyKeyObject(keyObject: any): SerializedSandboxKeyObject { + if (keyObject.type === "secret") { + return { + type: "secret", + raw: Buffer.from(keyObject.export()).toString("base64"), + }; + } + + return serializeSandboxKeyObject(keyObject); +} + +function serializeBridgeValue(value: unknown): SerializedBridgeValue { + if (value === null || typeof value === "string" || typeof value === "number" || typeof value === "boolean") { + return value; + } + + if (typeof value === "bigint") { + return { + __type: "bigint", + value: value.toString(), + }; + } + + if (Buffer.isBuffer(value)) { + return { + __type: "buffer", + value: value.toString("base64"), + }; + } + + if (value instanceof ArrayBuffer) { + return { + __type: "buffer", + value: Buffer.from(value).toString("base64"), + }; + } + + if (ArrayBuffer.isView(value)) { + return { + __type: "buffer", + value: Buffer.from(value.buffer, value.byteOffset, value.byteLength).toString("base64"), + }; + } + + if (Array.isArray(value)) { + return value.map((entry) => serializeBridgeValue(entry)); + } + + if ( + value && + typeof value === "object" && + "type" in value && + (((value as { type?: unknown }).type === "public" || + (value as { type?: unknown }).type === "private" || + (value as { type?: unknown }).type === "secret")) && + typeof (value as { export?: unknown }).export === "function" + ) { + return { + __type: "keyObject", + value: serializeAnyKeyObject(value as any), + }; + } + + if (value && typeof value === "object") { + return Object.fromEntries( + Object.entries(value).flatMap(([key, entry]) => + entry === undefined ? [] : [[key, serializeBridgeValue(entry)]], + ), + ); + } + + return String(value); +} + +function deserializeSandboxKeyObject(serialized: SerializedSandboxKeyObject): any { + if (serialized.type === "secret") { + return createSecretKey(Buffer.from(serialized.raw || "", "base64")); + } + + if (serialized.type === "private") { + return createPrivateKey(String(serialized.pem || "")); + } + + return createPublicKey(String(serialized.pem || "")); +} + +function deserializeBridgeValue(value: SerializedBridgeValue): unknown { + if (value === null || typeof value === "string" || typeof value === "number" || typeof value === "boolean") { + return value; + } + + if (Array.isArray(value)) { + return value.map((entry) => deserializeBridgeValue(entry)); + } + + if ("__type" in value) { + if (value.__type === "buffer") { + return Buffer.from((value as { value: string }).value, "base64"); + } + if (value.__type === "bigint") { + return BigInt((value as { value: string }).value); + } + if (value.__type === "keyObject") { + return deserializeSandboxKeyObject((value as { value: SerializedSandboxKeyObject }).value); + } + } + + return Object.fromEntries( + Object.entries(value).map(([key, entry]) => [key, deserializeBridgeValue(entry)]), + ); +} + +function parseSerializedOptions( + optionsJson: unknown, +): unknown { + const parsed = JSON.parse(String(optionsJson)) as { + hasOptions?: boolean; + options?: SerializedBridgeValue; + }; + if (!parsed || parsed.hasOptions !== true) { + return undefined; + } + return deserializeBridgeValue(parsed.options ?? null); +} + +function serializeDispatchError(error: unknown): SerializedDispatchError { + if (error instanceof Error) { + const withCode = error as Error & { + code?: unknown; + }; + return { + message: error.message, + name: error.name, + code: typeof withCode.code === "string" ? withCode.code : undefined, + stack: error.stack, + }; + } + + return { + message: String(error), + name: "Error", + }; +} + +function restoreDispatchArgument(value: unknown): unknown { + if (!value || typeof value !== "object") { + return value; + } + + if ( + (value as { __secureExecDispatchType?: unknown }).__secureExecDispatchType === + "undefined" + ) { + return undefined; + } + + if (Array.isArray(value)) { + return value.map((entry) => restoreDispatchArgument(entry)); + } + + return Object.fromEntries( + Object.entries(value).map(([key, entry]) => [key, restoreDispatchArgument(entry)]), + ); +} + +function normalizeBridgeAlgorithm(algorithm: unknown): string | null { + if (algorithm === null || algorithm === undefined || algorithm === "") { + return null; + } + + return String(algorithm); +} + +interface BridgeCryptoKeyData { + type: "public" | "private" | "secret"; + extractable: boolean; + algorithm: Record; + usages: string[]; + _pem?: string; + _jwk?: Record; + _raw?: string; + _sourceKeyObjectData?: Record; +} + +function decodeBridgeBuffer(data: unknown): Buffer { + return Buffer.from(String(data), "base64"); +} + +function sanitizeJsonValue(value: unknown): unknown { + if (typeof value === "bigint") { + return Number(value); + } + if (Array.isArray(value)) { + return value.map((entry) => sanitizeJsonValue(entry)); + } + if (!value || typeof value !== "object") { + return value; + } + return Object.fromEntries( + Object.entries(value as Record).map(([key, entry]) => [ + key, + sanitizeJsonValue(entry), + ]), + ); +} + +function serializeCryptoKeyDataFromKeyObject( + keyObject: KeyObject, + type: "public" | "private" | "secret", + algorithm: Record, + extractable: boolean, + usages: string[], +): BridgeCryptoKeyData { + if (type === "secret") { + return { + type, + algorithm, + extractable, + usages, + _raw: keyObject.export().toString("base64"), + _sourceKeyObjectData: { + type: "secret", + raw: keyObject.export().toString("base64"), + }, + }; + } + + return { + type, + algorithm, + extractable, + usages, + _pem: + type === "private" + ? (keyObject.export({ type: "pkcs8", format: "pem" }) as string) + : (keyObject.export({ type: "spki", format: "pem" }) as string), + _sourceKeyObjectData: { + type, + pem: + type === "private" + ? (keyObject.export({ type: "pkcs8", format: "pem" }) as string) + : (keyObject.export({ type: "spki", format: "pem" }) as string), + asymmetricKeyType: keyObject.asymmetricKeyType, + asymmetricKeyDetails: sanitizeJsonValue(keyObject.asymmetricKeyDetails), + }, + }; +} + +function deserializeCryptoKeyObject(key: BridgeCryptoKeyData): KeyObject { + if (key.type === "secret") { + return createSecretKey(decodeBridgeBuffer(key._raw)); + } + + return key.type === "private" + ? createPrivateKey(key._pem ?? "") + : createPublicKey(key._pem ?? ""); +} + +function normalizeHmacLength(hashName: string, explicitLength?: unknown): number { + if (typeof explicitLength === "number") { + return explicitLength; + } + + switch (hashName) { + case "SHA-1": + case "SHA-256": + return 512; + case "SHA-384": + case "SHA-512": + return 1024; + default: + return 512; + } +} + +function sliceDerivedBits(secret: Buffer, length: unknown): Buffer { + if (length === undefined || length === null) { + return Buffer.from(secret); + } + + const requestedBits = Number(length); + const maxBits = secret.byteLength * 8; + if (requestedBits > maxBits) { + throw new Error("derived bit length is too small"); + } + + const requestedBytes = Math.ceil(requestedBits / 8); + const derived = Buffer.from(secret.subarray(0, requestedBytes)); + const remainder = requestedBits % 8; + if (remainder !== 0 && derived.length > 0) { + derived[derived.length - 1] &= 0xff << (8 - remainder); + } + return derived; +} + +function deriveSecretKeyData( + derivedKeyAlgorithm: Record | string, + extractable: boolean, + usages: string[], + secret: Buffer, +): BridgeCryptoKeyData { + const normalizedAlgorithm = + typeof derivedKeyAlgorithm === "string" + ? { name: derivedKeyAlgorithm } + : derivedKeyAlgorithm; + const algorithmName = String(normalizedAlgorithm.name ?? ""); + if (algorithmName === "HMAC") { + const hashName = + typeof normalizedAlgorithm.hash === "string" + ? normalizedAlgorithm.hash + : String((normalizedAlgorithm.hash as { name?: string } | undefined)?.name ?? ""); + const lengthBits = normalizeHmacLength(hashName, normalizedAlgorithm.length); + const keyBytes = Buffer.from(secret.subarray(0, Math.ceil(lengthBits / 8))); + return serializeCryptoKeyDataFromKeyObject( + createSecretKey(keyBytes), + "secret", + { + name: "HMAC", + hash: { name: hashName }, + length: lengthBits, + }, + extractable, + usages, + ); + } + + const lengthBits = Number(normalizedAlgorithm.length ?? secret.byteLength * 8); + const keyBytes = Buffer.from(secret.subarray(0, Math.ceil(lengthBits / 8))); + return serializeCryptoKeyDataFromKeyObject( + createSecretKey(keyBytes), + "secret", + { + ...normalizedAlgorithm, + length: lengthBits, + }, + extractable, + usages, + ); +} + +function resolveDerivedKeyLengthBits( + derivedKeyAlgorithm: Record | string, + fallbackBits: number, +): number { + const normalizedAlgorithm = + typeof derivedKeyAlgorithm === "string" + ? { name: derivedKeyAlgorithm } + : derivedKeyAlgorithm; + if (typeof normalizedAlgorithm.length === "number") { + return normalizedAlgorithm.length; + } + if (normalizedAlgorithm.name === "HMAC") { + const hashName = + typeof normalizedAlgorithm.hash === "string" + ? normalizedAlgorithm.hash + : String((normalizedAlgorithm.hash as { name?: string } | undefined)?.name ?? ""); + return normalizeHmacLength(hashName); + } + return fallbackBits; +} + /** * Build crypto bridge handlers. * @@ -110,6 +622,8 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { // create/update/final bridge calls (needed for ssh2 streaming AES-GCM). const cipherSessions = new Map(); let nextCipherSessionId = 1; + const diffieHellmanSessions = new Map(); + let nextDiffieHellmanSessionId = 1; // Secure randomness — cap matches Web Crypto API spec (65536 bytes). handlers[K.cryptoRandomFill] = (byteLength: unknown) => { @@ -183,14 +697,29 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { keyBase64: unknown, ivBase64: unknown, dataBase64: unknown, + optionsJson?: unknown, ) => { const key = Buffer.from(String(keyBase64), "base64"); - const iv = Buffer.from(String(ivBase64), "base64"); + const iv = ivBase64 === null ? null : Buffer.from(String(ivBase64), "base64"); const data = Buffer.from(String(dataBase64), "base64"); - const cipher = createCipheriv(String(algorithm), key, iv) as any; + const options = optionsJson ? JSON.parse(String(optionsJson)) : {}; + const cipher = createCipheriv(String(algorithm), key, iv, ( + options.authTagLength !== undefined + ? { authTagLength: options.authTagLength } + : undefined + ) as any) as any; + if (options.validateOnly) { + return JSON.stringify({ data: "" }); + } + if (options.aad) { + cipher.setAAD(Buffer.from(String(options.aad), "base64"), options.aadOptions); + } + if (options.autoPadding !== undefined) { + cipher.setAutoPadding(Boolean(options.autoPadding)); + } const encrypted = Buffer.concat([cipher.update(data), cipher.final()]); - const isGcm = String(algorithm).includes("-gcm"); - if (isGcm) { + const isAead = /-(gcm|ccm)$/i.test(String(algorithm)); + if (isAead) { return JSON.stringify({ data: encrypted.toString("base64"), authTag: cipher.getAuthTag().toString("base64"), @@ -209,14 +738,27 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { optionsJson: unknown, ) => { const key = Buffer.from(String(keyBase64), "base64"); - const iv = Buffer.from(String(ivBase64), "base64"); + const iv = ivBase64 === null ? null : Buffer.from(String(ivBase64), "base64"); const data = Buffer.from(String(dataBase64), "base64"); const options = JSON.parse(String(optionsJson)); - const decipher = createDecipheriv(String(algorithm), key, iv) as any; - const isGcm = String(algorithm).includes("-gcm"); - if (isGcm && options.authTag) { + const decipher = createDecipheriv(String(algorithm), key, iv, ( + options.authTagLength !== undefined + ? { authTagLength: options.authTagLength } + : undefined + ) as any) as any; + if (options.validateOnly) { + return ""; + } + const isAead = /-(gcm|ccm)$/i.test(String(algorithm)); + if (isAead && options.authTag) { decipher.setAuthTag(Buffer.from(options.authTag, "base64")); } + if (options.aad) { + decipher.setAAD(Buffer.from(String(options.aad), "base64"), options.aadOptions); + } + if (options.autoPadding !== undefined) { + decipher.setAutoPadding(Boolean(options.autoPadding)); + } return Buffer.concat([decipher.update(data), decipher.final()]).toString( "base64", ); @@ -233,19 +775,27 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { ) => { const algo = String(algorithm); const key = Buffer.from(String(keyBase64), "base64"); - const iv = Buffer.from(String(ivBase64), "base64"); + const iv = ivBase64 === null ? null : Buffer.from(String(ivBase64), "base64"); const options = optionsJson ? JSON.parse(String(optionsJson)) : {}; - const isGcm = algo.includes("-gcm"); + const isAead = /-(gcm|ccm)$/i.test(algo); let instance: Cipher | Decipher; if (String(mode) === "decipher") { - const d = createDecipheriv(algo, key, iv) as any; - if (isGcm && options.authTag) { + const d = createDecipheriv(algo, key, iv, ( + options.authTagLength !== undefined + ? { authTagLength: options.authTagLength } + : undefined + ) as any) as any; + if (isAead && options.authTag) { d.setAuthTag(Buffer.from(options.authTag, "base64")); } instance = d; } else { - instance = createCipheriv(algo, key, iv) as any; + instance = createCipheriv(algo, key, iv, ( + options.authTagLength !== undefined + ? { authTagLength: options.authTagLength } + : undefined + ) as any) as any; } const sessionId = nextCipherSessionId++; @@ -274,8 +824,8 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { if (!session) throw new Error(`Cipher session ${id} not found`); cipherSessions.delete(id); const final = session.cipher.final(); - const isGcm = session.algorithm.includes("-gcm"); - if (isGcm) { + const isAead = /-(gcm|ccm)$/i.test(session.algorithm); + if (isAead) { const authTag = (session.cipher as any).getAuthTag?.(); return JSON.stringify({ data: final.toString("base64"), @@ -289,11 +839,11 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { handlers[K.cryptoSign] = ( algorithm: unknown, dataBase64: unknown, - keyPem: unknown, + keyJson: unknown, ) => { const data = Buffer.from(String(dataBase64), "base64"); - const key = createPrivateKey(String(keyPem)); - const signature = sign(String(algorithm) || null, data, key); + const key = deserializeBridgeValue(JSON.parse(String(keyJson)) as SerializedBridgeValue) as any; + const signature = sign(normalizeBridgeAlgorithm(algorithm), data, key); return signature.toString("base64"); }; @@ -301,31 +851,198 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { handlers[K.cryptoVerify] = ( algorithm: unknown, dataBase64: unknown, - keyPem: unknown, + keyJson: unknown, signatureBase64: unknown, ) => { const data = Buffer.from(String(dataBase64), "base64"); - const key = createPublicKey(String(keyPem)); + const key = deserializeBridgeValue(JSON.parse(String(keyJson)) as SerializedBridgeValue) as any; const signature = Buffer.from(String(signatureBase64), "base64"); - return verify(String(algorithm) || null, data, key, signature); + return verify(normalizeBridgeAlgorithm(algorithm), data, key, signature); + }; + + // Asymmetric encrypt/decrypt — use real Node crypto so DER inputs, encrypted + // PEM options bags, and sandbox KeyObject handles all follow host semantics. + handlers[K.cryptoAsymmetricOp] = ( + operation: unknown, + keyJson: unknown, + dataBase64: unknown, + ) => { + const key = deserializeBridgeValue(JSON.parse(String(keyJson)) as SerializedBridgeValue) as any; + const data = Buffer.from(String(dataBase64), "base64"); + switch (String(operation)) { + case "publicEncrypt": + return publicEncrypt(key, data).toString("base64"); + case "privateDecrypt": + return privateDecrypt(key, data).toString("base64"); + case "privateEncrypt": + return privateEncrypt(key, data).toString("base64"); + case "publicDecrypt": + return publicDecrypt(key, data).toString("base64"); + default: + throw new Error(`Unsupported asymmetric crypto operation: ${String(operation)}`); + } + }; + + // createPublicKey/createPrivateKey — import through host crypto so metadata + // like asymmetricKeyType/asymmetricKeyDetails survives reconstruction. + handlers[K.cryptoCreateKeyObject] = ( + operation: unknown, + keyJson: unknown, + ) => { + const key = deserializeBridgeValue(JSON.parse(String(keyJson)) as SerializedBridgeValue) as any; + switch (String(operation)) { + case "createPrivateKey": + return JSON.stringify(serializeAnyKeyObject(createPrivateKey(key))); + case "createPublicKey": + return JSON.stringify(serializeAnyKeyObject(createPublicKey(key))); + default: + throw new Error(`Unsupported key creation operation: ${String(operation)}`); + } }; - // generateKeyPairSync — host generates key pair, returns PEM strings as JSON. + // generateKeyPairSync — host generates key pair, preserving requested encodings. + // For KeyObject output, serialize PEM + metadata so the isolate can recreate a + // Node-compatible KeyObject surface. handlers[K.cryptoGenerateKeyPairSync] = ( type: unknown, optionsJson: unknown, ) => { - const options = JSON.parse(String(optionsJson)); - const genOptions = { - ...options, - publicKeyEncoding: { type: "spki" as const, format: "pem" as const }, - privateKeyEncoding: { type: "pkcs8" as const, format: "pem" as const }, + const options = parseSerializedOptions(optionsJson); + const encodingOptions = options as + | { + publicKeyEncoding?: unknown; + privateKeyEncoding?: unknown; + } + | undefined; + const hasExplicitEncoding = + encodingOptions && + (encodingOptions.publicKeyEncoding || encodingOptions.privateKeyEncoding); + const { publicKey, privateKey } = generateKeyPairSync(type as any, options as any); + + if (hasExplicitEncoding) { + return JSON.stringify({ + publicKey: serializeKeyValue(publicKey as unknown), + privateKey: serializeKeyValue(privateKey as unknown), + }); + } + + return JSON.stringify({ + publicKey: serializeSandboxKeyObject(publicKey as any), + privateKey: serializeSandboxKeyObject(privateKey as any), + }); + }; + + // generateKeySync — host generates symmetric KeyObject values with native + // validation so length/error semantics match Node. + handlers[K.cryptoGenerateKeySync] = ( + type: unknown, + optionsJson: unknown, + ) => { + const options = parseSerializedOptions(optionsJson); + return JSON.stringify( + serializeAnyKeyObject(generateKeySync(type as any, options as any)), + ); + }; + + // generatePrimeSync — host generates prime material so bigint/add/rem options + // follow Node semantics instead of polyfill approximations. + handlers[K.cryptoGeneratePrimeSync] = ( + size: unknown, + optionsJson: unknown, + ) => { + const options = parseSerializedOptions(optionsJson); + const prime = + options === undefined + ? generatePrimeSync(size as any) + : generatePrimeSync(size as any, options as any); + return JSON.stringify(serializeBridgeValue(prime)); + }; + + // Diffie-Hellman/ECDH — keep native host objects alive by session id so + // sandbox calls preserve Node's return values, validation, and stateful key material. + handlers[K.cryptoDiffieHellman] = (optionsJson: unknown) => { + const options = deserializeBridgeValue( + JSON.parse(String(optionsJson)) as SerializedBridgeValue, + ) as Parameters[0]; + return JSON.stringify( + serializeBridgeValue(diffieHellman(options)), + ); + }; + + handlers[K.cryptoDiffieHellmanGroup] = (name: unknown) => { + const group = getDiffieHellman(String(name)); + return JSON.stringify({ + prime: serializeBridgeValue(group.getPrime()), + generator: serializeBridgeValue(group.getGenerator()), + }); + }; + + handlers[K.cryptoDiffieHellmanSessionCreate] = (requestJson: unknown) => { + const request = JSON.parse(String(requestJson)) as { + type: "dh" | "group" | "ecdh"; + name?: string; + args?: SerializedBridgeValue[]; + }; + const args = (request.args ?? []).map((value) => + deserializeBridgeValue(value), + ); + + let session: DiffieHellmanSession; + switch (request.type) { + case "dh": + session = createDiffieHellman(...(args as Parameters)); + break; + case "group": + session = getDiffieHellman(String(request.name)); + break; + case "ecdh": + session = createECDH(String(request.name)); + break; + default: + throw new Error(`Unsupported Diffie-Hellman session type: ${String((request as any).type)}`); + } + + const sessionId = nextDiffieHellmanSessionId++; + diffieHellmanSessions.set(sessionId, session); + return sessionId; + }; + + handlers[K.cryptoDiffieHellmanSessionCall] = ( + sessionId: unknown, + requestJson: unknown, + ) => { + const session = diffieHellmanSessions.get(Number(sessionId)); + if (!session) { + throw new Error(`Diffie-Hellman session ${String(sessionId)} not found`); + } + + const request = JSON.parse(String(requestJson)) as { + method: string; + args?: SerializedBridgeValue[]; }; - const { publicKey, privateKey } = generateKeyPairSync( - type as any, - genOptions as any, + const args = (request.args ?? []).map((value) => + deserializeBridgeValue(value), ); - return JSON.stringify({ publicKey, privateKey }); + + const sessionRecord = session as unknown as Record; + + if (request.method === "verifyError") { + return JSON.stringify({ + result: typeof sessionRecord.verifyError === "number" ? sessionRecord.verifyError : undefined, + hasResult: typeof sessionRecord.verifyError === "number", + }); + } + + const method = sessionRecord[request.method]; + if (typeof method !== "function") { + throw new Error(`Unsupported Diffie-Hellman method: ${request.method}`); + } + + const result = (method as (...callArgs: unknown[]) => unknown).apply(session, args); + return JSON.stringify({ + result: result === undefined ? null : serializeBridgeValue(result), + hasResult: result !== undefined, + }); }; // crypto.subtle — single dispatcher for all Web Crypto API operations. @@ -349,18 +1066,19 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { if ( algoName === "AES-GCM" || algoName === "AES-CBC" || - algoName === "AES-CTR" + algoName === "AES-CTR" || + algoName === "AES-KW" ) { const keyBytes = Buffer.allocUnsafe(req.algorithm.length / 8); randomFillSync(keyBytes); return JSON.stringify({ - key: { - type: "secret", - algorithm: req.algorithm, - extractable: req.extractable, - usages: req.usages, - _raw: keyBytes.toString("base64"), - }, + key: serializeCryptoKeyDataFromKeyObject( + createSecretKey(keyBytes), + "secret", + req.algorithm, + req.extractable, + req.usages, + ), }); } if (algoName === "HMAC") { @@ -368,25 +1086,21 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { typeof req.algorithm.hash === "string" ? req.algorithm.hash : req.algorithm.hash.name; - const hashLens: Record = { - "SHA-1": 20, - "SHA-256": 32, - "SHA-384": 48, - "SHA-512": 64, - }; - const len = req.algorithm.length - ? req.algorithm.length / 8 - : hashLens[hashName] || 32; + const len = normalizeHmacLength(hashName, req.algorithm.length) / 8; const keyBytes = Buffer.allocUnsafe(len); randomFillSync(keyBytes); return JSON.stringify({ - key: { - type: "secret", - algorithm: req.algorithm, - extractable: req.extractable, - usages: req.usages, - _raw: keyBytes.toString("base64"), - }, + key: serializeCryptoKeyDataFromKeyObject( + createSecretKey(keyBytes), + "secret", + { + ...req.algorithm, + hash: { name: hashName }, + length: len * 8, + }, + req.extractable, + req.usages, + ), }); } if ( @@ -417,25 +1131,93 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { format: "pem" as const, }, }); + const publicKeyObject = createPublicKey(publicKey); + const privateKeyObject = createPrivateKey(privateKey); return JSON.stringify({ - publicKey: { - type: "public", - algorithm: req.algorithm, - extractable: req.extractable, - usages: req.usages.filter((u: string) => + publicKey: serializeCryptoKeyDataFromKeyObject( + publicKeyObject, + "public", + req.algorithm, + req.extractable, + req.usages.filter((u: string) => ["verify", "encrypt", "wrapKey"].includes(u), ), - _pem: publicKey, - }, - privateKey: { - type: "private", - algorithm: req.algorithm, - extractable: req.extractable, - usages: req.usages.filter((u: string) => + ), + privateKey: serializeCryptoKeyDataFromKeyObject( + privateKeyObject, + "private", + req.algorithm, + req.extractable, + req.usages.filter((u: string) => ["sign", "decrypt", "unwrapKey"].includes(u), ), - _pem: privateKey, - }, + ), + }); + } + if (algoName === "ECDSA" || algoName === "ECDH") { + const { publicKey, privateKey } = generateKeyPairSync("ec", { + namedCurve: String(req.algorithm.namedCurve), + publicKeyEncoding: { type: "spki", format: "pem" }, + privateKeyEncoding: { type: "pkcs8", format: "pem" }, + }); + return JSON.stringify({ + publicKey: serializeCryptoKeyDataFromKeyObject( + createPublicKey(publicKey), + "public", + { ...req.algorithm, name: algoName }, + req.extractable, + req.usages.filter((u: string) => + algoName === "ECDSA" + ? ["verify"].includes(u) + : ["deriveBits", "deriveKey"].includes(u), + ), + ), + privateKey: serializeCryptoKeyDataFromKeyObject( + createPrivateKey(privateKey), + "private", + { ...req.algorithm, name: algoName }, + req.extractable, + req.usages.filter((u: string) => + algoName === "ECDSA" + ? ["sign"].includes(u) + : ["deriveBits", "deriveKey"].includes(u), + ), + ), + }); + } + if (["Ed25519", "Ed448", "X25519", "X448"].includes(algoName)) { + const keyPair = + algoName === "Ed25519" + ? generateKeyPairSync("ed25519") + : algoName === "Ed448" + ? generateKeyPairSync("ed448") + : algoName === "X25519" + ? generateKeyPairSync("x25519") + : generateKeyPairSync("x448"); + const { publicKey, privateKey } = keyPair; + return JSON.stringify({ + publicKey: serializeCryptoKeyDataFromKeyObject( + publicKey, + "public", + { name: algoName }, + req.extractable, + req.usages.filter((u: string) => + algoName.startsWith("Ed") + ? ["verify"].includes(u) + : ["deriveBits", "deriveKey"].includes(u), + ), + ), + privateKey: serializeCryptoKeyDataFromKeyObject( + privateKey, + "private", + { name: algoName }, + req.extractable, + req.usages.filter((u: string) => + algoName.startsWith("Ed") + ? ["sign"].includes(u) + : ["deriveBits", "deriveKey"].includes(u), + ), + ), }); } throw new Error(`Unsupported key algorithm: ${algoName}`); @@ -444,13 +1226,22 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { const { format, keyData, algorithm, extractable, usages } = req; if (format === "raw") { return JSON.stringify({ - key: { - type: "secret", - algorithm, + key: serializeCryptoKeyDataFromKeyObject( + createSecretKey(Buffer.from(keyData, "base64")), + "secret", + algorithm.name === "HMAC" && !algorithm.length + ? { + ...algorithm, + hash: + typeof algorithm.hash === "string" + ? { name: algorithm.hash } + : algorithm.hash, + length: Buffer.from(keyData, "base64").byteLength * 8, + } + : algorithm, extractable, usages, - _raw: keyData, - }, + ), }); } if (format === "jwk") { @@ -459,13 +1250,13 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { if (jwk.kty === "oct") { const raw = Buffer.from(jwk.k, "base64url"); return JSON.stringify({ - key: { - type: "secret", + key: serializeCryptoKeyDataFromKeyObject( + createSecretKey(raw), + "secret", algorithm, extractable, usages, - _raw: raw.toString("base64"), - }, + ), }); } if (jwk.d) { @@ -475,13 +1266,25 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { format: "pem", }) as string; return JSON.stringify({ - key: { type: "private", algorithm, extractable, usages, _pem: pem }, + key: serializeCryptoKeyDataFromKeyObject( + createPrivateKey(pem), + "private", + algorithm, + extractable, + usages, + ), }); } const keyObj = createPublicKey({ key: jwk, format: "jwk" }); const pem = keyObj.export({ type: "spki", format: "pem" }) as string; return JSON.stringify({ - key: { type: "public", algorithm, extractable, usages, _pem: pem }, + key: serializeCryptoKeyDataFromKeyObject( + createPublicKey(pem), + "public", + algorithm, + extractable, + usages, + ), }); } if (format === "pkcs8") { @@ -496,7 +1299,13 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { format: "pem", }) as string; return JSON.stringify({ - key: { type: "private", algorithm, extractable, usages, _pem: pem }, + key: serializeCryptoKeyDataFromKeyObject( + createPrivateKey(pem), + "private", + algorithm, + extractable, + usages, + ), }); } if (format === "spki") { @@ -508,7 +1317,13 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { }); const pem = keyObj.export({ type: "spki", format: "pem" }) as string; return JSON.stringify({ - key: { type: "public", algorithm, extractable, usages, _pem: pem }, + key: serializeCryptoKeyDataFromKeyObject( + createPublicKey(pem), + "public", + algorithm, + extractable, + usages, + ), }); } throw new Error(`Unsupported import format: ${format}`); @@ -655,12 +1470,12 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { throw new Error(`Unsupported decrypt algorithm: ${algoName}`); } case "sign": { - const { key, data } = req; + const { key, data, algorithm } = req; const dataBytes = Buffer.from(data, "base64"); const algoName = key.algorithm.name; if (algoName === "HMAC") { const rawKey = Buffer.from(key._raw, "base64"); - const hashAlgo = normalizeHash(key.algorithm.hash); + const hashAlgo = normalizeHash(algorithm.hash ?? key.algorithm.hash); return JSON.stringify({ data: createHmac(hashAlgo, rawKey) .update(dataBytes) @@ -674,16 +1489,44 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { data: sign(hashAlgo, dataBytes, pkey).toString("base64"), }); } + if (algoName === "RSA-PSS") { + const hashAlgo = normalizeHash(key.algorithm.hash); + return JSON.stringify({ + data: sign(hashAlgo, dataBytes, { + key: createPrivateKey(key._pem), + padding: cryptoConstants.RSA_PKCS1_PSS_PADDING, + saltLength: algorithm.saltLength, + }).toString("base64"), + }); + } + if (algoName === "ECDSA") { + const hashAlgo = normalizeHash(algorithm.hash ?? key.algorithm.hash); + return JSON.stringify({ + data: sign(hashAlgo, dataBytes, createPrivateKey(key._pem)).toString("base64"), + }); + } + if (algoName === "Ed25519" || algoName === "Ed448") { + if ( + algoName === "Ed448" && + algorithm.context && + Buffer.from(algorithm.context, "base64").byteLength > 0 + ) { + throw new Error("Non zero-length context is not yet supported"); + } + return JSON.stringify({ + data: sign(null, dataBytes, createPrivateKey(key._pem)).toString("base64"), + }); + } throw new Error(`Unsupported sign algorithm: ${algoName}`); } case "verify": { - const { key, signature, data } = req; + const { key, signature, data, algorithm } = req; const dataBytes = Buffer.from(data, "base64"); const sigBytes = Buffer.from(signature, "base64"); const algoName = key.algorithm.name; if (algoName === "HMAC") { const rawKey = Buffer.from(key._raw, "base64"); - const hashAlgo = normalizeHash(key.algorithm.hash); + const hashAlgo = normalizeHash(algorithm.hash ?? key.algorithm.hash); const expected = createHmac(hashAlgo, rawKey) .update(dataBytes) .digest(); @@ -700,14 +1543,42 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { result: verify(hashAlgo, dataBytes, pkey, sigBytes), }); } + if (algoName === "RSA-PSS") { + const hashAlgo = normalizeHash(key.algorithm.hash); + return JSON.stringify({ + result: verify(hashAlgo, dataBytes, { + key: createPublicKey(key._pem), + padding: cryptoConstants.RSA_PKCS1_PSS_PADDING, + saltLength: algorithm.saltLength, + }, sigBytes), + }); + } + if (algoName === "ECDSA") { + const hashAlgo = normalizeHash(algorithm.hash ?? key.algorithm.hash); + return JSON.stringify({ + result: verify(hashAlgo, dataBytes, createPublicKey(key._pem), sigBytes), + }); + } + if (algoName === "Ed25519" || algoName === "Ed448") { + if ( + algoName === "Ed448" && + algorithm.context && + Buffer.from(algorithm.context, "base64").byteLength > 0 + ) { + throw new Error("Non zero-length context is not yet supported"); + } + return JSON.stringify({ + result: verify(null, dataBytes, createPublicKey(key._pem), sigBytes), + }); + } throw new Error(`Unsupported verify algorithm: ${algoName}`); } case "deriveBits": { const { algorithm, baseKey, length } = req; const algoName = algorithm.name; - const bitLength = length; - const byteLength = bitLength / 8; if (algoName === "PBKDF2") { + const bitLength = Number(length); + const byteLength = bitLength / 8; const password = Buffer.from(baseKey._raw, "base64"); const salt = Buffer.from(algorithm.salt, "base64"); const hash = normalizeHash(algorithm.hash); @@ -721,6 +1592,8 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { return JSON.stringify({ data: derived.toString("base64") }); } if (algoName === "HKDF") { + const bitLength = Number(length); + const byteLength = bitLength / 8; const ikm = Buffer.from(baseKey._raw, "base64"); const salt = Buffer.from(algorithm.salt, "base64"); const info = Buffer.from(algorithm.info, "base64"); @@ -730,14 +1603,26 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { ); return JSON.stringify({ data: derived.toString("base64") }); } + if (algoName === "ECDH" || algoName === "X25519" || algoName === "X448") { + const secret = diffieHellman({ + privateKey: deserializeCryptoKeyObject(baseKey), + publicKey: deserializeCryptoKeyObject(algorithm.public), + }); + return JSON.stringify({ + data: sliceDerivedBits(secret, length).toString("base64"), + }); + } throw new Error(`Unsupported deriveBits algorithm: ${algoName}`); } case "deriveKey": { const { algorithm, baseKey, derivedKeyAlgorithm, extractable, usages } = req; const algoName = algorithm.name; - const keyLengthBits = derivedKeyAlgorithm.length; - const byteLength = keyLengthBits / 8; if (algoName === "PBKDF2") { + const keyLengthBits = resolveDerivedKeyLengthBits( + derivedKeyAlgorithm, + Buffer.from(baseKey._raw, "base64").byteLength * 8, + ); + const byteLength = keyLengthBits / 8; const password = Buffer.from(baseKey._raw, "base64"); const salt = Buffer.from(algorithm.salt, "base64"); const hash = normalizeHash(algorithm.hash); @@ -748,17 +1633,14 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { byteLength, hash, ); - return JSON.stringify({ - key: { - type: "secret", - algorithm: derivedKeyAlgorithm, - extractable, - usages, - _raw: derived.toString("base64"), - }, - }); + return JSON.stringify({ key: deriveSecretKeyData(derivedKeyAlgorithm, extractable, usages, derived) }); } if (algoName === "HKDF") { + const keyLengthBits = resolveDerivedKeyLengthBits( + derivedKeyAlgorithm, + Buffer.from(baseKey._raw, "base64").byteLength * 8, + ); + const byteLength = keyLengthBits / 8; const ikm = Buffer.from(baseKey._raw, "base64"); const salt = Buffer.from(algorithm.salt, "base64"); const info = Buffer.from(algorithm.info, "base64"); @@ -766,18 +1648,185 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { const derived = Buffer.from( hkdfSync(hash, ikm, salt, info, byteLength), ); + return JSON.stringify({ key: deriveSecretKeyData(derivedKeyAlgorithm, extractable, usages, derived) }); + } + if (algoName === "ECDH" || algoName === "X25519" || algoName === "X448") { + const secret = diffieHellman({ + privateKey: deserializeCryptoKeyObject(baseKey), + publicKey: deserializeCryptoKeyObject(algorithm.public), + }); return JSON.stringify({ - key: { - type: "secret", - algorithm: derivedKeyAlgorithm, - extractable, - usages, - _raw: derived.toString("base64"), - }, + key: deriveSecretKeyData(derivedKeyAlgorithm, extractable, usages, secret), }); } throw new Error(`Unsupported deriveKey algorithm: ${algoName}`); } + case "wrapKey": { + const { format, key, wrappingKey, wrapAlgorithm } = req; + const exported = JSON.parse( + handlers[K.cryptoSubtle]( + JSON.stringify({ + op: "exportKey", + format, + key, + }), + ) as string, + ) as { data?: string; jwk?: JsonWebKey }; + const keyData = + format === "jwk" + ? Buffer.from(JSON.stringify(exported.jwk), "utf8") + : decodeBridgeBuffer(exported.data); + if (wrapAlgorithm.name === "AES-KW") { + const wrappingBytes = decodeBridgeBuffer(wrappingKey._raw); + const cipherName = `id-aes${wrappingBytes.byteLength * 8}-wrap`; + const cipher = createCipheriv( + cipherName as never, + wrappingBytes, + Buffer.alloc(8, 0xa6), + ); + return JSON.stringify({ + data: Buffer.concat([cipher.update(keyData), cipher.final()]).toString("base64"), + }); + } + if (wrapAlgorithm.name === "RSA-OAEP") { + return JSON.stringify({ + data: publicEncrypt( + { + key: createPublicKey(wrappingKey._pem), + oaepHash: normalizeHash(wrappingKey.algorithm.hash), + oaepLabel: wrapAlgorithm.label + ? decodeBridgeBuffer(wrapAlgorithm.label) + : undefined, + }, + keyData, + ).toString("base64"), + }); + } + if ( + wrapAlgorithm.name === "AES-CTR" || + wrapAlgorithm.name === "AES-CBC" || + wrapAlgorithm.name === "AES-GCM" + ) { + const wrappingBytes = decodeBridgeBuffer(wrappingKey._raw); + const algorithmName = + wrapAlgorithm.name === "AES-CTR" + ? `aes-${wrappingBytes.byteLength * 8}-ctr` + : wrapAlgorithm.name === "AES-CBC" + ? `aes-${wrappingBytes.byteLength * 8}-cbc` + : `aes-${wrappingBytes.byteLength * 8}-gcm`; + const iv = + wrapAlgorithm.name === "AES-CTR" + ? decodeBridgeBuffer(wrapAlgorithm.counter) + : decodeBridgeBuffer(wrapAlgorithm.iv); + const cipher = createCipheriv( + algorithmName as never, + wrappingBytes, + iv, + wrapAlgorithm.name === "AES-GCM" + ? ({ authTagLength: (wrapAlgorithm.tagLength || 128) / 8 } as never) + : undefined, + ) as Cipher & { setAAD?: (aad: Buffer) => void; getAuthTag?: () => Buffer }; + if (wrapAlgorithm.name === "AES-GCM" && wrapAlgorithm.additionalData) { + cipher.setAAD?.(decodeBridgeBuffer(wrapAlgorithm.additionalData)); + } + const encrypted = Buffer.concat([cipher.update(keyData), cipher.final()]); + const payload = + wrapAlgorithm.name === "AES-GCM" + ? Buffer.concat([encrypted, cipher.getAuthTag?.() ?? Buffer.alloc(0)]) + : encrypted; + return JSON.stringify({ data: payload.toString("base64") }); + } + throw new Error(`Unsupported wrap algorithm: ${wrapAlgorithm.name}`); + } + case "unwrapKey": { + const { + format, + wrappedKey, + unwrappingKey, + unwrapAlgorithm, + unwrappedKeyAlgorithm, + extractable, + usages, + } = req; + let unwrapped: Buffer; + if (unwrapAlgorithm.name === "AES-KW") { + const unwrappingBytes = decodeBridgeBuffer(unwrappingKey._raw); + const cipherName = `id-aes${unwrappingBytes.byteLength * 8}-wrap`; + const decipher = createDecipheriv( + cipherName as never, + unwrappingBytes, + Buffer.alloc(8, 0xa6), + ); + unwrapped = Buffer.concat([ + decipher.update(decodeBridgeBuffer(wrappedKey)), + decipher.final(), + ]); + } else if (unwrapAlgorithm.name === "RSA-OAEP") { + unwrapped = privateDecrypt( + { + key: createPrivateKey(unwrappingKey._pem), + oaepHash: normalizeHash(unwrappingKey.algorithm.hash), + oaepLabel: unwrapAlgorithm.label + ? decodeBridgeBuffer(unwrapAlgorithm.label) + : undefined, + }, + decodeBridgeBuffer(wrappedKey), + ); + } else if ( + unwrapAlgorithm.name === "AES-CTR" || + unwrapAlgorithm.name === "AES-CBC" || + unwrapAlgorithm.name === "AES-GCM" + ) { + const unwrappingBytes = decodeBridgeBuffer(unwrappingKey._raw); + const algorithmName = + unwrapAlgorithm.name === "AES-CTR" + ? `aes-${unwrappingBytes.byteLength * 8}-ctr` + : unwrapAlgorithm.name === "AES-CBC" + ? `aes-${unwrappingBytes.byteLength * 8}-cbc` + : `aes-${unwrappingBytes.byteLength * 8}-gcm`; + const iv = + unwrapAlgorithm.name === "AES-CTR" + ? decodeBridgeBuffer(unwrapAlgorithm.counter) + : decodeBridgeBuffer(unwrapAlgorithm.iv); + const wrappedBytes = decodeBridgeBuffer(wrappedKey); + const decipher = createDecipheriv( + algorithmName as never, + unwrappingBytes, + iv, + unwrapAlgorithm.name === "AES-GCM" + ? ({ authTagLength: (unwrapAlgorithm.tagLength || 128) / 8 } as never) + : undefined, + ) as Decipher & { + setAAD?: (aad: Buffer) => void; + setAuthTag?: (tag: Buffer) => void; + }; + let ciphertext = wrappedBytes; + if (unwrapAlgorithm.name === "AES-GCM") { + const tagLength = (unwrapAlgorithm.tagLength || 128) / 8; + ciphertext = wrappedBytes.subarray(0, wrappedBytes.byteLength - tagLength); + decipher.setAuthTag?.(wrappedBytes.subarray(wrappedBytes.byteLength - tagLength)); + if (unwrapAlgorithm.additionalData) { + decipher.setAAD?.(decodeBridgeBuffer(unwrapAlgorithm.additionalData)); + } + } + unwrapped = Buffer.concat([decipher.update(ciphertext), decipher.final()]); + } else { + throw new Error(`Unsupported unwrap algorithm: ${unwrapAlgorithm.name}`); + } + return handlers[K.cryptoSubtle]( + JSON.stringify({ + op: "importKey", + format, + keyData: + format === "jwk" + ? JSON.parse(unwrapped.toString("utf8")) + : unwrapped.toString("base64"), + algorithm: unwrappedKeyAlgorithm, + extractable, + usages, + }), + ); + } default: throw new Error(`Unsupported subtle operation: ${req.op}`); } @@ -785,6 +1834,7 @@ export function buildCryptoBridgeHandlers(): CryptoBridgeResult { const dispose = () => { cipherSessions.clear(); + diffieHellmanSessions.clear(); }; return { handlers, dispose }; @@ -833,9 +1883,6 @@ function buildKernelSocketBridgeHandlers( socketTable: import("@secure-exec/core").SocketTable, pid: number, ): NetSocketBridgeResult { - const { - AF_INET, SOCK_STREAM, - } = require("@secure-exec/core") as typeof import("@secure-exec/core"); const handlers: BridgeHandlers = {}; const K = HOST_BRIDGE_GLOBAL_KEYS; @@ -1411,11 +2458,11 @@ export function buildModuleLoadingBridgeHandlers( const handler = dispatchHandlers[method]; if (!handler) return JSON.stringify({ __bd_error: `No handler: ${method}` }); try { - const args = JSON.parse(argsJson); + const args = restoreDispatchArgument(JSON.parse(argsJson)); const result = await handler(...(Array.isArray(args) ? args : [args])); return JSON.stringify({ __bd_result: result }); } catch (err) { - return JSON.stringify({ __bd_error: err instanceof Error ? err.message : String(err) }); + return JSON.stringify({ __bd_error: serializeDispatchError(err) }); } } @@ -2317,10 +3364,6 @@ export function buildNetworkBridgeHandlers(deps: NetworkBridgeDeps): NetworkBrid return (async () => { try { - const { - AF_INET, SOCK_STREAM, - } = require("@secure-exec/core") as typeof import("@secure-exec/core"); - const host = normalizeLoopbackHostname(options.hostname); debugHttpBridge("listen start", options.serverId, host, options.port ?? 0); const listenSocketId = socketTable.create(AF_INET, SOCK_STREAM, 0, pid); diff --git a/packages/nodejs/src/bridge/dispatch.ts b/packages/nodejs/src/bridge/dispatch.ts index f8af7f48..c827d452 100644 --- a/packages/nodejs/src/bridge/dispatch.ts +++ b/packages/nodejs/src/bridge/dispatch.ts @@ -6,8 +6,14 @@ type DispatchBridgeRef = LoadPolyfillBridgeRef & { declare const _loadPolyfill: DispatchBridgeRef | undefined; +function encodeDispatchArgs(args: unknown[]): string { + return JSON.stringify(args, (_key, value) => + value === undefined ? { __secureExecDispatchType: "undefined" } : value, + ); +} + function encodeDispatch(method: string, args: unknown[]): string { - return `__bd:${method}:${JSON.stringify(args)}`; + return `__bd:${method}:${encodeDispatchArgs(args)}`; } function parseDispatchResult(resultJson: string | null): T { @@ -16,11 +22,24 @@ function parseDispatchResult(resultJson: string | null): T { } const parsed = JSON.parse(resultJson) as { - __bd_error?: string; + __bd_error?: { + message: string; + name?: string; + code?: string; + stack?: string; + }; __bd_result?: T; }; if (parsed.__bd_error) { - throw new Error(parsed.__bd_error); + const error = new Error(parsed.__bd_error.message); + error.name = parsed.__bd_error.name ?? "Error"; + if (parsed.__bd_error.code !== undefined) { + (error as Error & { code?: string }).code = parsed.__bd_error.code; + } + if (parsed.__bd_error.stack) { + error.stack = parsed.__bd_error.stack; + } + throw error; } return parsed.__bd_result as T; } diff --git a/packages/nodejs/src/bridge/network.ts b/packages/nodejs/src/bridge/network.ts index 59b7ceba..40667f8e 100644 --- a/packages/nodejs/src/bridge/network.ts +++ b/packages/nodejs/src/bridge/network.ts @@ -427,7 +427,7 @@ type EventListener = (...args: unknown[]) => void; // Module-level globalAgent used by ClientRequest when no agent option is provided. // Initialized lazily after Agent class is defined; set by createHttpModule(). -let _moduleGlobalAgent: { _acquireSlot(key: string): Promise; _releaseSlot(key: string): void; _getHostKey(options: { hostname?: string; host?: string; port?: string | number }): string } | null = null; +let _moduleGlobalAgent: Agent | null = null; /** * Polyfill of Node.js `http.IncomingMessage` (client-side response). Buffers @@ -452,7 +452,7 @@ export class IncomingMessage { private _listeners: Record; complete: boolean; aborted: boolean; - socket: null; + socket: FakeSocket | UpgradeSocket | null; private _bodyConsumed: boolean; private _ended: boolean; private _flowing: boolean; @@ -462,7 +462,7 @@ export class IncomingMessage { destroyed: boolean; private _encoding?: string; - constructor(response?: { headers?: Record; url?: string; status?: number; statusText?: string; body?: string; trailers?: Record }) { + constructor(response?: { headers?: Record; url?: string; status?: number; statusText?: string; body?: string; trailers?: Record; bodyEncoding?: "utf8" | "base64" }) { this.headers = response?.headers || {}; this.rawHeaders = []; if (this.headers && typeof this.headers === "object") { @@ -489,7 +489,7 @@ export class IncomingMessage { this.statusCode = response?.status; this.statusMessage = response?.statusText; // Decode base64 body if x-body-encoding header is set - const bodyEncoding = this.headers['x-body-encoding']; + const bodyEncoding = response?.bodyEncoding || this.headers['x-body-encoding']; if (bodyEncoding === 'base64' && response?.body && typeof Buffer !== 'undefined') { this._body = Buffer.from(response.body, 'base64').toString('binary'); this._isBinary = true; @@ -756,11 +756,13 @@ export class ClientRequest { private _body = ""; private _bodyBytes = 0; private _ended = false; - private _agent: { _acquireSlot(key: string): Promise; _releaseSlot(key: string): void; _getHostKey(options: { hostname?: string; host?: string; port?: string | number }): string } | null; + private _agent: Agent | null; private _hostKey: string; - socket: FakeSocket; + private _socketEndListener: EventListener | null = null; + socket!: FakeSocket; finished = false; aborted = false; + reusedSocket = false; constructor(options: nodeHttp.RequestOptions, callback?: (res: IncomingMessage) => void) { this._options = options; @@ -777,23 +779,47 @@ export class ClientRequest { } this._hostKey = this._agent ? this._agent._getHostKey(options as { hostname?: string; host?: string; port?: string | number }) : ""; - // Create socket-like object and emit 'socket' event - this.socket = new FakeSocket({ - host: (options.hostname || options.host || "localhost") as string, - port: Number(options.port) || 80, - }); - Promise.resolve().then(() => this._emit("socket", this.socket)); - // Execute request asynchronously Promise.resolve().then(() => this._execute()); } - private async _execute(): Promise { - // Acquire agent slot before executing + _assignSocket(socket: FakeSocket, reusedSocket: boolean): void { + this.socket = socket; + this.reusedSocket = reusedSocket; + const trackedSocket = socket as FakeSocket & { + _agentPermanentListenersInstalled?: boolean; + }; + if (!trackedSocket._agentPermanentListenersInstalled) { + trackedSocket._agentPermanentListenersInstalled = true; + socket.on("error", () => {}); + socket.on("end", () => {}); + } + this._socketEndListener = () => {}; + socket.on("end", this._socketEndListener); + this._emit("socket", socket); + void this._dispatchWithSocket(socket); + } + + _handleSocketError(err: Error): void { + this._emit("error", err); + } + + private _finalizeSocket( + socket: FakeSocket, + keepSocketAlive: boolean, + ): void { + if (this._socketEndListener) { + socket.off("end", this._socketEndListener); + this._socketEndListener = null; + } if (this._agent) { - await this._agent._acquireSlot(this._hostKey); + this._agent._releaseSocket(this._hostKey, socket, this._options, keepSocketAlive); + } else if (!socket.destroyed) { + socket.destroy(); } + } + private async _dispatchWithSocket(socket: FakeSocket): Promise { try { if (typeof _networkHttpRequestRaw === 'undefined') { console.error('http/https request requires NetworkAdapter to be configured'); @@ -835,8 +861,11 @@ export class ClientRequest { status?: number; statusText?: string; body?: string; + bodyEncoding?: "utf8" | "base64"; trailers?: Record; upgradeSocketId?: number; + connectionEnded?: boolean; + connectionReset?: boolean; }; this.finished = true; @@ -845,22 +874,37 @@ export class ClientRequest { if (response.status === 101) { const res = new IncomingMessage(response); // Use UpgradeSocket for bidirectional data relay when socketId is available - let socket: FakeSocket | UpgradeSocket = this.socket; + let upgradeSocket: FakeSocket | UpgradeSocket = socket; if (response.upgradeSocketId != null) { - socket = new UpgradeSocket(response.upgradeSocketId, { + upgradeSocket = new UpgradeSocket(response.upgradeSocketId, { host: this._options.hostname as string, port: Number(this._options.port) || 80, }); - upgradeSocketInstances.set(response.upgradeSocketId, socket); + upgradeSocketInstances.set(response.upgradeSocketId, upgradeSocket); } const head = typeof Buffer !== "undefined" ? (response.body ? Buffer.from(response.body, "base64") : Buffer.alloc(0)) : new Uint8Array(0); - this._emit("upgrade", res, socket, head); + res.socket = upgradeSocket; + this._emit("upgrade", res, upgradeSocket, head); + return; + } + + if (response.connectionReset) { + const error = new Error("socket hang up"); + this._emit("error", error); + setTimeout(() => socket.destroy(), 0); return; } const res = new IncomingMessage(response); + res.socket = socket; + res.once("end", () => { + this._finalizeSocket(socket, this._agent?.keepAlive === true && !this.aborted); + if (response.connectionEnded) { + setTimeout(() => socket.end(), 0); + } + }); if (this._callback) { this._callback(res); @@ -868,14 +912,22 @@ export class ClientRequest { this._emit("response", res); } catch (err) { this._emit("error", err); - } finally { - // Release agent slot - if (this._agent) { - this._agent._releaseSlot(this._hostKey); - } + this._finalizeSocket(socket, false); } } + private _execute(): void { + if (this._agent) { + this._agent.addRequest(this, this._options); + return; + } + const socket = new FakeSocket({ + host: (this._options.hostname || this._options.host || "localhost") as string, + port: Number(this._options.port) || 80, + }); + this._assignSocket(socket, false); + } + private _buildUrl(): string { const opts = this._options; const protocol = opts.protocol || (opts.port === 443 ? "https:" : "http:"); @@ -896,12 +948,25 @@ export class ClientRequest { this.off(event, wrapper); listener(...args); }; + ( + wrapper as EventListener & { + listener?: EventListener; + } + ).listener = listener; return this.on(event, wrapper); } off(event: string, listener: EventListener): this { if (this._listeners[event]) { - const idx = this._listeners[event].indexOf(listener); + const idx = this._listeners[event].findIndex( + (registered) => + registered === listener || + ( + registered as EventListener & { + listener?: EventListener; + } + ).listener === listener, + ); if (idx !== -1) this._listeners[event].splice(idx, 1); } return this; @@ -931,6 +996,9 @@ export class ClientRequest { abort(): void { this.aborted = true; + if (this.socket && !this.socket.destroyed) { + this.socket.destroy(); + } } setTimeout(_timeout: number): this { @@ -961,6 +1029,9 @@ class FakeSocket { writable = true; readable = true; private _listeners: Record = {}; + private _closed = false; + private _closeScheduled = false; + _freeTimer: ReturnType | null = null; constructor(options?: { host?: string; port?: number }) { this.remoteAddress = options?.host || "127.0.0.1"; @@ -997,99 +1068,409 @@ class FakeSocket { return this.off(event, listener); } + removeAllListeners(event?: string): this { + if (event) { + delete this._listeners[event]; + } else { + this._listeners = {}; + } + return this; + } + emit(event: string, ...args: unknown[]): boolean { const handlers = this._listeners[event]; if (handlers) handlers.slice().forEach((fn) => fn(...args)); return handlers !== undefined && handlers.length > 0; } + listenerCount(event: string): number { + return this._listeners[event]?.length || 0; + } + write(_data: unknown): boolean { return true; } - end(): this { return this; } + end(): this { + if (this.destroyed || this._closed) return this; + this.writable = false; + queueMicrotask(() => { + if (this.destroyed || this._closed) return; + this.readable = false; + this.emit("end"); + this.destroy(); + }); + return this; + } destroy(): this { + if (this.destroyed || this._closed) return this; this.destroyed = true; + this._closed = true; this.writable = false; this.readable = false; + if (!this._closeScheduled) { + this._closeScheduled = true; + queueMicrotask(() => { + this._closeScheduled = false; + this.emit("close"); + }); + } return this; } } -// HTTP Agent with connection pooling via maxSockets +type QueuedAgentRequest = { + request: ClientRequest; + options: nodeHttp.RequestOptions; +}; + +// HTTP Agent with connection pooling via maxSockets/maxTotalSockets class Agent { + static defaultMaxSockets = Infinity; + maxSockets: number; + maxTotalSockets: number; maxFreeSockets: number; keepAlive: boolean; keepAliveMsecs: number; timeout: number; - requests: Record; - sockets: Record; - freeSockets: Record; - - // Per-host active count and pending queue - private _activeCounts = new Map(); - private _queues = new Map void>>(); + requests: Record; + sockets: Record; + freeSockets: Record; + totalSocketCount: number; + private _listeners: Record = {}; constructor(options?: { keepAlive?: boolean; keepAliveMsecs?: number; maxSockets?: number; + maxTotalSockets?: number; maxFreeSockets?: number; timeout?: number; }) { + this._validateSocketCountOption("maxSockets", options?.maxSockets); + this._validateSocketCountOption("maxFreeSockets", options?.maxFreeSockets); + this._validateSocketCountOption("maxTotalSockets", options?.maxTotalSockets); this.keepAlive = options?.keepAlive ?? false; this.keepAliveMsecs = options?.keepAliveMsecs ?? 1000; - this.maxSockets = options?.maxSockets ?? Infinity; + this.maxSockets = options?.maxSockets ?? Agent.defaultMaxSockets; + this.maxTotalSockets = options?.maxTotalSockets ?? Infinity; this.maxFreeSockets = options?.maxFreeSockets ?? 256; this.timeout = options?.timeout ?? -1; this.requests = {}; this.sockets = {}; this.freeSockets = {}; + this.totalSocketCount = 0; } - _getHostKey(options: { hostname?: string; host?: string; port?: string | number }): string { - const host = options.hostname || options.host || "localhost"; - const port = options.port || 80; - return `${host}:${port}`; + private _validateSocketCountOption( + name: "maxSockets" | "maxFreeSockets" | "maxTotalSockets", + value: number | undefined, + ): void { + if (value === undefined) return; + if (typeof value !== "number") { + const received = + typeof value === "string" + ? `type string ('${value}')` + : `type ${typeof value} (${JSON.stringify(value)})`; + const err = new TypeError( + `The "${name}" argument must be of type number. Received ${received}`, + ) as TypeError & { code?: string }; + err.code = "ERR_INVALID_ARG_TYPE"; + throw err; + } + if (Number.isNaN(value) || value <= 0) { + const err = new RangeError( + `The value of "${name}" is out of range. It must be > 0. Received ${String(value)}`, + ) as RangeError & { code?: string }; + err.code = "ERR_OUT_OF_RANGE"; + throw err; + } + } + + getName(options?: { + hostname?: string | null; + host?: string | null; + port?: string | number | null; + localAddress?: string | null; + family?: string | number | null; + socketPath?: string | null; + }): string { + const host = options?.hostname || options?.host || "localhost"; + const port = options?.port ?? ""; + const localAddress = options?.localAddress ?? ""; + let suffix = ""; + if (options?.socketPath) { + suffix = `:${options.socketPath}`; + } else if (options?.family === 4 || options?.family === 6) { + suffix = `:${options.family}`; + } + return `${host}:${port}:${localAddress}${suffix}`; + } + + _getHostKey(options: { + hostname?: string | null; + host?: string | null; + port?: string | number | null; + localAddress?: string | null; + family?: string | number | null; + socketPath?: string | null; + }): string { + return this.getName(options); } - // Wait for an available slot; resolves immediately if under maxSockets - _acquireSlot(hostKey: string): Promise { - const active = this._activeCounts.get(hostKey) || 0; - if (active < this.maxSockets) { - this._activeCounts.set(hostKey, active + 1); - return Promise.resolve(); + on(event: string, listener: EventListener): this { + if (!this._listeners[event]) this._listeners[event] = []; + this._listeners[event].push(listener); + return this; + } + + once(event: string, listener: EventListener): this { + const wrapper = (...args: unknown[]): void => { + this.off(event, wrapper); + listener(...args); + }; + return this.on(event, wrapper); + } + + off(event: string, listener: EventListener): this { + const listeners = this._listeners[event]; + if (!listeners) return this; + const index = listeners.indexOf(listener); + if (index !== -1) listeners.splice(index, 1); + return this; + } + + removeListener(event: string, listener: EventListener): this { + return this.off(event, listener); + } + + emit(event: string, ...args: unknown[]): boolean { + const listeners = this._listeners[event]; + if (!listeners || listeners.length === 0) return false; + listeners.slice().forEach((listener) => listener(...args)); + return true; + } + + createConnection( + options: nodeHttp.RequestOptions & { + keepAlive?: boolean; + keepAliveInitialDelay?: number; + }, + cb?: (err: Error | null, socket?: FakeSocket) => void, + ): FakeSocket { + const socket = new FakeSocket({ + host: String(options.hostname || options.host || "localhost"), + port: Number(options.port) || 80, + }); + if (cb) { + Promise.resolve().then(() => cb(null, socket)); + } + return socket; + } + + addRequest(request: ClientRequest, options: nodeHttp.RequestOptions): void { + const name = this.getName(options); + const freeSocket = this._takeFreeSocket(name); + if (freeSocket) { + this._activateSocket(name, freeSocket); + request._assignSocket(freeSocket, true); + return; + } + + if (this._canCreateSocket(name)) { + this._createSocketForRequest(name, request, options); + return; + } + + if (!this.requests[name]) { + this.requests[name] = []; } - return new Promise((resolve) => { - let queue = this._queues.get(hostKey); - if (!queue) { - queue = []; - this._queues.set(hostKey, queue); + this.requests[name].push({ request, options }); + } + + _releaseSocket( + name: string, + socket: FakeSocket, + options: nodeHttp.RequestOptions, + keepSocketAlive: boolean, + ): void { + this._removeSocket(this.sockets, name, socket); + if (keepSocketAlive && !socket.destroyed) { + const freeList = this.freeSockets[name] ?? (this.freeSockets[name] = []); + if (freeList.length < this.maxFreeSockets) { + if (socket._freeTimer) { + clearTimeout(socket._freeTimer); + socket._freeTimer = null; + } + freeList.push(socket); + if (this.timeout > 0) { + socket._freeTimer = setTimeout(() => { + socket._freeTimer = null; + socket.destroy(); + }, this.timeout); + } + socket.emit("free"); + this.emit("free", socket, options); + } else { + socket.destroy(); } - queue.push(resolve); - }); + } else if (!socket.destroyed) { + socket.destroy(); + } + Promise.resolve().then(() => this._processPendingRequests()); } - // Release a slot; dequeues next pending request if any - _releaseSlot(hostKey: string): void { - const queue = this._queues.get(hostKey); - if (queue && queue.length > 0) { - const next = queue.shift()!; - if (queue.length === 0) this._queues.delete(hostKey); - next(); - } else { - const active = this._activeCounts.get(hostKey) || 1; - const next = active - 1; - if (next <= 0) this._activeCounts.delete(hostKey); - else this._activeCounts.set(hostKey, next); + _removeSocketCompletely(name: string, socket: FakeSocket): void { + if (socket._freeTimer) { + clearTimeout(socket._freeTimer); + socket._freeTimer = null; + } + const removed = + this._removeSocket(this.sockets, name, socket) || + this._removeSocket(this.freeSockets, name, socket); + if (removed) { + this.totalSocketCount = Math.max(0, this.totalSocketCount - 1); + Promise.resolve().then(() => this._processPendingRequests()); + } + } + + private _canCreateSocket(name: string): boolean { + const activeCount = this.sockets[name]?.length ?? 0; + if (activeCount >= this.maxSockets) { + return false; + } + if (this.totalSocketCount < this.maxTotalSockets) { + return true; + } + this._evictFreeSocket(name); + return this.totalSocketCount < this.maxTotalSockets; + } + + private _takeFreeSocket(name: string): FakeSocket | null { + const freeList = this.freeSockets[name]; + while (freeList && freeList.length > 0) { + const socket = freeList.shift()!; + if (!socket.destroyed) { + if (socket._freeTimer) { + clearTimeout(socket._freeTimer); + socket._freeTimer = null; + } + if (freeList.length === 0) delete this.freeSockets[name]; + return socket; + } + this.totalSocketCount = Math.max(0, this.totalSocketCount - 1); + } + if (freeList && freeList.length === 0) { + delete this.freeSockets[name]; + } + return null; + } + + private _activateSocket(name: string, socket: FakeSocket): void { + const activeList = this.sockets[name] ?? (this.sockets[name] = []); + activeList.push(socket); + } + + private _createSocketForRequest( + name: string, + request: ClientRequest, + options: nodeHttp.RequestOptions, + ): void { + let settled = false; + const finish = (err: Error | null, socket?: FakeSocket): void => { + if (settled) return; + settled = true; + if (err || !socket) { + request._handleSocketError(err ?? new Error("Failed to create socket")); + this._processPendingRequests(); + return; + } + this.totalSocketCount += 1; + this._activateSocket(name, socket); + socket.once("close", () => { + this._removeSocketCompletely(name, socket); + }); + request._assignSocket(socket, false); + }; + + const connectionOptions = { + ...options, + keepAlive: this.keepAlive, + keepAliveInitialDelay: this.keepAliveMsecs, + }; + + try { + const maybeSocket = this.createConnection(connectionOptions, (err, socket) => { + finish(err, socket); + }); + if (maybeSocket) { + finish(null, maybeSocket); + } + } catch (err) { + finish(err instanceof Error ? err : new Error(String(err))); + } + } + + private _processPendingRequests(): void { + for (const name of Object.keys(this.requests)) { + const queue = this.requests[name]; + while (queue && queue.length > 0) { + const freeSocket = this._takeFreeSocket(name); + if (freeSocket) { + const entry = queue.shift()!; + this._activateSocket(name, freeSocket); + entry.request._assignSocket(freeSocket, true); + continue; + } + if (!this._canCreateSocket(name)) { + break; + } + const entry = queue.shift()!; + this._createSocketForRequest(name, entry.request, entry.options); + } + if (!queue || queue.length === 0) { + delete this.requests[name]; + } + } + } + + private _removeSocket( + sockets: Record, + name: string, + socket: FakeSocket, + ): boolean { + const list = sockets[name]; + if (!list) return false; + const index = list.indexOf(socket); + if (index === -1) return false; + list.splice(index, 1); + if (list.length === 0) delete sockets[name]; + return true; + } + + private _evictFreeSocket(preferredName: string): void { + const keys = Object.keys(this.freeSockets); + const orderedKeys = keys.includes(preferredName) + ? [...keys.filter((key) => key !== preferredName), preferredName] + : keys; + for (const key of orderedKeys) { + const socket = this.freeSockets[key]?.[0]; + if (!socket) continue; + socket.destroy(); + return; } } destroy(): void { - this._activeCounts.clear(); - for (const [, queue] of this._queues) { - queue.length = 0; + for (const socket of Object.values(this.sockets).flat()) { + socket.destroy(); + } + for (const socket of Object.values(this.freeSockets).flat()) { + socket.destroy(); } - this._queues.clear(); + this.requests = {}; + this.sockets = {}; + this.freeSockets = {}; + this.totalSocketCount = 0; } } @@ -1116,6 +1497,8 @@ interface SerializedServerResponse { headers?: Array<[string, string]>; body?: string; bodyEncoding?: "utf8" | "base64"; + connectionEnded?: boolean; + connectionReset?: boolean; } function debugBridgeNetwork(...args: unknown[]): void { @@ -1318,6 +1701,8 @@ class ServerResponseBridge { private _listeners: Record = {}; private _closedPromise: Promise; private _resolveClosed: (() => void) | null = null; + private _connectionEnded = false; + private _connectionReset = false; constructor() { this._closedPromise = new Promise((resolve) => { @@ -1437,8 +1822,13 @@ class ServerResponseBridge { on: () => this.socket, once: () => this.socket, removeListener: () => this.socket, - destroy: () => {}, - end: () => {}, + destroy: () => { + this._connectionReset = true; + this._finalize(); + }, + end: () => { + this._connectionEnded = true; + }, cork: () => {}, uncork: () => {}, write: () => true, @@ -1460,6 +1850,7 @@ class ServerResponseBridge { } destroy(err?: Error): void { + this._connectionReset = true; if (err) { this._emit("error", err); } @@ -1478,6 +1869,8 @@ class ServerResponseBridge { headers: Array.from(this._headers.entries()), body: bodyBuffer.toString("base64"), bodyEncoding: "base64", + connectionEnded: this._connectionEnded, + connectionReset: this._connectionReset, }; } @@ -1743,6 +2136,14 @@ class Server { } } +// Function-style Server constructor for code that calls http.Server(...) +// without `new`, matching the callable shape Node exposes. +// eslint-disable-next-line @typescript-eslint/no-explicit-any +function ServerCallable(this: any, requestListener?: (req: ServerIncomingMessage, res: ServerResponseBridge) => unknown): Server { + return new Server(requestListener); +} +ServerCallable.prototype = Server.prototype; + /** Route an incoming HTTP request to the server's request listener and return the serialized response. */ async function dispatchServerRequest( serverId: number, @@ -1786,6 +2187,9 @@ async function dispatchServerRequest( } await outgoing.waitForClose(); + // Let same-turn deferred socket teardown (e.g. setImmediate(() => res.connection.end())) + // update the serialized connection flags before the client receives the response. + await new Promise((resolve) => setTimeout(resolve, 0)); return JSON.stringify(outgoing.serialize()); } finally { server._endRequestDispatch(); @@ -2107,7 +2511,7 @@ function createHttpModule(protocol: string): Record { Agent, globalAgent: moduleAgent, - Server: Server as unknown as typeof nodeHttp.Server, + Server: ServerCallable as unknown as typeof nodeHttp.Server, ServerResponse: ServerResponseCallable as unknown as typeof nodeHttp.ServerResponse, IncomingMessage: IncomingMessage as unknown as typeof nodeHttp.IncomingMessage, ClientRequest: ClientRequest as unknown as typeof nodeHttp.ClientRequest, diff --git a/packages/nodejs/src/bridge/process.ts b/packages/nodejs/src/bridge/process.ts index 5a7b76a5..485d9bb3 100644 --- a/packages/nodejs/src/bridge/process.ts +++ b/packages/nodejs/src/bridge/process.ts @@ -14,6 +14,7 @@ import { Buffer as BufferPolyfill } from "buffer"; import type { CryptoRandomFillBridgeRef, CryptoRandomUuidBridgeRef, + CryptoSubtleBridgeRef, FsFacadeBridge, ProcessErrorBridgeRef, ProcessLogBridgeRef, @@ -56,6 +57,7 @@ declare const _log: ProcessLogBridgeRef; declare const _error: ProcessErrorBridgeRef; declare const _cryptoRandomFill: CryptoRandomFillBridgeRef | undefined; declare const _cryptoRandomUUID: CryptoRandomUuidBridgeRef | undefined; +declare const _cryptoSubtle: CryptoSubtleBridgeRef | undefined; // Filesystem bridge for chdir validation declare const _fs: FsFacadeBridge; // PTY setRawMode bridge ref (optional — only present when PTY is attached) @@ -1160,67 +1162,585 @@ function throwUnsupportedCryptoApi(api: "getRandomValues" | "randomUUID"): never throw new Error(`crypto.${api} is not supported in sandbox`); } +interface SandboxCryptoKeyData { + type: "public" | "private" | "secret"; + extractable: boolean; + algorithm: Record; + usages: string[]; + _pem?: string; + _jwk?: Record; + _raw?: string; + _sourceKeyObjectData?: Record; +} + +const kCryptoKeyToken = Symbol("secureExecCryptoKey"); +const kCryptoToken = Symbol("secureExecCrypto"); +const kSubtleToken = Symbol("secureExecSubtle"); +const ERR_INVALID_THIS = "ERR_INVALID_THIS"; +const ERR_ILLEGAL_CONSTRUCTOR = "ERR_ILLEGAL_CONSTRUCTOR"; + +function createNodeTypeError(message: string, code: string): TypeError & { code: string } { + const error = new TypeError(message) as TypeError & { code: string }; + error.code = code; + return error; +} + +function createDomLikeError(name: string, code: number, message: string): Error & { code: number } { + const error = new Error(message) as Error & { code: number }; + error.name = name; + error.code = code; + return error; +} + +function assertCryptoReceiver(receiver: unknown): asserts receiver is SandboxCrypto { + if (!(receiver instanceof SandboxCrypto) || (receiver as SandboxCrypto)._token !== kCryptoToken) { + throw createNodeTypeError("Value of \"this\" must be of type Crypto", ERR_INVALID_THIS); + } +} + +function assertSubtleReceiver(receiver: unknown): asserts receiver is SandboxSubtleCrypto { + if ( + !(receiver instanceof SandboxSubtleCrypto) || + (receiver as SandboxSubtleCrypto)._token !== kSubtleToken + ) { + throw createNodeTypeError("Value of \"this\" must be of type SubtleCrypto", ERR_INVALID_THIS); + } +} + +function isIntegerTypedArray(value: unknown): value is ArrayBufferView { + if (!ArrayBuffer.isView(value) || value instanceof DataView) { + return false; + } + + return ( + value instanceof Int8Array || + value instanceof Int16Array || + value instanceof Int32Array || + value instanceof Uint8Array || + value instanceof Uint16Array || + value instanceof Uint32Array || + value instanceof Uint8ClampedArray || + value instanceof BigInt64Array || + value instanceof BigUint64Array || + BufferPolyfill.isBuffer(value) + ); +} + +function toBase64(data: BufferSource | string): string { + if (typeof data === "string") { + return BufferPolyfill.from(data).toString("base64"); + } + + if (data instanceof ArrayBuffer) { + return BufferPolyfill.from(new Uint8Array(data)).toString("base64"); + } + + if (ArrayBuffer.isView(data)) { + return BufferPolyfill.from( + new Uint8Array(data.buffer, data.byteOffset, data.byteLength), + ).toString("base64"); + } + + return BufferPolyfill.from(data).toString("base64"); +} + +function toArrayBuffer(data: string): ArrayBuffer { + const buf = BufferPolyfill.from(data, "base64"); + return buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength); +} + +function normalizeAlgorithm(algorithm: unknown): Record { + if (typeof algorithm === "string") { + return { name: algorithm }; + } + + return (algorithm ?? {}) as Record; +} + +function normalizeBridgeAlgorithm(algorithm: unknown): Record { + const normalized = { ...normalizeAlgorithm(algorithm) }; + const hash = normalized.hash; + const publicExponent = normalized.publicExponent; + const iv = normalized.iv; + const additionalData = normalized.additionalData; + const salt = normalized.salt; + const info = normalized.info; + const context = normalized.context; + const label = normalized.label; + const publicKey = normalized.public; + + if (hash) { + normalized.hash = normalizeAlgorithm(hash); + } + if (publicExponent && ArrayBuffer.isView(publicExponent)) { + normalized.publicExponent = BufferPolyfill.from( + new Uint8Array( + publicExponent.buffer, + publicExponent.byteOffset, + publicExponent.byteLength, + ), + ).toString("base64"); + } + if (iv) { + normalized.iv = toBase64(iv as BufferSource); + } + if (additionalData) { + normalized.additionalData = toBase64(additionalData as BufferSource); + } + if (salt) { + normalized.salt = toBase64(salt as BufferSource); + } + if (info) { + normalized.info = toBase64(info as BufferSource); + } + if (context) { + normalized.context = toBase64(context as BufferSource); + } + if (label) { + normalized.label = toBase64(label as BufferSource); + } + if ( + publicKey && + typeof publicKey === "object" && + "_keyData" in (publicKey as Record) + ) { + normalized.public = (publicKey as SandboxCryptoKey)._keyData; + } + + return normalized; +} + +class SandboxCryptoKey { + readonly type: "public" | "private" | "secret"; + readonly extractable: boolean; + readonly algorithm: Record; + readonly usages: string[]; + readonly _keyData: SandboxCryptoKeyData; + readonly _pem?: string; + readonly _jwk?: Record; + readonly _raw?: string; + readonly _sourceKeyObjectData?: Record; + readonly [kCryptoKeyToken]: true; + + constructor(keyData?: SandboxCryptoKeyData, token?: symbol) { + if (token !== kCryptoKeyToken || !keyData) { + throw createNodeTypeError("Illegal constructor", ERR_ILLEGAL_CONSTRUCTOR); + } + + this.type = keyData.type; + this.extractable = keyData.extractable; + this.algorithm = keyData.algorithm; + this.usages = keyData.usages; + this._keyData = keyData; + this._pem = keyData._pem; + this._jwk = keyData._jwk; + this._raw = keyData._raw; + this._sourceKeyObjectData = keyData._sourceKeyObjectData; + this[kCryptoKeyToken] = true; + } +} + +Object.defineProperty(SandboxCryptoKey.prototype, Symbol.toStringTag, { + value: "CryptoKey", + configurable: true, +}); + +Object.defineProperty(SandboxCryptoKey, Symbol.hasInstance, { + value(candidate: unknown) { + return Boolean( + candidate && + typeof candidate === "object" && + ( + (candidate as { [kCryptoKeyToken]?: boolean })[kCryptoKeyToken] === true || + ( + "_keyData" in (candidate as Record) && + (candidate as { [Symbol.toStringTag]?: string })[Symbol.toStringTag] === "CryptoKey" + ) + ), + ); + }, + configurable: true, +}); + +function createCryptoKey(keyData: SandboxCryptoKeyData): SandboxCryptoKey { + const globalCryptoKey = globalThis.CryptoKey as + | ({ prototype?: object } & (new (...args: any[]) => CryptoKey)) + | undefined; + if ( + typeof globalCryptoKey === "function" && + globalCryptoKey.prototype && + globalCryptoKey.prototype !== SandboxCryptoKey.prototype + ) { + const key = Object.create(globalCryptoKey.prototype) as SandboxCryptoKey & { + type: SandboxCryptoKey["type"]; + extractable: SandboxCryptoKey["extractable"]; + algorithm: SandboxCryptoKey["algorithm"]; + usages: SandboxCryptoKey["usages"]; + _keyData: SandboxCryptoKey["_keyData"]; + _pem: SandboxCryptoKey["_pem"]; + _jwk: SandboxCryptoKey["_jwk"]; + _raw: SandboxCryptoKey["_raw"]; + _sourceKeyObjectData: SandboxCryptoKey["_sourceKeyObjectData"]; + }; + key.type = keyData.type; + key.extractable = keyData.extractable; + key.algorithm = keyData.algorithm; + key.usages = keyData.usages; + key._keyData = keyData; + key._pem = keyData._pem; + key._jwk = keyData._jwk; + key._raw = keyData._raw; + key._sourceKeyObjectData = keyData._sourceKeyObjectData; + return key; + } + return new SandboxCryptoKey(keyData, kCryptoKeyToken); +} + +function subtleCall(request: Record): string { + if (typeof _cryptoSubtle === "undefined") { + throw new Error("crypto.subtle is not supported in sandbox"); + } + + return _cryptoSubtle.applySync(undefined, [JSON.stringify(request)]); +} + +class SandboxSubtleCrypto { + readonly _token: symbol; + + constructor(token?: symbol) { + if (token !== kSubtleToken) { + throw createNodeTypeError("Illegal constructor", ERR_ILLEGAL_CONSTRUCTOR); + } + + this._token = token; + } + + digest(algorithm: unknown, data: BufferSource): Promise { + assertSubtleReceiver(this); + + return Promise.resolve().then(() => { + const result = JSON.parse( + subtleCall({ + op: "digest", + algorithm: normalizeAlgorithm(algorithm).name, + data: toBase64(data), + }), + ) as { data: string }; + return toArrayBuffer(result.data); + }); + } + + generateKey( + algorithm: unknown, + extractable: boolean, + keyUsages: Iterable, + ): Promise { + assertSubtleReceiver(this); + + return Promise.resolve().then(() => { + const result = JSON.parse( + subtleCall({ + op: "generateKey", + algorithm: normalizeBridgeAlgorithm(algorithm), + extractable, + usages: Array.from(keyUsages), + }), + ) as + | { key: SandboxCryptoKeyData } + | { publicKey: SandboxCryptoKeyData; privateKey: SandboxCryptoKeyData }; + if ("publicKey" in result && "privateKey" in result) { + return { + publicKey: createCryptoKey(result.publicKey), + privateKey: createCryptoKey(result.privateKey), + }; + } + return createCryptoKey(result.key); + }); + } + + importKey( + format: string, + keyData: BufferSource | JsonWebKey, + algorithm: unknown, + extractable: boolean, + keyUsages: Iterable, + ): Promise { + assertSubtleReceiver(this); + + return Promise.resolve().then(() => { + const result = JSON.parse( + subtleCall({ + op: "importKey", + format, + keyData: format === "jwk" ? keyData : toBase64(keyData as BufferSource), + algorithm: normalizeBridgeAlgorithm(algorithm), + extractable, + usages: Array.from(keyUsages), + }), + ) as { key: SandboxCryptoKeyData }; + return createCryptoKey(result.key); + }); + } + + exportKey(format: string, key: SandboxCryptoKey): Promise { + assertSubtleReceiver(this); + + return Promise.resolve().then(() => { + const result = JSON.parse( + subtleCall({ + op: "exportKey", + format, + key: key._keyData, + }), + ) as { data?: string; jwk?: JsonWebKey }; + if (format === "jwk") { + return result.jwk as JsonWebKey; + } + return toArrayBuffer(result.data ?? ""); + }); + } + + encrypt(algorithm: unknown, key: SandboxCryptoKey, data: BufferSource): Promise { + assertSubtleReceiver(this); + + return Promise.resolve().then(() => { + const result = JSON.parse( + subtleCall({ + op: "encrypt", + algorithm: normalizeBridgeAlgorithm(algorithm), + key: key._keyData, + data: toBase64(data), + }), + ) as { data: string }; + return toArrayBuffer(result.data); + }); + } + + decrypt(algorithm: unknown, key: SandboxCryptoKey, data: BufferSource): Promise { + assertSubtleReceiver(this); + + return Promise.resolve().then(() => { + const result = JSON.parse( + subtleCall({ + op: "decrypt", + algorithm: normalizeBridgeAlgorithm(algorithm), + key: key._keyData, + data: toBase64(data), + }), + ) as { data: string }; + return toArrayBuffer(result.data); + }); + } + + sign(algorithm: unknown, key: SandboxCryptoKey, data: BufferSource): Promise { + assertSubtleReceiver(this); + + return Promise.resolve().then(() => { + const result = JSON.parse( + subtleCall({ + op: "sign", + algorithm: normalizeBridgeAlgorithm(algorithm), + key: key._keyData, + data: toBase64(data), + }), + ) as { data: string }; + return toArrayBuffer(result.data); + }); + } + + verify( + algorithm: unknown, + key: SandboxCryptoKey, + signature: BufferSource, + data: BufferSource, + ): Promise { + assertSubtleReceiver(this); + + return Promise.resolve().then(() => { + const result = JSON.parse( + subtleCall({ + op: "verify", + algorithm: normalizeBridgeAlgorithm(algorithm), + key: key._keyData, + signature: toBase64(signature), + data: toBase64(data), + }), + ) as { result: boolean }; + return result.result; + }); + } + + deriveBits(algorithm: unknown, baseKey: SandboxCryptoKey, length: number): Promise { + assertSubtleReceiver(this); + + return Promise.resolve().then(() => { + const result = JSON.parse( + subtleCall({ + op: "deriveBits", + algorithm: normalizeBridgeAlgorithm(algorithm), + baseKey: baseKey._keyData, + length, + }), + ) as { data: string }; + return toArrayBuffer(result.data); + }); + } + + deriveKey( + algorithm: unknown, + baseKey: SandboxCryptoKey, + derivedKeyAlgorithm: unknown, + extractable: boolean, + keyUsages: Iterable, + ): Promise { + assertSubtleReceiver(this); + + return Promise.resolve().then(() => { + const result = JSON.parse( + subtleCall({ + op: "deriveKey", + algorithm: normalizeBridgeAlgorithm(algorithm), + baseKey: baseKey._keyData, + derivedKeyAlgorithm: normalizeBridgeAlgorithm(derivedKeyAlgorithm), + extractable, + usages: Array.from(keyUsages), + }), + ) as { key: SandboxCryptoKeyData }; + return createCryptoKey(result.key); + }); + } + + wrapKey( + format: string, + key: SandboxCryptoKey, + wrappingKey: SandboxCryptoKey, + wrapAlgorithm: unknown, + ): Promise { + assertSubtleReceiver(this); + + return Promise.resolve().then(() => { + const result = JSON.parse( + subtleCall({ + op: "wrapKey", + format, + key: key._keyData, + wrappingKey: wrappingKey._keyData, + wrapAlgorithm: normalizeBridgeAlgorithm(wrapAlgorithm), + }), + ) as { data: string }; + return toArrayBuffer(result.data); + }); + } + + unwrapKey( + format: string, + wrappedKey: BufferSource, + unwrappingKey: SandboxCryptoKey, + unwrapAlgorithm: unknown, + unwrappedKeyAlgorithm: unknown, + extractable: boolean, + keyUsages: Iterable, + ): Promise { + assertSubtleReceiver(this); + + return Promise.resolve().then(() => { + const result = JSON.parse( + subtleCall({ + op: "unwrapKey", + format, + wrappedKey: toBase64(wrappedKey), + unwrappingKey: unwrappingKey._keyData, + unwrapAlgorithm: normalizeBridgeAlgorithm(unwrapAlgorithm), + unwrappedKeyAlgorithm: normalizeBridgeAlgorithm(unwrappedKeyAlgorithm), + extractable, + usages: Array.from(keyUsages), + }), + ) as { key: SandboxCryptoKeyData }; + return createCryptoKey(result.key); + }); + } +} + +const subtleCrypto = new SandboxSubtleCrypto(kSubtleToken); + +class SandboxCrypto { + readonly _token: symbol; + + constructor(token?: symbol) { + if (token !== kCryptoToken) { + throw createNodeTypeError("Illegal constructor", ERR_ILLEGAL_CONSTRUCTOR); + } + + this._token = token; + } + + get subtle(): SandboxSubtleCrypto { + assertCryptoReceiver(this); + return subtleCrypto; + } + + getRandomValues(array: T): T { + assertCryptoReceiver(this); + + if (!isIntegerTypedArray(array)) { + throw createDomLikeError( + "TypeMismatchError", + 17, + "The data argument must be an integer-type TypedArray", + ); + } + + if (typeof _cryptoRandomFill === "undefined") { + throwUnsupportedCryptoApi("getRandomValues"); + } + if (array.byteLength > 65536) { + throw createDomLikeError( + "QuotaExceededError", + 22, + `The ArrayBufferView's byte length (${array.byteLength}) exceeds the number of bytes of entropy available via this API (65536)`, + ); + } + + const bytes = new Uint8Array(array.buffer, array.byteOffset, array.byteLength); + try { + const base64 = _cryptoRandomFill.applySync(undefined, [bytes.byteLength]); + const hostBytes = BufferPolyfill.from(base64, "base64"); + if (hostBytes.byteLength !== bytes.byteLength) { + throw new Error("invalid host entropy size"); + } + bytes.set(hostBytes); + return array; + } catch { + throwUnsupportedCryptoApi("getRandomValues"); + } + } + + randomUUID(): string { + assertCryptoReceiver(this); + + if (typeof _cryptoRandomUUID === "undefined") { + throwUnsupportedCryptoApi("randomUUID"); + } + try { + const uuid = _cryptoRandomUUID.applySync(undefined, []); + if (typeof uuid !== "string") { + throw new Error("invalid host uuid"); + } + return uuid; + } catch { + throwUnsupportedCryptoApi("randomUUID"); + } + } +} + +const cryptoPolyfillInstance = new SandboxCrypto(kCryptoToken); + /** * Crypto polyfill that delegates to the host for entropy. `getRandomValues` * calls the host's `_cryptoRandomFill` bridge to get cryptographically secure - * random bytes. Subtle crypto operations are unsupported. + * random bytes. Subtle crypto operations route through the host WebCrypto bridge. */ -export const cryptoPolyfill = { - getRandomValues(array: T): T { - if (typeof _cryptoRandomFill === "undefined") { - throwUnsupportedCryptoApi("getRandomValues"); - } - // Web Crypto API spec caps getRandomValues at 65536 bytes. - if (array.byteLength > 65536) { - throw new RangeError( - `The ArrayBufferView's byte length (${array.byteLength}) exceeds the number of bytes of entropy available via this API (65536)` - ); - } - const bytes = new Uint8Array( - array.buffer, - array.byteOffset, - array.byteLength - ); - try { - const base64 = _cryptoRandomFill.applySync(undefined, [bytes.byteLength]); - const hostBytes = BufferPolyfill.from(base64, "base64"); - if (hostBytes.byteLength !== bytes.byteLength) { - throw new Error("invalid host entropy size"); - } - bytes.set(hostBytes); - return array; - } catch { - throwUnsupportedCryptoApi("getRandomValues"); - } - }, - - randomUUID(): string { - if (typeof _cryptoRandomUUID === "undefined") { - throwUnsupportedCryptoApi("randomUUID"); - } - try { - const uuid = _cryptoRandomUUID.applySync(undefined, []); - if (typeof uuid !== "string") { - throw new Error("invalid host uuid"); - } - return uuid; - } catch { - throwUnsupportedCryptoApi("randomUUID"); - } - }, - - subtle: { - digest(): Promise { - throw new Error("crypto.subtle.digest is not supported in sandbox"); - }, - encrypt(): Promise { - throw new Error("crypto.subtle.encrypt is not supported in sandbox"); - }, - decrypt(): Promise { - throw new Error("crypto.subtle.decrypt is not supported in sandbox"); - }, - }, -}; +export const cryptoPolyfill = cryptoPolyfillInstance; /** * Install all process/timer/URL/Buffer/crypto polyfills onto `globalThis`. @@ -1284,6 +1804,16 @@ export function setupGlobals(): void { } // Crypto + if (typeof g.Crypto === "undefined") { + g.Crypto = SandboxCrypto; + } + if (typeof g.SubtleCrypto === "undefined") { + g.SubtleCrypto = SandboxSubtleCrypto; + } + if (typeof g.CryptoKey === "undefined") { + g.CryptoKey = SandboxCryptoKey; + } + if (typeof g.crypto === "undefined") { g.crypto = cryptoPolyfill; } else { @@ -1294,5 +1824,8 @@ export function setupGlobals(): void { if (typeof cryptoObj.randomUUID === "undefined") { cryptoObj.randomUUID = cryptoPolyfill.randomUUID; } + if (typeof cryptoObj.subtle === "undefined") { + cryptoObj.subtle = cryptoPolyfill.subtle; + } } } diff --git a/packages/nodejs/src/execution-driver.ts b/packages/nodejs/src/execution-driver.ts index 58aed808..fa8f5e89 100644 --- a/packages/nodejs/src/execution-driver.ts +++ b/packages/nodejs/src/execution-driver.ts @@ -266,19 +266,34 @@ function buildBridgeDispatchShim(): string { return ` (function() { var _origApply = Function.prototype.apply; + function encodeDispatchArgs(args) { + return JSON.stringify(args, function(_key, value) { + if (value === undefined) { + return { __secureExecDispatchType: 'undefined' }; + } + return value; + }); + } var names = ${JSON.stringify(allGlobals)}; for (var i = 0; i < names.length; i++) { var name = names[i]; if (typeof globalThis[name] === 'function') continue; (function(n) { + function reviveDispatchError(payload) { + var error = new Error(payload && payload.message ? payload.message : String(payload)); + if (payload && payload.name) error.name = payload.name; + if (payload && payload.code !== undefined) error.code = payload.code; + if (payload && payload.stack) error.stack = payload.stack; + return error; + } var fn = function() { var args = Array.prototype.slice.call(arguments); - var encoded = "__bd:" + n + ":" + JSON.stringify(args); + var encoded = "__bd:" + n + ":" + encodeDispatchArgs(args); var resultJson = _loadPolyfill.applySyncPromise(undefined, [encoded]); if (resultJson === null) return undefined; try { var parsed = JSON.parse(resultJson); - if (parsed.__bd_error) throw new Error(parsed.__bd_error); + if (parsed.__bd_error) throw reviveDispatchError(parsed.__bd_error); return parsed.__bd_result; } catch (e) { if (e.message && e.message.startsWith('No handler:')) return undefined; diff --git a/packages/secure-exec/tests/e2e-docker.test.ts b/packages/secure-exec/tests/e2e-docker.test.ts index 2810a06b..93ad997b 100644 --- a/packages/secure-exec/tests/e2e-docker.test.ts +++ b/packages/secure-exec/tests/e2e-docker.test.ts @@ -1,18 +1,9 @@ -import { execFile } from "node:child_process"; -import { createHash } from "node:crypto"; import { - access, - cp, - mkdir, readFile, readdir, - rename, - rm, - writeFile, } from "node:fs/promises"; import path from "node:path"; import { fileURLToPath } from "node:url"; -import { promisify } from "node:util"; import { afterAll, beforeAll, describe, expect, it } from "vitest"; import { allowAllEnv, @@ -21,6 +12,19 @@ import { createDefaultNetworkAdapter, NodeFileSystem, } from "../src/index.js"; +import { + assertPathExists, + type CapturedConsoleEvent, + formatConsoleChannel, + formatErrorOutput, + isRecord, + normalizeEnvelope, + pathExists, + type PreparedFixture, + prepareFixtureProject as prepareSharedFixtureProject, + type ResultEnvelope, + runHostExecution, +} from "./project-matrix/shared.js"; import { createTestNodeRuntime } from "./test-utils.js"; import { buildImage, @@ -30,10 +34,7 @@ import { type Container, } from "./utils/docker.js"; -const execFileAsync = promisify(execFile); const TEST_TIMEOUT_MS = 55_000; -const COMMAND_TIMEOUT_MS = 45_000; -const CACHE_READY_MARKER = ".ready"; const TESTS_ROOT = path.dirname(fileURLToPath(import.meta.url)); const PACKAGE_ROOT = path.resolve(TESTS_ROOT, ".."); @@ -79,23 +80,6 @@ type FixtureProject = { metadata: FixtureMetadata; }; -type PreparedFixture = { - cacheHit: boolean; - cacheKey: string; - projectDir: string; -}; - -type ResultEnvelope = { - code: number; - stdout: string; - stderr: string; -}; - -type CapturedConsoleEvent = { - channel: "stdout" | "stderr"; - message: string; -}; - type ServiceConnection = { host: string; port: number }; type ServiceConnections = Partial>; @@ -243,6 +227,7 @@ describe.skipIf(skipReason)("e2e-docker", () => { fixture.metadata.entry, serviceEnv, ); + assertHostFixtureBaseline(host); const sandbox = await runSandboxExecution( prepared.projectDir, fixture.metadata.entry, @@ -257,7 +242,6 @@ describe.skipIf(skipReason)("e2e-docker", () => { } // Fail expectation: host should succeed, sandbox should fail predictably - expect(host.code).toBe(0); expect(sandbox.code).toBe(fixture.metadata.fail.code); expect(sandbox.stderr).toContain( fixture.metadata.fail.stderrIncludes, @@ -268,6 +252,11 @@ describe.skipIf(skipReason)("e2e-docker", () => { } }); +function assertHostFixtureBaseline(host: ResultEnvelope): void { + // Validate the fixture in plain Node before treating any mismatch as a sandbox bug. + expect(host.code).toBe(0); +} + /* ------------------------------------------------------------------ */ /* Service env var injection */ /* ------------------------------------------------------------------ */ @@ -424,133 +413,18 @@ function parseFixtureMetadata( async function prepareFixtureProject( fixture: FixtureProject, ): Promise { - await mkdir(CACHE_ROOT, { recursive: true }); - const cacheKey = await createFixtureCacheKey(fixture); - const cacheDir = path.join(CACHE_ROOT, `${fixture.name}-${cacheKey}`); - const readyMarkerPath = path.join(cacheDir, CACHE_READY_MARKER); - - if (await pathExists(readyMarkerPath)) { - return { cacheHit: true, cacheKey, projectDir: cacheDir }; - } - - if (await pathExists(cacheDir)) { - await rm(cacheDir, { recursive: true, force: true }); - } - - // Prepare staging directory and install deps - const stagingDir = `${cacheDir}.tmp-${process.pid}-${Date.now()}`; - await rm(stagingDir, { recursive: true, force: true }); - await cp(fixture.sourceDir, stagingDir, { - recursive: true, - filter: (source) => !isNodeModulesPath(source), + return prepareSharedFixtureProject({ + cacheRoot: CACHE_ROOT, + workspaceRoot: WORKSPACE_ROOT, + fixtureName: fixture.name, + sourceDir: fixture.sourceDir, }); - - await execFileAsync( - "pnpm", - ["install", "--ignore-workspace", "--prefer-offline"], - { - cwd: stagingDir, - timeout: COMMAND_TIMEOUT_MS, - maxBuffer: 10 * 1024 * 1024, - }, - ); - - await writeFile( - path.join(stagingDir, CACHE_READY_MARKER), - `${new Date().toISOString()}\n`, - ); - - // Promote staging to cache - try { - await rename(stagingDir, cacheDir); - } catch (error) { - const code = - error && typeof error === "object" && "code" in error - ? String(error.code) - : ""; - if (code !== "EEXIST") throw error; - await rm(stagingDir, { recursive: true, force: true }); - if (!(await pathExists(readyMarkerPath))) { - throw new Error( - `Cache entry race produced missing ready marker: ${cacheDir}`, - ); - } - } - - return { cacheHit: false, cacheKey, projectDir: cacheDir }; -} - -async function createFixtureCacheKey( - fixture: FixtureProject, -): Promise { - const hash = createHash("sha256"); - const nodeMajor = process.versions.node.split(".")[0] ?? "0"; - hash.update(`node-major:${nodeMajor}\n`); - hash.update(`platform:${process.platform}\n`); - hash.update(`arch:${process.arch}\n`); - - await hashOptionalFile( - hash, - "workspace-lock", - path.join(WORKSPACE_ROOT, "pnpm-lock.yaml"), - ); - await hashOptionalFile( - hash, - "fixture-package", - path.join(fixture.sourceDir, "package.json"), - ); - await hashOptionalFile( - hash, - "fixture-lock", - path.join(fixture.sourceDir, "pnpm-lock.yaml"), - ); - - const files = await listFixtureFiles(fixture.sourceDir); - for (const relativePath of files) { - const absolutePath = path.join(fixture.sourceDir, relativePath); - const content = await readFile(absolutePath); - hash.update(`fixture-file:${toPosixPath(relativePath)}\n`); - hash.update(content); - hash.update("\n"); - } - - return hash.digest("hex").slice(0, 16); } /* ------------------------------------------------------------------ */ /* Execution */ /* ------------------------------------------------------------------ */ -function formatConsoleChannel( - events: CapturedConsoleEvent[], - channel: CapturedConsoleEvent["channel"], -): string { - const lines = events - .filter((event) => event.channel === channel) - .map((event) => event.message); - return lines.join("\n") + (lines.length > 0 ? "\n" : ""); -} - -function formatErrorOutput(errorMessage: string | undefined): string { - if (!errorMessage) return ""; - return errorMessage.endsWith("\n") ? errorMessage : `${errorMessage}\n`; -} - -async function runHostExecution( - projectDir: string, - entryRelativePath: string, - serviceEnv: Record, -): Promise { - const entryPath = path.join(projectDir, entryRelativePath); - const result = await runCommand( - process.execPath, - [entryPath], - projectDir, - serviceEnv, - ); - return normalizeEnvelope(result, projectDir); -} - async function runSandboxExecution( projectDir: string, entryRelativePath: string, @@ -594,144 +468,6 @@ async function runSandboxExecution( } } -async function runCommand( - command: string, - args: string[], - cwd: string, - extraEnv: Record, -): Promise { - try { - const result = await execFileAsync(command, args, { - cwd, - timeout: COMMAND_TIMEOUT_MS, - maxBuffer: 10 * 1024 * 1024, - env: { ...process.env, ...extraEnv }, - }); - return { code: 0, stdout: result.stdout, stderr: result.stderr }; - } catch (error: unknown) { - if (!isExecError(error)) throw error; - return { - code: typeof error.code === "number" ? error.code : 1, - stdout: typeof error.stdout === "string" ? error.stdout : "", - stderr: typeof error.stderr === "string" ? error.stderr : "", - }; - } -} - -/* ------------------------------------------------------------------ */ -/* Normalization */ -/* ------------------------------------------------------------------ */ - -function normalizeEnvelope( - envelope: ResultEnvelope, - projectDir: string, -): ResultEnvelope { - return { - code: envelope.code, - stdout: normalizeText(envelope.stdout, projectDir), - stderr: normalizeText(envelope.stderr, projectDir), - }; -} - -function normalizeText(value: string, projectDir: string): string { - const normalized = value.replace(/\r\n/g, "\n"); - const projectDirPosix = toPosixPath(projectDir); - return normalized - .split(projectDir) - .join("") - .split(projectDirPosix) - .join(""); -} - /* ------------------------------------------------------------------ */ /* Helpers */ /* ------------------------------------------------------------------ */ - -async function hashOptionalFile( - hash: ReturnType, - label: string, - filePath: string, -): Promise { - hash.update(`${label}:`); - try { - const content = await readFile(filePath); - hash.update(content); - } catch (error) { - if (!isNotFoundError(error)) throw error; - hash.update(""); - } - hash.update("\n"); -} - -async function listFixtureFiles(rootDir: string): Promise { - const files: string[] = []; - - async function walk(relativeDir: string): Promise { - const directory = path.join(rootDir, relativeDir); - const entries = await readdir(directory, { withFileTypes: true }); - const sortedEntries = entries - .filter((entry) => !isNodeModulesPath(entry.name)) - .sort((left, right) => left.name.localeCompare(right.name)); - - for (const entry of sortedEntries) { - const relativePath = relativeDir - ? path.join(relativeDir, entry.name) - : entry.name; - if (entry.isDirectory()) { - await walk(relativePath); - continue; - } - if (entry.isFile()) files.push(relativePath); - } - } - - await walk(""); - return files.sort((left, right) => left.localeCompare(right)); -} - -async function assertPathExists( - pathname: string, - message: string, -): Promise { - try { - await access(pathname); - } catch { - throw new Error(message); - } -} - -async function pathExists(pathname: string): Promise { - try { - await access(pathname); - return true; - } catch { - return false; - } -} - -function isNodeModulesPath(value: string): boolean { - return value.split(path.sep).includes("node_modules"); -} - -function isRecord(value: unknown): value is Record { - return Boolean(value) && typeof value === "object" && !Array.isArray(value); -} - -function isNotFoundError(value: unknown): boolean { - return ( - Boolean(value) && - typeof value === "object" && - "code" in value && - String(value.code) === "ENOENT" - ); -} - -function isExecError( - value: unknown, -): value is { code?: number; stdout?: string; stderr?: string } { - return Boolean(value) && typeof value === "object" && "stdout" in value; -} - -function toPosixPath(value: string): string { - return value.split(path.sep).join(path.posix.sep); -} diff --git a/packages/secure-exec/tests/kernel/e2e-project-matrix.test.ts b/packages/secure-exec/tests/kernel/e2e-project-matrix.test.ts index 4ade98c0..57c06880 100644 --- a/packages/secure-exec/tests/kernel/e2e-project-matrix.test.ts +++ b/packages/secure-exec/tests/kernel/e2e-project-matrix.test.ts @@ -2,388 +2,161 @@ * E2E project-matrix test: run existing fixture projects through the kernel. * * For each fixture in tests/projects/: - * 1. Prepare project (npm install, cached by content hash) + * 1. Prepare project (package-manager install, cached by content hash) * 2. Run entry via host Node (baseline) * 3. Run entry via kernel (NodeFileSystem rooted at project dir, WasmVM + Node) * 4. Compare output parity - * - * Uses relative imports to avoid cyclic package dependencies. */ -import { execFile } from 'node:child_process'; -import { createHash } from 'node:crypto'; -import { access, cp, mkdir, readFile, readdir, rename, rm, writeFile } from 'node:fs/promises'; -import path from 'node:path'; -import { fileURLToPath } from 'node:url'; -import { promisify } from 'node:util'; -import { describe, expect, it } from 'vitest'; -import { createKernel } from '../../../kernel/src/index.ts'; -import { NodeFileSystem } from '../../../os/node/src/index.ts'; -import { createWasmVmRuntime } from '../../../runtime/wasmvm/src/index.ts'; -import { createNodeRuntime } from '../../../runtime/node/src/index.ts'; -import { skipUnlessWasmBuilt } from './helpers.ts'; +import { readFile, readdir } from "node:fs/promises"; +import path from "node:path"; +import { fileURLToPath } from "node:url"; +import { describe, expect, it } from "vitest"; +import { createKernel } from "../../../core/src/index.ts"; +import { + createNodeRuntime, + NodeFileSystem, +} from "../../../nodejs/src/index.ts"; +import { createWasmVmRuntime } from "../../../wasmvm/src/index.ts"; +import { + assertPathExists, + type PackageManagerFixtureMetadata, + parsePackageManagerFixtureMetadata, + type PreparedFixture, + type ResultEnvelope, + prepareFixtureProject as prepareSharedFixtureProject, + runHostExecution, +} from "../project-matrix/shared.js"; +import { skipUnlessWasmBuilt } from "./helpers.ts"; -const execFileAsync = promisify(execFile); const TEST_TIMEOUT_MS = 55_000; -const COMMAND_TIMEOUT_MS = 45_000; -const CACHE_READY_MARKER = '.ready'; const __dirname = path.dirname(fileURLToPath(import.meta.url)); -const TESTS_ROOT = path.resolve(__dirname, '..'); -const PACKAGE_ROOT = path.resolve(TESTS_ROOT, '..'); -const WORKSPACE_ROOT = path.resolve(PACKAGE_ROOT, '..', '..'); -const FIXTURES_ROOT = path.join(TESTS_ROOT, 'projects'); -const CACHE_ROOT = path.join(PACKAGE_ROOT, '.cache', 'project-matrix'); +const TESTS_ROOT = path.resolve(__dirname, ".."); +const PACKAGE_ROOT = path.resolve(TESTS_ROOT, ".."); +const WORKSPACE_ROOT = path.resolve(PACKAGE_ROOT, "..", ".."); +const FIXTURES_ROOT = path.join(TESTS_ROOT, "projects"); +const CACHE_ROOT = path.join(PACKAGE_ROOT, ".cache", "project-matrix"); const COMMANDS_DIR = path.resolve( - __dirname, - '../../../../wasmvm/target/wasm32-wasip1/release/commands', + __dirname, + "../../../wasmvm/target/wasm32-wasip1/release/commands", ); -// --------------------------------------------------------------------------- -// Types (same schema as project-matrix.test.ts) -// --------------------------------------------------------------------------- - -type PackageManager = 'pnpm' | 'npm' | 'bun' | 'yarn'; -type PassFixtureMetadata = { entry: string; expectation: 'pass'; packageManager?: PackageManager }; -type FailFixtureMetadata = { - entry: string; - expectation: 'fail'; - fail: { code: number; stderrIncludes: string }; - packageManager?: PackageManager; +type FixtureProject = { + name: string; + sourceDir: string; + metadata: PackageManagerFixtureMetadata; }; -type FixtureMetadata = PassFixtureMetadata | FailFixtureMetadata; -type FixtureProject = { name: string; sourceDir: string; metadata: FixtureMetadata }; -type PreparedFixture = { cacheHit: boolean; cacheKey: string; projectDir: string }; -type ResultEnvelope = { code: number; stdout: string; stderr: string }; - -// --------------------------------------------------------------------------- -// Fixture discovery (same logic as project-matrix.test.ts) -// --------------------------------------------------------------------------- async function discoverFixtures(): Promise { - const entries = await readdir(FIXTURES_ROOT, { withFileTypes: true }); - const fixtureDirs = entries - .filter((e) => e.isDirectory()) - .map((e) => e.name) - .sort((a, b) => a.localeCompare(b)); - - const fixtures: FixtureProject[] = []; - for (const name of fixtureDirs) { - const sourceDir = path.join(FIXTURES_ROOT, name); - const metaPath = path.join(sourceDir, 'fixture.json'); - const raw = JSON.parse(await readFile(metaPath, 'utf8')); - const metadata = parseMetadata(raw, name); - fixtures.push({ name, sourceDir, metadata }); - } - return fixtures; -} - -function parseMetadata(raw: Record, name: string): FixtureMetadata { - const entry = raw.entry as string; - const packageManager = raw.packageManager as PackageManager | undefined; - if (raw.expectation === 'pass') return { entry, expectation: 'pass', ...(packageManager && { packageManager }) }; - const fail = raw.fail as { code: number; stderrIncludes: string }; - return { entry, expectation: 'fail', fail, ...(packageManager && { packageManager }) }; -} - -// --------------------------------------------------------------------------- -// Fixture preparation (reuses same cache as project-matrix.test.ts) -// --------------------------------------------------------------------------- - -async function prepareFixtureProject(fixture: FixtureProject): Promise { - await mkdir(CACHE_ROOT, { recursive: true }); - const cacheKey = await createFixtureCacheKey(fixture); - const cacheDir = path.join(CACHE_ROOT, `${fixture.name}-${cacheKey}`); - const readyMarker = path.join(cacheDir, CACHE_READY_MARKER); - - if (await pathExists(readyMarker)) { - return { cacheHit: true, cacheKey, projectDir: cacheDir }; - } - - // Reset stale entries - if (await pathExists(cacheDir)) { - await rm(cacheDir, { recursive: true, force: true }); - } - - // Stage and install - const staging = `${cacheDir}.tmp-${process.pid}-${Date.now()}`; - await rm(staging, { recursive: true, force: true }); - await cp(fixture.sourceDir, staging, { - recursive: true, - filter: (src) => !src.split(path.sep).includes('node_modules'), - }); - const pm = fixture.metadata.packageManager ?? 'pnpm'; - const installCmd = - pm === 'npm' - ? { cmd: 'npm', args: ['install', '--prefer-offline'] } - : pm === 'bun' - ? { cmd: 'bun', args: ['install'] } - : pm === 'yarn' - ? await getYarnInstallCmd(staging) - : { cmd: 'pnpm', args: ['install', '--ignore-workspace', '--prefer-offline'] }; - await execFileAsync(installCmd.cmd, installCmd.args, { - cwd: staging, - timeout: COMMAND_TIMEOUT_MS, - maxBuffer: 10 * 1024 * 1024, - ...(pm === 'yarn' && { env: yarnEnv }), - }); - await writeFile(path.join(staging, CACHE_READY_MARKER), `${new Date().toISOString()}\n`); - - // Promote - try { - await rename(staging, cacheDir); - } catch (err: unknown) { - const code = err && typeof err === 'object' && 'code' in err ? String(err.code) : ''; - if (code !== 'EEXIST') throw err; - await rm(staging, { recursive: true, force: true }); - if (!(await pathExists(readyMarker))) { - throw new Error(`Cache race: missing ready marker at ${cacheDir}`); - } - } - - return { cacheHit: false, cacheKey, projectDir: cacheDir }; + const entries = await readdir(FIXTURES_ROOT, { withFileTypes: true }); + const fixtureDirs = entries + .filter((entry) => entry.isDirectory()) + .map((entry) => entry.name) + .sort((left, right) => left.localeCompare(right)); + + const fixtures: FixtureProject[] = []; + for (const fixtureName of fixtureDirs) { + const sourceDir = path.join(FIXTURES_ROOT, fixtureName); + const metadataPath = path.join(sourceDir, "fixture.json"); + const metadataText = await readFile(metadataPath, "utf8"); + const metadata = parsePackageManagerFixtureMetadata( + JSON.parse(metadataText) as unknown, + fixtureName, + ); + const entryPath = path.join(sourceDir, metadata.entry); + await assertPathExists( + entryPath, + `Fixture "${fixtureName}" entry file not found: ${metadata.entry}`, + ); + await assertPathExists( + path.join(sourceDir, "package.json"), + `Fixture "${fixtureName}" requires package.json`, + ); + fixtures.push({ name: fixtureName, sourceDir, metadata }); + } + + return fixtures; +} + +async function prepareFixtureProject( + fixture: FixtureProject, +): Promise { + return prepareSharedFixtureProject({ + cacheRoot: CACHE_ROOT, + workspaceRoot: WORKSPACE_ROOT, + fixtureName: fixture.name, + sourceDir: fixture.sourceDir, + packageManager: fixture.metadata.packageManager, + }); +} + +async function runKernelExecution( + projectDir: string, + entryRelativePath: string, +): Promise { + const vfs = new NodeFileSystem({ root: projectDir }); + const kernel = createKernel({ filesystem: vfs, cwd: "/" }); + + await kernel.mount(createWasmVmRuntime({ commandDirs: [COMMANDS_DIR] })); + await kernel.mount(createNodeRuntime()); + + try { + const vfsEntry = `/${entryRelativePath.replace(/\\/g, "/")}`; + const result = await kernel.exec(`node ${vfsEntry}`, { cwd: "/" }); + return { + code: result.exitCode, + stdout: result.stdout, + stderr: result.stderr, + }; + } finally { + await kernel.dispose(); + } +} + +function assertHostFixtureBaseline(host: ResultEnvelope): void { + // Validate the fixture in plain Node before treating any mismatch as a sandbox bug. + expect(host.code).toBe(0); } -async function createFixtureCacheKey(fixture: FixtureProject): Promise { - const hash = createHash('sha256'); - const nodeMajor = process.versions.node.split('.')[0] ?? '0'; - const pm = fixture.metadata.packageManager ?? 'pnpm'; - const pmVersion = - pm === 'npm' - ? await getNpmVersion() - : pm === 'bun' - ? await getBunVersion() - : pm === 'yarn' - ? await getYarnVersion() - : await getPnpmVersion(); - hash.update(`node-major:${nodeMajor}\n`); - hash.update(`pm:${pm}\n`); - hash.update(`pm-version:${pmVersion}\n`); - hash.update(`platform:${process.platform}\n`); - hash.update(`arch:${process.arch}\n`); - - const lockFile = - pm === 'npm' - ? 'package-lock.json' - : pm === 'bun' - ? 'bun.lock' - : pm === 'yarn' - ? 'yarn.lock' - : 'pnpm-lock.yaml'; - for (const [label, filePath] of [ - ['workspace-lock', path.join(WORKSPACE_ROOT, 'pnpm-lock.yaml')], - ['workspace-package', path.join(WORKSPACE_ROOT, 'package.json')], - ['fixture-package', path.join(fixture.sourceDir, 'package.json')], - ['fixture-lock', path.join(fixture.sourceDir, lockFile)], - ]) { - hash.update(`${label}:`); - try { hash.update(await readFile(filePath)); } catch { hash.update(''); } - hash.update('\n'); - } - - const files = await listFiles(fixture.sourceDir); - for (const rel of files) { - hash.update(`fixture-file:${rel.split(path.sep).join('/')}\n`); - hash.update(await readFile(path.join(fixture.sourceDir, rel))); - hash.update('\n'); - } - - return hash.digest('hex').slice(0, 16); -} - -let _pnpmVersionPromise: Promise | undefined; -function getPnpmVersion(): Promise { - if (!_pnpmVersionPromise) { - _pnpmVersionPromise = execFileAsync('pnpm', ['--version'], { - cwd: WORKSPACE_ROOT, - timeout: COMMAND_TIMEOUT_MS, - }).then((r) => r.stdout.trim()); - } - return _pnpmVersionPromise; -} - -let _npmVersionPromise: Promise | undefined; -function getNpmVersion(): Promise { - if (!_npmVersionPromise) { - _npmVersionPromise = execFileAsync('npm', ['--version'], { - cwd: WORKSPACE_ROOT, - timeout: COMMAND_TIMEOUT_MS, - }).then((r) => r.stdout.trim()); - } - return _npmVersionPromise; -} - -let _bunVersionPromise: Promise | undefined; -function getBunVersion(): Promise { - if (!_bunVersionPromise) { - _bunVersionPromise = execFileAsync('bun', ['--version'], { - cwd: WORKSPACE_ROOT, - timeout: COMMAND_TIMEOUT_MS, - }).then((r) => r.stdout.trim()); - } - return _bunVersionPromise; -} - -let _yarnVersionPromise: Promise | undefined; -// Bypass corepack packageManager enforcement so yarn runs in a pnpm workspace. -const yarnEnv = { ...process.env, COREPACK_ENABLE_STRICT: '0' }; -function getYarnVersion(): Promise { - if (!_yarnVersionPromise) { - _yarnVersionPromise = execFileAsync('yarn', ['--version'], { - cwd: WORKSPACE_ROOT, - timeout: COMMAND_TIMEOUT_MS, - env: yarnEnv, - }).then((r) => r.stdout.trim()); - } - return _yarnVersionPromise; -} - -async function getYarnInstallCmd( - projectDir: string, -): Promise<{ cmd: string; args: string[] }> { - const isBerry = await pathExists(path.join(projectDir, '.yarnrc.yml')); - return isBerry - ? { cmd: 'yarn', args: ['install', '--immutable'] } - : { cmd: 'yarn', args: ['install'] }; -} - -async function listFiles(root: string): Promise { - const result: string[] = []; - async function walk(rel: string): Promise { - const dir = path.join(root, rel); - const entries = await readdir(dir, { withFileTypes: true }); - for (const e of entries.sort((a, b) => a.name.localeCompare(b.name))) { - if (e.name === 'node_modules') continue; - const p = rel ? path.join(rel, e.name) : e.name; - if (e.isDirectory()) await walk(p); - else if (e.isFile()) result.push(p); - } - } - await walk(''); - return result.sort((a, b) => a.localeCompare(b)); -} - -// --------------------------------------------------------------------------- -// Host execution (baseline — same as project-matrix.test.ts) -// --------------------------------------------------------------------------- - -async function runHostExecution(projectDir: string, entryRel: string): Promise { - const entryPath = path.join(projectDir, entryRel); - return normalizeEnvelope(await runCommand(process.execPath, [entryPath], projectDir), projectDir); -} - -async function runCommand(cmd: string, args: string[], cwd: string): Promise { - try { - const r = await execFileAsync(cmd, args, { cwd, timeout: COMMAND_TIMEOUT_MS, maxBuffer: 10 * 1024 * 1024 }); - return { code: 0, stdout: r.stdout, stderr: r.stderr }; - } catch (err: unknown) { - if (err && typeof err === 'object' && 'stdout' in err) { - const e = err as { code?: number; stdout?: string; stderr?: string }; - return { - code: typeof e.code === 'number' ? e.code : 1, - stdout: typeof e.stdout === 'string' ? e.stdout : '', - stderr: typeof e.stderr === 'string' ? e.stderr : '', - }; - } - throw err; - } -} - -// --------------------------------------------------------------------------- -// Kernel execution -// --------------------------------------------------------------------------- - -async function runKernelExecution(projectDir: string, entryRel: string): Promise { - // NodeFileSystem rooted at projectDir — require() resolves from node_modules on disk - const vfs = new NodeFileSystem({ root: projectDir }); - const kernel = createKernel({ filesystem: vfs, cwd: '/' }); - - await kernel.mount(createWasmVmRuntime({ commandDirs: [COMMANDS_DIR] })); - await kernel.mount(createNodeRuntime()); - - try { - const vfsEntry = '/' + entryRel.replace(/\\/g, '/'); - const result = await kernel.exec(`node ${vfsEntry}`, { cwd: '/' }); - return normalizeEnvelope( - { code: result.exitCode, stdout: result.stdout, stderr: result.stderr }, - projectDir, - ); - } finally { - await kernel.dispose(); - } -} - -// --------------------------------------------------------------------------- -// Output normalization (same as project-matrix.test.ts) -// --------------------------------------------------------------------------- - -function normalizeEnvelope(envelope: ResultEnvelope, projectDir: string): ResultEnvelope { - return { - code: envelope.code, - stdout: normalizeText(envelope.stdout, projectDir), - stderr: normalizeText(envelope.stderr, projectDir), - }; -} - -function normalizeText(value: string, projectDir: string): string { - const normalized = value.replace(/\r\n/g, '\n'); - const posixDir = projectDir.split(path.sep).join(path.posix.sep); - return normalizeModuleNotFoundText( - normalized.split(projectDir).join('').split(posixDir).join(''), - ); -} - -function normalizeModuleNotFoundText(value: string): string { - if (!value.includes('Cannot find module')) return value; - const quoted = value.match(/Cannot find module '([^']+)'/); - if (quoted) return `Cannot find module '${quoted[1]}'\n`; - const from = value.match(/Cannot find module:\s*([^\s]+)\s+from\s+/); - if (from) return `Cannot find module '${from[1]}'\n`; - return value; -} - -// --------------------------------------------------------------------------- -// Helpers -// --------------------------------------------------------------------------- - -async function pathExists(p: string): Promise { - try { await access(p); return true; } catch { return false; } -} - -// --------------------------------------------------------------------------- -// Tests -// --------------------------------------------------------------------------- - const skipReason = skipUnlessWasmBuilt(); const discoveredFixtures = await discoverFixtures(); -describe.skipIf(skipReason)('e2e project-matrix through kernel', () => { - it('discovers at least one fixture project', () => { - expect(discoveredFixtures.length).toBeGreaterThan(0); - }); - - for (const fixture of discoveredFixtures) { - it( - `runs fixture ${fixture.name} through kernel with host-node parity`, - async () => { - const prepared = await prepareFixtureProject(fixture); - const host = await runHostExecution(prepared.projectDir, fixture.metadata.entry); - const kernel = await runKernelExecution(prepared.projectDir, fixture.metadata.entry); - - if (fixture.metadata.expectation === 'pass') { - expect(kernel.code).toBe(host.code); - expect(kernel.stdout).toBe(host.stdout); - expect(kernel.stderr).toBe(host.stderr); - return; - } - - // Fail fixtures: host succeeds, kernel enforces sandbox restrictions - expect(host.code).toBe(0); - expect(kernel.code).toBe(fixture.metadata.fail.code); - expect(kernel.stderr).toContain(fixture.metadata.fail.stderrIncludes); - }, - TEST_TIMEOUT_MS, - ); - } +describe.skipIf(skipReason)("e2e project-matrix through kernel", () => { + it("discovers at least one fixture project", () => { + expect(discoveredFixtures.length).toBeGreaterThan(0); + }); + + for (const fixture of discoveredFixtures) { + it( + `runs fixture ${fixture.name} through kernel with host-node parity`, + async () => { + const prepared = await prepareFixtureProject(fixture); + const host = await runHostExecution( + prepared.projectDir, + fixture.metadata.entry, + ); + assertHostFixtureBaseline(host); + + const kernel = await runKernelExecution( + prepared.projectDir, + fixture.metadata.entry, + ); + + if (fixture.metadata.expectation === "pass") { + expect(kernel.code).toBe(0); + expect(kernel.stdout).toBe(host.stdout); + expect(kernel.stderr).toBe(host.stderr); + return; + } + + expect(kernel.code).toBe(fixture.metadata.fail.code); + expect(kernel.stderr).toContain( + fixture.metadata.fail.stderrIncludes, + ); + }, + TEST_TIMEOUT_MS, + ); + } }); diff --git a/packages/secure-exec/tests/node-conformance/common/countdown.js b/packages/secure-exec/tests/node-conformance/common/countdown.js new file mode 100644 index 00000000..1f4c48e3 --- /dev/null +++ b/packages/secure-exec/tests/node-conformance/common/countdown.js @@ -0,0 +1,19 @@ +'use strict'; + +module.exports = class Countdown { + constructor(limit, callback) { + this.remaining = limit; + this.callback = callback; + } + + dec() { + if (this.remaining <= 0) { + return 0; + } + this.remaining -= 1; + if (this.remaining === 0 && typeof this.callback === 'function') { + this.callback(); + } + return this.remaining; + } +}; diff --git a/packages/secure-exec/tests/node-conformance/common/crypto.js b/packages/secure-exec/tests/node-conformance/common/crypto.js index 405e7f5c..c41aa5e0 100644 --- a/packages/secure-exec/tests/node-conformance/common/crypto.js +++ b/packages/secure-exec/tests/node-conformance/common/crypto.js @@ -1,17 +1,89 @@ 'use strict'; -// Crypto helper for Node.js conformance tests -// Sandbox uses crypto-browserify, not OpenSSL +const assert = require('assert'); +const crypto = require('crypto'); -function hasOpenSSL(major, minor) { - // crypto-browserify doesn't have OpenSSL version info - // Return false for all version checks — tests skip OpenSSL-specific sections - return false; +// Crypto helper shim for vendored Node.js conformance tests. +// Keep these helpers close to the upstream common/crypto.js surface used by +// the imported tests so keygen/sign/encrypt assertions can run unchanged. + +const opensslVersion = String(process.versions?.openssl || ''); +const opensslParts = opensslVersion + .replace(/[^0-9.].*$/, '') + .split('.') + .map((part) => Number(part) || 0); + +function hasOpenSSL(major, minor = 0, patch = 0) { + const [currentMajor, currentMinor, currentPatch] = opensslParts; + if (currentMajor > major) return true; + if (currentMajor < major) return false; + if (currentMinor > minor) return true; + if (currentMinor < minor) return false; + return currentPatch >= patch; +} + +const hasOpenSSL3 = hasOpenSSL(3, 0, 0); + +const pkcs1PubExp = /-----BEGIN RSA PUBLIC KEY-----/; +const pkcs1PrivExp = /-----BEGIN RSA PRIVATE KEY-----/; +const pkcs8Exp = /-----BEGIN PRIVATE KEY-----/; +const spkiExp = /-----BEGIN PUBLIC KEY-----/; +const sec1Exp = /-----BEGIN EC PRIVATE KEY-----/; +const pkcs8EncExp = /-----BEGIN ENCRYPTED PRIVATE KEY-----/; +function sec1EncExp(cipher) { + const suffix = cipher ? `[\\s\\S]*${cipher}` : ''; + return new RegExp(`-----BEGIN EC PRIVATE KEY-----${suffix}`, 'i'); +} + +function pkcs1EncExp(cipher) { + const suffix = cipher ? `[\\s\\S]*${cipher}` : ''; + return new RegExp(`-----BEGIN RSA PRIVATE KEY-----${suffix}`, 'i'); } -const hasOpenSSL3 = false; +function getValueSize(value) { + if (Buffer.isBuffer(value) || ArrayBuffer.isView(value)) { + return value.byteLength; + } + if (value instanceof ArrayBuffer) { + return value.byteLength; + } + return String(value).length; +} + +function assertApproximateSize(value, expected) { + const actual = getValueSize(value); + const tolerance = Math.max(32, Math.ceil(expected * 0.35)); + assert.ok( + Math.abs(actual - expected) <= tolerance, + `Expected size near ${expected}, got ${actual}` + ); +} + +function testEncryptDecrypt(publicKey, privateKey) { + const plaintext = Buffer.from('secure-exec'); + const encrypted = crypto.publicEncrypt(publicKey, plaintext); + const decrypted = crypto.privateDecrypt(privateKey, encrypted); + assert.ok(plaintext.equals(decrypted)); +} + +function testSignVerify(publicKey, privateKey) { + const plaintext = Buffer.from('secure-exec'); + const signature = crypto.sign('sha256', plaintext, privateKey); + assert.strictEqual(crypto.verify('sha256', plaintext, publicKey, signature), true); +} module.exports = { + assertApproximateSize, hasOpenSSL, hasOpenSSL3, + pkcs1EncExp, + pkcs1PrivExp, + pkcs1PubExp, + pkcs8Exp, + pkcs8EncExp, + sec1EncExp, + sec1Exp, + spkiExp, + testEncryptDecrypt, + testSignVerify, }; diff --git a/packages/secure-exec/tests/node-conformance/conformance-report.json b/packages/secure-exec/tests/node-conformance/conformance-report.json index e984d3f7..b6aee7e2 100644 --- a/packages/secure-exec/tests/node-conformance/conformance-report.json +++ b/packages/secure-exec/tests/node-conformance/conformance-report.json @@ -1,17 +1,17 @@ { "nodeVersion": "22.14.0", "sourceCommit": "v22.14.0", - "lastUpdated": "2026-03-25", - "generatedAt": "2026-03-25", + "lastUpdated": "2026-03-26", + "generatedAt": "2026-03-26", "summary": { "total": 3532, - "pass": 738, - "genuinePass": 704, - "vacuousPass": 34, - "fail": 2723, + "pass": 787, + "genuinePass": 754, + "vacuousPass": 33, + "fail": 2674, "skip": 71, - "passRate": "20.9%", - "genuinePassRate": "19.9%" + "passRate": "22.3%", + "genuinePassRate": "21.3%" }, "modules": { "abortcontroller": { @@ -247,9 +247,9 @@ }, "crypto": { "total": 99, - "pass": 16, - "vacuousPass": 13, - "fail": 83, + "pass": 56, + "vacuousPass": 12, + "fail": 43, "skip": 0 }, "cwd": { @@ -569,9 +569,9 @@ }, "global": { "total": 11, - "pass": 2, + "pass": 3, "vacuousPass": 0, - "fail": 9, + "fail": 8, "skip": 0 }, "h2": { @@ -618,9 +618,9 @@ }, "http": { "total": 377, - "pass": 237, + "pass": 243, "vacuousPass": 1, - "fail": 139, + "fail": 133, "skip": 1 }, "http2": { @@ -1395,9 +1395,9 @@ }, "webcrypto": { "total": 28, - "pass": 15, + "pass": 17, "vacuousPass": 0, - "fail": 13, + "fail": 11, "skip": 0 }, "websocket": { @@ -1472,14 +1472,14 @@ } }, "categories": { - "implementation-gap": 1422, + "implementation-gap": 1372, "native-addon": 3, "requires-exec-path": 200, "requires-v8-flags": 239, - "security-constraint": 1, + "security-constraint": 2, "test-infra": 68, - "unsupported-api": 124, - "unsupported-module": 737, - "vacuous-skip": 34 + "unsupported-api": 123, + "unsupported-module": 738, + "vacuous-skip": 33 } } diff --git a/packages/secure-exec/tests/node-conformance/expectations.json b/packages/secure-exec/tests/node-conformance/expectations.json index 1a31c935..0cfd05d0 100644 --- a/packages/secure-exec/tests/node-conformance/expectations.json +++ b/packages/secure-exec/tests/node-conformance/expectations.json @@ -1,7 +1,7 @@ { "nodeVersion": "22.14.0", "sourceCommit": "v22.14.0", - "lastUpdated": "2026-03-24", + "lastUpdated": "2026-03-25", "expectations": { "test-cluster-*.js": { "reason": "cluster module is Tier 5 (Unsupported) — require(cluster) throws by design", @@ -1856,16 +1856,6 @@ "category": "requires-exec-path", "expected": "fail" }, - "test-crypto-authenticated-stream.js": { - "reason": "CCM cipher mode requires authTagLength parameter — bridge does not support CCM-specific options (setAAD length, authTagLength)", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-cipheriv-decipheriv.js": { - "reason": "Cipheriv/Decipheriv constructors require 'new' keyword — calling without 'new' throws instead of returning new instance", - "category": "implementation-gap", - "expected": "fail" - }, "test-crypto-dh-constructor.js": { "reason": "DiffieHellman bridge does not handle 'buffer' encoding parameter — generateKeys/computeSecret fail", "category": "implementation-gap", @@ -1881,11 +1871,6 @@ "category": "implementation-gap", "expected": "fail" }, - "test-crypto-dh-generate-keys.js": { - "reason": "DiffieHellman.generateKeys() returns undefined instead of Buffer — bridge does not return key data", - "category": "implementation-gap", - "expected": "fail" - }, "test-crypto-dh-modp2-views.js": { "reason": "DiffieHellman.computeSecret() returns undefined instead of Buffer — bridge does not return computed secret", "category": "implementation-gap", @@ -1896,11 +1881,6 @@ "category": "implementation-gap", "expected": "fail" }, - "test-crypto-dh-padding.js": { - "reason": "DiffieHellman.computeSecret() produces incorrect result — key exchange computation has bridge-level fidelity gap", - "category": "implementation-gap", - "expected": "fail" - }, "test-crypto-dh-stateless.js": { "reason": "crypto.diffieHellman() stateless key exchange function not implemented in bridge", "category": "implementation-gap", @@ -1921,11 +1901,6 @@ "category": "unsupported-module", "expected": "fail" }, - "test-crypto-ecb.js": { - "reason": "uses Blowfish-ECB cipher which is unsupported by OpenSSL 3.x (legacy provider not enabled)", - "category": "implementation-gap", - "expected": "fail" - }, "test-crypto-ecdh-convert-key.js": { "reason": "ECDH.convertKey() error validation missing ERR_INVALID_ARG_TYPE error code on TypeError", "category": "implementation-gap", @@ -1941,198 +1916,33 @@ "category": "unsupported-module", "expected": "fail" }, - "test-crypto-key-objects-to-crypto-key.js": { - "reason": "KeyObject.toCryptoKey() method not implemented in bridge — cannot convert KeyObject to WebCrypto CryptoKey", - "category": "implementation-gap", - "expected": "fail" - }, "test-crypto-key-objects.js": { - "reason": "fs.readFileSync encoding argument handled as path component — test reads fixture PEM keys which fail to load", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-async-dsa-key-object.js": { - "reason": "DSA key generation fails — OpenSSL 'bad ffc parameters' error for DSA modulusLength/divisorLength combinations", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-async-dsa.js": { - "reason": "DSA key generation fails — OpenSSL 'bad ffc parameters' error for DSA modulusLength/divisorLength combinations", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-async-elliptic-curve-jwk-ec.js": { - "reason": "generateKeyPair with JWK encoding returns key as string instead of parsed object — bridge does not parse JWK output", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-async-elliptic-curve-jwk-rsa.js": { - "reason": "generateKeyPair with JWK encoding returns key as string instead of parsed object — bridge does not parse JWK output", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-async-elliptic-curve-jwk.js": { - "reason": "generateKeyPair with JWK encoding returns key as string instead of parsed object — bridge does not parse JWK output", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-async-encrypted-private-key-der.js": { - "reason": "generateKeyPair with encrypted DER private key encoding produces invalid output — key validation fails", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-async-encrypted-private-key.js": { - "reason": "generateKeyPair with encrypted PEM private key encoding produces invalid output — key validation fails", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-async-explicit-elliptic-curve-encrypted-p256.js": { - "reason": "generateKeyPair returns KeyObject with undefined asymmetricKeyType — assertion helper fails checking key properties", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-async-explicit-elliptic-curve-encrypted.js.js": { - "reason": "generateKeyPair returns KeyObject with undefined asymmetricKeyType — assertion helper fails checking key properties", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-async-explicit-elliptic-curve.js": { - "reason": "generateKeyPair returns KeyObject with undefined asymmetricKeyType — assertion helper fails checking key properties", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-async-named-elliptic-curve-encrypted-p256.js": { - "reason": "generateKeyPair returns KeyObject with undefined asymmetricKeyType — assertion helper fails checking key properties", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-async-named-elliptic-curve-encrypted.js": { - "reason": "generateKeyPair returns KeyObject with undefined asymmetricKeyType — assertion helper fails checking key properties", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-async-named-elliptic-curve.js": { - "reason": "generateKeyPair returns KeyObject with undefined asymmetricKeyType — assertion helper fails checking key properties", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-async-rsa.js": { - "reason": "generateKeyPair RSA key output validation fails — exported key format does not match expected PEM structure", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-bit-length.js": { - "reason": "KeyObject.asymmetricKeyDetails is undefined — bridge does not populate modulusLength, publicExponent on generated keys", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-deprecation.js": { - "reason": "KeyObject.asymmetricKeyType is undefined — bridge does not set type metadata (rsa, rsa-pss, ec, etc.) on generated keys", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-dh-classic.js": { - "reason": "KeyObject.asymmetricKeyType is undefined — bridge does not set type metadata on DH generated keys", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-duplicate-deprecated-option.js": { - "reason": "KeyObject.asymmetricKeyType is undefined — bridge does not set type metadata on generated keys", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-eddsa.js": { - "reason": "generateKeyPair callback invocation broken for ed25519/ed448 key types — callback not called correctly", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-empty-passphrase-no-prompt.js": { - "reason": "KeyObject.asymmetricKeyType is undefined — bridge does not set type metadata on generated keys", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-invalid-parameter-encoding-dsa.js": { - "reason": "generateKeyPairSync does not throw for invalid DSA parameter encoding — error validation missing", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-invalid-parameter-encoding-ec.js": { - "reason": "generateKeyPairSync does not throw for invalid EC parameter encoding — error validation missing", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-key-object-without-encoding.js": { - "reason": "KeyObject.asymmetricKeyType is undefined — bridge does not set type metadata on generated keys", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-key-objects.js": { - "reason": "KeyObject.asymmetricKeyType is undefined — bridge does not set type metadata on generated keys", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-no-rsassa-pss-params.js": { - "reason": "KeyObject.asymmetricKeyDetails is undefined — bridge does not populate modulusLength, publicExponent, hash details on generated keys", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-non-standard-public-exponent.js": { - "reason": "KeyObject.asymmetricKeyType is undefined — bridge does not set type metadata on generated keys", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-rfc8017-9-1.js": { - "reason": "KeyObject.asymmetricKeyDetails is undefined — bridge does not populate RSA-PSS key details (modulusLength, hashAlgorithm, mgf1HashAlgorithm, saltLength)", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-rfc8017-a-2-3.js": { - "reason": "KeyObject.asymmetricKeyDetails is undefined — bridge does not populate RSA-PSS key details (modulusLength, hashAlgorithm, mgf1HashAlgorithm, saltLength)", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-rsa-pss.js": { - "reason": "KeyObject.asymmetricKeyType is undefined — bridge does not set type metadata on generated keys", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen-sync.js": { - "reason": "generateKeyPairSync returns KeyObject with undefined asymmetricKeyType — assertion helper fails checking key properties", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-keygen.js": { - "reason": "generateKeyPairSync does not validate required options — missing TypeError for invalid arguments", + "reason": "fs.readFileSync() still folds the encoding argument into the VFS path for PEM fixtures, so the test tries to open '/test/fixtures/rsa_public.pem/ascii'", "category": "implementation-gap", "expected": "fail" }, "test-crypto-padding.js": { - "reason": "createCipheriv/createDecipheriv do not throw expected exceptions for invalid padding options", + "reason": "OpenSSL cipher errors still miss Node's `reason` field snapshot shape, so invalid-padding assertions on `ERR_OSSL_WRONG_FINAL_BLOCK_LENGTH` fail", "category": "implementation-gap", "expected": "fail" }, "test-crypto-pbkdf2.js": { - "reason": "pbkdf2/pbkdf2Sync error validation missing ERR_INVALID_ARG_TYPE code — TypeError thrown without .code property", - "category": "implementation-gap", + "reason": "SharedArrayBuffer is intentionally removed by sandbox hardening, so the vendored TypedArray coverage loop aborts before the remaining pbkdf2 assertions run", + "category": "security-constraint", "expected": "fail" }, "test-crypto-private-decrypt-gh32240.js": { - "reason": "publicEncrypt/privateDecrypt bridge returns undefined instead of Buffer — asymmetric encryption result not propagated", + "reason": "encrypted private-key decrypt path does not throw the expected failure, so the test hits `Missing expected exception`", "category": "implementation-gap", "expected": "fail" }, "test-crypto-psychic-signatures.js": { - "reason": "ECDSA key import fails with unsupported key format — bridge cannot decode the specific ECDSA public key encoding used in test", + "reason": "ECDSA psychic-signature fixture parsing still crashes with `TypeError: Cannot read properties of null (reading '2')`", "category": "implementation-gap", "expected": "fail" }, "test-crypto-rsa-dsa.js": { - "reason": "fs.readFileSync encoding argument handled as path component — test reads fixture PEM/cert files which fail to load", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-secret-keygen.js": { - "reason": "crypto.generateKey() function not implemented in bridge — only generateKeyPairSync/generateKeyPair are bridged", + "reason": "crypto cert fixtures are incomplete in the conformance VFS, so the test fails opening `/test/fixtures/rsa_cert.crt`", "category": "implementation-gap", "expected": "fail" }, @@ -2142,12 +1952,7 @@ "expected": "fail" }, "test-crypto-sign-verify.js": { - "reason": "fs.readFileSync encoding argument handled as path component — test reads fixture PEM/cert files which fail to load", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-stream.js": { - "reason": "crypto Hash/Cipher objects do not implement Node.js Stream interface — .pipe() method not available", + "reason": "crypto cert fixtures are incomplete in the conformance VFS, so the test fails opening `/test/fixtures/rsa_cert.crt`", "category": "implementation-gap", "expected": "fail" }, @@ -3446,7 +3251,7 @@ "expected": "fail" }, "test-webcrypto-sign-verify-eddsa.js": { - "reason": "WebCrypto subtle.importKey() not implemented — crypto.subtle API methods return undefined", + "reason": "EdDSA WebCrypto path still fails runtime assertions during sign/verify coverage (`AssertionError2: false == true`)", "category": "implementation-gap", "expected": "fail" }, @@ -3801,67 +3606,57 @@ "expected": "fail" }, "test-crypto-async-sign-verify.js": { - "reason": "uses crypto/webcrypto APIs not fully bridged in sandbox", + "reason": "crypto fixture set is incomplete in the conformance VFS, so async sign/verify fails opening `/test/fixtures/rsa_public.pem`", "category": "implementation-gap", "expected": "fail" }, "test-crypto-certificate.js": { - "reason": "tests Node.js-specific error codes (ERR_*) — sandbox polyfills throw plain errors", + "reason": "crypto certificate fixtures are incomplete in the conformance VFS, so the test fails opening `/test/fixtures/rsa_spkac.spkac`", "category": "implementation-gap", "expected": "fail" }, "test-crypto-classes.js": { - "reason": "uses crypto/webcrypto APIs not fully bridged in sandbox", + "reason": "constructor/factory parity is incomplete for crypto classes, so `createX()` instances fail `instanceof crypto.X` assertions", "category": "implementation-gap", "expected": "fail" }, "test-crypto-dh-group-setters.js": { - "reason": "uses crypto/webcrypto APIs not fully bridged in sandbox", + "reason": "ECDH instances still expose `setPrivateKey()` where Node expects `undefined`, so the group-setter surface mismatches Node", "category": "implementation-gap", "expected": "fail" }, "test-crypto-getcipherinfo.js": { - "reason": "tests Node.js-specific error codes (ERR_*) — sandbox polyfills throw plain errors", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-hash-stream-pipe.js": { - "reason": "uses crypto/webcrypto APIs not fully bridged in sandbox", + "reason": "`crypto.getCipherInfo()` is still missing from the sandbox crypto surface", "category": "implementation-gap", "expected": "fail" }, "test-crypto-hash.js": { - "reason": "tests Node.js-specific error codes (ERR_*) — sandbox polyfills throw plain errors", + "reason": "hash parity still breaks Node's identity-sensitive output assertions (`Values identical but not reference-equal`) in the vendored hash suite", "category": "implementation-gap", "expected": "fail" }, "test-crypto-hkdf.js": { - "reason": "tests Node.js-specific error codes (ERR_*) — sandbox polyfills throw plain errors", + "reason": "`crypto.hkdf()` is still missing from the sandbox crypto surface, so the test gets `hkdf is not a function`", "category": "implementation-gap", "expected": "fail" }, "test-crypto-hmac.js": { - "reason": "tests Node.js-specific error codes (ERR_*) — sandbox polyfills throw plain errors", - "category": "implementation-gap", - "expected": "fail" - }, - "test-crypto-lazy-transform-writable.js": { - "reason": "uses crypto/webcrypto APIs not fully bridged in sandbox", + "reason": "`crypto.Hmac` constructor parity is incomplete, so calling it without `new` does not return a fresh Hmac instance like Node", "category": "implementation-gap", "expected": "fail" }, "test-crypto-oneshot-hash.js": { - "reason": "tests Node.js-specific error codes (ERR_*) — sandbox polyfills throw plain errors", + "reason": "one-shot hash argument validation still misses Node's `ERR_INVALID_ARG_TYPE` error shape", "category": "implementation-gap", "expected": "fail" }, "test-crypto-randomuuid.js": { - "reason": "tests Node.js-specific error codes (ERR_*) — sandbox polyfills throw plain errors", + "reason": "`require('crypto').randomUUID` is still missing from the module overlay, so the test gets `TypeError: randomUUID is not a function`", "category": "implementation-gap", "expected": "fail" }, "test-crypto-webcrypto-aes-decrypt-tag-too-small.js": { - "reason": "crypto polyfill behavior gap", + "reason": "AES-GCM decrypt rejects undersized tags with `TypeError: Invalid authentication tag length: 0` instead of Node's `OperationError`", "category": "implementation-gap", "expected": "fail" }, @@ -4435,11 +4230,6 @@ "category": "implementation-gap", "expected": "fail" }, - "test-global-webcrypto.js": { - "reason": "uses crypto/webcrypto APIs not fully bridged in sandbox", - "category": "implementation-gap", - "expected": "fail" - }, "test-global-webstreams.js": { "reason": "require('stream/web') fails — stream/web ESM wrapper contains 'export' syntax that the CJS compilation path cannot parse (SyntaxError: Unexpected token 'export')", "category": "implementation-gap", @@ -4465,21 +4255,6 @@ "category": "implementation-gap", "expected": "fail" }, - "test-http-agent-destroyed-socket.js": { - "reason": "uses http.createServer/listen — HTTP server behavior has gaps in sandbox", - "category": "implementation-gap", - "expected": "fail" - }, - "test-http-agent-getname.js": { - "reason": "TypeError: agent.getName() is not a function — http.Agent.getName() not implemented in http polyfill", - "category": "unsupported-api", - "expected": "fail" - }, - "test-http-agent-keepalive-delay.js": { - "reason": "uses http.createServer/listen — HTTP server behavior has gaps in sandbox", - "category": "implementation-gap", - "expected": "fail" - }, "test-http-agent-keepalive.js": { "reason": "uses http.createServer/listen — HTTP server behavior has gaps in sandbox", "category": "implementation-gap", @@ -4490,21 +4265,6 @@ "category": "implementation-gap", "expected": "fail" }, - "test-http-agent-maxsockets.js": { - "reason": "uses http.createServer/listen — HTTP server behavior has gaps in sandbox", - "category": "implementation-gap", - "expected": "fail" - }, - "test-http-agent-maxtotalsockets.js": { - "reason": "needs http.createServer with real connection handling + maxTotalSockets API", - "category": "implementation-gap", - "expected": "fail" - }, - "test-http-agent.js": { - "reason": "uses http.createServer/listen — HTTP server behavior has gaps in sandbox", - "category": "implementation-gap", - "expected": "fail" - }, "test-http-allow-req-after-204-res.js": { "reason": "uses http.createServer/listen — HTTP server behavior has gaps in sandbox", "category": "implementation-gap", @@ -5909,42 +5669,32 @@ "expected": "fail" }, "test-webcrypto-constructors.js": { - "reason": "crypto.subtle (WebCrypto) API not fully implemented in sandbox", + "reason": "WebCrypto constructors still throw the wrong error shape (`TypeError` with missing `ERR_ILLEGAL_CONSTRUCTOR` metadata)", "category": "implementation-gap", "expected": "fail" }, "test-webcrypto-derivebits-hkdf.js": { - "reason": "crypto.subtle (WebCrypto) API not fully implemented in sandbox", + "reason": "HKDF deriveBits/deriveKey parity is still broken, and the vendored test aborts with `Deriving bits failed`", "category": "implementation-gap", "expected": "fail" }, "test-webcrypto-digest.js": { - "reason": "uses crypto/webcrypto APIs not fully bridged in sandbox", + "reason": "WebCrypto digest argument validation still misses Node's `ERR_INVALID_ARG_TYPE` error metadata", "category": "implementation-gap", "expected": "fail" }, "test-webcrypto-export-import-cfrg.js": { - "reason": "uses crypto/webcrypto APIs not fully bridged in sandbox", + "reason": "WebCrypto import/export fixture set is incomplete in the conformance VFS, so the test fails opening `/test/fixtures/rsa_public_2048.pem`", "category": "implementation-gap", "expected": "fail" }, "test-webcrypto-export-import-ec.js": { - "reason": "uses crypto/webcrypto APIs not fully bridged in sandbox", + "reason": "WebCrypto import/export fixture set is incomplete in the conformance VFS, so the test fails opening `/test/fixtures/rsa_public_2048.pem`", "category": "implementation-gap", "expected": "fail" }, "test-webcrypto-export-import-rsa.js": { - "reason": "uses crypto/webcrypto APIs not fully bridged in sandbox", - "category": "implementation-gap", - "expected": "fail" - }, - "test-webcrypto-getRandomValues.js": { - "reason": "globalThis.crypto.getRandomValues called without receiver does not throw ERR_INVALID_THIS in sandbox — WebCrypto polyfill does not enforce receiver binding", - "category": "implementation-gap", - "expected": "fail" - }, - "test-webcrypto-random.js": { - "reason": "sandbox crypto.getRandomValues() throws plain TypeError instead of DOMException TypeMismatchError (code 17) for invalid typed array argument types", + "reason": "WebCrypto import/export fixture set is incomplete in the conformance VFS, so the test fails opening `/test/fixtures/ec_p256_public.pem`", "category": "implementation-gap", "expected": "fail" }, @@ -6797,11 +6547,6 @@ "reason": "vacuous pass — test self-skips via common.skip() because common.hasCrypto is false", "category": "vacuous-skip" }, - "test-crypto-dh-odd-key.js": { - "expected": "fail", - "reason": "crypto.getFips is not a function — FIPS detection API not implemented", - "category": "implementation-gap" - }, "test-crypto-dh-shared.js": { "expected": "pass", "reason": "vacuous pass — test self-skips via common.skip() because common.hasCrypto is false", @@ -6828,9 +6573,9 @@ "category": "vacuous-skip" }, "test-crypto-no-algorithm.js": { - "expected": "pass", - "reason": "vacuous pass — test self-skips via common.skip() because common.hasCrypto is false", - "category": "vacuous-skip" + "expected": "fail", + "reason": "require('node:assert/strict') alias is not wired in the sandbox stdlib loader yet", + "category": "implementation-gap" }, "test-crypto-op-during-process-exit.js": { "expected": "pass", @@ -6857,6 +6602,11 @@ "reason": "vacuous pass — test self-skips via common.skip() because common.hasCrypto is false", "category": "vacuous-skip" }, + "test-crypto-worker-thread.js": { + "expected": "fail", + "reason": "requires worker_threads module which is Tier 4 (Deferred)", + "category": "unsupported-module" + }, "test-dsa-fips-invalid-key.js": { "expected": "fail", "reason": "crypto.getFips is not a function — FIPS detection API not implemented", diff --git a/packages/secure-exec/tests/project-matrix.test.ts b/packages/secure-exec/tests/project-matrix.test.ts index 9de73e52..97f10a3d 100644 --- a/packages/secure-exec/tests/project-matrix.test.ts +++ b/packages/secure-exec/tests/project-matrix.test.ts @@ -1,9 +1,6 @@ -import { execFile } from "node:child_process"; -import { createHash } from "node:crypto"; -import { access, cp, mkdir, readFile, readdir, rename, rm, writeFile } from "node:fs/promises"; +import { readFile, readdir } from "node:fs/promises"; import path from "node:path"; import { fileURLToPath } from "node:url"; -import { promisify } from "node:util"; import { describe, expect, it } from "vitest"; import { allowAllEnv, @@ -12,14 +9,23 @@ import { createDefaultNetworkAdapter, createNodeDriver, NodeFileSystem, - NodeRuntime, } from "../src/index.js"; +import { + assertPathExists, + type CapturedConsoleEvent, + formatConsoleChannel, + formatErrorOutput, + normalizeEnvelope, + parsePackageManagerFixtureMetadata, + type PreparedFixture, + prepareFixtureProject as prepareSharedFixtureProject, + type ResultEnvelope, + runHostExecution, + type PackageManagerFixtureMetadata, +} from "./project-matrix/shared.js"; import { createTestNodeRuntime } from "./test-utils.js"; -const execFileAsync = promisify(execFile); const TEST_TIMEOUT_MS = 55_000; -const COMMAND_TIMEOUT_MS = 45_000; -const CACHE_READY_MARKER = ".ready"; const TESTS_ROOT = path.dirname(fileURLToPath(import.meta.url)); const PACKAGE_ROOT = path.resolve(TESTS_ROOT, ".."); @@ -33,25 +39,7 @@ const fixturePermissions = { ...allowAllNetwork, }; -type PackageManager = "pnpm" | "npm" | "bun" | "yarn"; - -type PassFixtureMetadata = { - entry: string; - expectation: "pass"; - packageManager?: PackageManager; -}; - -type FailFixtureMetadata = { - entry: string; - expectation: "fail"; - fail: { - code: number; - stderrIncludes: string; - }; - packageManager?: PackageManager; -}; - -type FixtureMetadata = PassFixtureMetadata | FailFixtureMetadata; +type FixtureMetadata = PackageManagerFixtureMetadata; type FixtureProject = { name: string; @@ -59,40 +47,6 @@ type FixtureProject = { metadata: FixtureMetadata; }; -type PreparedFixture = { - cacheHit: boolean; - cacheKey: string; - projectDir: string; -}; - -type ResultEnvelope = { - code: number; - stdout: string; - stderr: string; -}; - -type CapturedConsoleEvent = { - channel: "stdout" | "stderr"; - message: string; -}; - -function formatConsoleChannel( - events: CapturedConsoleEvent[], - channel: CapturedConsoleEvent["channel"], -): string { - const lines = events - .filter((event) => event.channel === channel) - .map((event) => event.message); - return lines.join("\n") + (lines.length > 0 ? "\n" : ""); -} - -function formatErrorOutput(errorMessage: string | undefined): string { - if (!errorMessage) { - return ""; - } - return errorMessage.endsWith("\n") ? errorMessage : `${errorMessage}\n`; -} - const discoveredFixtures = await discoverFixtures(); describe("compatibility project matrix", () => { @@ -112,6 +66,7 @@ describe("compatibility project matrix", () => { const prepared = await prepareFixtureProject(fixture); const host = await runHostExecution(prepared.projectDir, fixture.metadata.entry); + assertHostFixtureBaseline(host); const sandbox = await runOverlaySandboxExecution( prepared.projectDir, fixture.metadata.entry, @@ -138,19 +93,19 @@ describe("compatibility project matrix", () => { secondPrepare.projectDir, fixture.metadata.entry, ); + assertHostFixtureBaseline(host); const sandbox = await runSandboxExecution( secondPrepare.projectDir, fixture.metadata.entry, ); if (fixture.metadata.expectation === "pass") { - expect(sandbox.code).toBe(host.code); + expect(sandbox.code).toBe(0); expect(sandbox.stdout).toBe(host.stdout); expect(sandbox.stderr).toBe(host.stderr); return; } - expect(host.code).toBe(0); expect(sandbox.code).toBe(fixture.metadata.fail.code); expect(sandbox.stderr).toContain(fixture.metadata.fail.stderrIncludes); }, @@ -159,6 +114,11 @@ describe("compatibility project matrix", () => { } }); +function assertHostFixtureBaseline(host: ResultEnvelope): void { + // Validate the fixture in plain Node before treating any mismatch as a sandbox bug. + expect(host.code).toBe(0); +} + async function discoverFixtures(): Promise { // Get project directories and validate metadata before running tests. const entries = await readdir(FIXTURES_ROOT, { withFileTypes: true }); @@ -173,7 +133,7 @@ async function discoverFixtures(): Promise { const metadataPath = path.join(sourceDir, "fixture.json"); const metadataText = await readFile(metadataPath, "utf8"); const parsed = JSON.parse(metadataText) as unknown; - const metadata = parseFixtureMetadata(parsed, fixtureName); + const metadata = parsePackageManagerFixtureMetadata(parsed, fixtureName); const entryPath = path.join(sourceDir, metadata.entry); await assertPathExists( entryPath, @@ -193,307 +153,14 @@ async function discoverFixtures(): Promise { return fixtures; } -function parseFixtureMetadata(raw: unknown, fixtureName: string): FixtureMetadata { - // Enforce a strict metadata schema with only pass/fail expectations. - if (!isRecord(raw)) { - throw new Error(`Fixture "${fixtureName}" metadata must be an object`); - } - if ("knownMismatch" in raw) { - throw new Error( - `Fixture "${fixtureName}" uses unsupported knownMismatch classification`, - ); - } - if ("sandboxEntry" in raw || "nodeEntry" in raw) { - throw new Error( - `Fixture "${fixtureName}" must use a single shared entry for both runtimes`, - ); - } - - const allowedTopLevelKeys = new Set(["entry", "expectation", "fail", "packageManager"]); - for (const key of Object.keys(raw)) { - if (!allowedTopLevelKeys.has(key)) { - throw new Error( - `Fixture "${fixtureName}" has unsupported metadata key "${key}"`, - ); - } - } - - if (typeof raw.entry !== "string" || raw.entry.length === 0) { - throw new Error(`Fixture "${fixtureName}" requires a non-empty entry`); - } - if (raw.expectation !== "pass" && raw.expectation !== "fail") { - throw new Error( - `Fixture "${fixtureName}" expectation must be "pass" or "fail"`, - ); - } - - // Validate optional packageManager field. - const validPackageManagers = new Set(["pnpm", "npm", "bun", "yarn"]); - if ( - raw.packageManager !== undefined && - (typeof raw.packageManager !== "string" || !validPackageManagers.has(raw.packageManager)) - ) { - throw new Error( - `Fixture "${fixtureName}" packageManager must be "pnpm", "npm", "bun", or "yarn"`, - ); - } - const packageManager = (raw.packageManager as PackageManager | undefined) ?? undefined; - - if (raw.expectation === "pass") { - return { - entry: raw.entry, - expectation: "pass", - ...(packageManager && { packageManager }), - }; - } - - if (!isRecord(raw.fail)) { - throw new Error( - `Fixture "${fixtureName}" with expectation "fail" requires a fail contract`, - ); - } - const failKeys = new Set(["code", "stderrIncludes"]); - for (const key of Object.keys(raw.fail)) { - if (!failKeys.has(key)) { - throw new Error( - `Fixture "${fixtureName}" fail contract has unsupported key "${key}"`, - ); - } - } - - if (typeof raw.fail.code !== "number") { - throw new Error( - `Fixture "${fixtureName}" fail contract requires numeric code`, - ); - } - if ( - typeof raw.fail.stderrIncludes !== "string" || - raw.fail.stderrIncludes.length === 0 - ) { - throw new Error( - `Fixture "${fixtureName}" fail contract requires stderrIncludes`, - ); - } - - return { - entry: raw.entry, - expectation: "fail", - fail: { - code: raw.fail.code, - stderrIncludes: raw.fail.stderrIncludes, - }, - ...(packageManager && { packageManager }), - }; -} - async function prepareFixtureProject(fixture: FixtureProject): Promise { - // Set up cache roots and return ready entries immediately. - await mkdir(CACHE_ROOT, { recursive: true }); - const cacheKey = await createFixtureCacheKey(fixture); - const cacheDir = path.join(CACHE_ROOT, `${fixture.name}-${cacheKey}`); - const readyMarkerPath = path.join(cacheDir, CACHE_READY_MARKER); - if (await pathExists(readyMarkerPath)) { - return { - cacheHit: true, - cacheKey, - projectDir: cacheDir, - }; - } - - // Reset stale cache entries that do not have a ready marker. - if (await pathExists(cacheDir)) { - await rm(cacheDir, { recursive: true, force: true }); - } - - // Prepare and install dependencies in a staging directory. - const stagingDir = `${cacheDir}.tmp-${process.pid}-${Date.now()}`; - await rm(stagingDir, { recursive: true, force: true }); - await cp(fixture.sourceDir, stagingDir, { - recursive: true, - filter: (source) => !isNodeModulesPath(source), + return prepareSharedFixtureProject({ + cacheRoot: CACHE_ROOT, + workspaceRoot: WORKSPACE_ROOT, + fixtureName: fixture.name, + sourceDir: fixture.sourceDir, + packageManager: fixture.metadata.packageManager, }); - const pm = fixture.metadata.packageManager ?? "pnpm"; - const installCmd = - pm === "npm" - ? { cmd: "npm", args: ["install", "--prefer-offline"] } - : pm === "bun" - ? { cmd: "bun", args: ["install"] } - : pm === "yarn" - ? await getYarnInstallCmd(stagingDir) - : { cmd: "pnpm", args: ["install", "--ignore-workspace", "--prefer-offline"] }; - await execFileAsync(installCmd.cmd, installCmd.args, { - cwd: stagingDir, - timeout: COMMAND_TIMEOUT_MS, - maxBuffer: 10 * 1024 * 1024, - ...(pm === "yarn" && { env: yarnEnv }), - }); - await writeFile( - path.join(stagingDir, CACHE_READY_MARKER), - `${new Date().toISOString()}\n`, - ); - - // Promote the staging directory after install is complete. - try { - await rename(stagingDir, cacheDir); - } catch (error) { - const code = - error && typeof error === "object" && "code" in error - ? String(error.code) - : ""; - if (code !== "EEXIST") { - throw error; - } - await rm(stagingDir, { recursive: true, force: true }); - if (!(await pathExists(readyMarkerPath))) { - throw new Error(`Cache entry race produced missing ready marker: ${cacheDir}`); - } - } - - return { - cacheHit: false, - cacheKey, - projectDir: cacheDir, - }; -} - -async function createFixtureCacheKey(fixture: FixtureProject): Promise { - // Hash fixture files and install-affecting runtime/tool factors. - const hash = createHash("sha256"); - const nodeMajor = process.versions.node.split(".")[0] ?? "0"; - const pm = fixture.metadata.packageManager ?? "pnpm"; - const pmVersion = - pm === "npm" - ? await getNpmVersion() - : pm === "bun" - ? await getBunVersion() - : pm === "yarn" - ? await getYarnVersion() - : await getPnpmVersion(); - hash.update(`node-major:${nodeMajor}\n`); - hash.update(`pm:${pm}\n`); - hash.update(`pm-version:${pmVersion}\n`); - hash.update(`platform:${process.platform}\n`); - hash.update(`arch:${process.arch}\n`); - - await hashOptionalFile( - hash, - "workspace-lock", - path.join(WORKSPACE_ROOT, "pnpm-lock.yaml"), - ); - await hashOptionalFile( - hash, - "workspace-package", - path.join(WORKSPACE_ROOT, "package.json"), - ); - await hashOptionalFile( - hash, - "fixture-package", - path.join(fixture.sourceDir, "package.json"), - ); - const lockFile = - pm === "npm" - ? "package-lock.json" - : pm === "bun" - ? "bun.lock" - : pm === "yarn" - ? "yarn.lock" - : "pnpm-lock.yaml"; - await hashOptionalFile( - hash, - "fixture-lock", - path.join(fixture.sourceDir, lockFile), - ); - - const files = await listFixtureFiles(fixture.sourceDir); - for (const relativePath of files) { - const absolutePath = path.join(fixture.sourceDir, relativePath); - const content = await readFile(absolutePath); - hash.update(`fixture-file:${toPosixPath(relativePath)}\n`); - hash.update(content); - hash.update("\n"); - } - - return hash.digest("hex").slice(0, 16); -} - -let pnpmVersionPromise: Promise | undefined; - -function getPnpmVersion(): Promise { - // Get pnpm version once so cache-key calculation stays stable. - if (!pnpmVersionPromise) { - pnpmVersionPromise = execFileAsync("pnpm", ["--version"], { - cwd: WORKSPACE_ROOT, - timeout: COMMAND_TIMEOUT_MS, - maxBuffer: 1024 * 1024, - }).then((result) => result.stdout.trim()); - } - - return pnpmVersionPromise; -} - -let npmVersionPromise: Promise | undefined; - -function getNpmVersion(): Promise { - if (!npmVersionPromise) { - npmVersionPromise = execFileAsync("npm", ["--version"], { - cwd: WORKSPACE_ROOT, - timeout: COMMAND_TIMEOUT_MS, - maxBuffer: 1024 * 1024, - }).then((result) => result.stdout.trim()); - } - - return npmVersionPromise; -} - -let bunVersionPromise: Promise | undefined; - -function getBunVersion(): Promise { - if (!bunVersionPromise) { - bunVersionPromise = execFileAsync("bun", ["--version"], { - cwd: WORKSPACE_ROOT, - timeout: COMMAND_TIMEOUT_MS, - maxBuffer: 1024 * 1024, - }).then((result) => result.stdout.trim()); - } - - return bunVersionPromise; -} - -let yarnVersionPromise: Promise | undefined; - -// Bypass corepack packageManager enforcement so yarn runs in a pnpm workspace. -const yarnEnv = { ...process.env, COREPACK_ENABLE_STRICT: "0" }; - -function getYarnVersion(): Promise { - if (!yarnVersionPromise) { - yarnVersionPromise = execFileAsync("yarn", ["--version"], { - cwd: WORKSPACE_ROOT, - timeout: COMMAND_TIMEOUT_MS, - maxBuffer: 1024 * 1024, - env: yarnEnv, - }).then((result) => result.stdout.trim()); - } - - return yarnVersionPromise; -} - -async function getYarnInstallCmd( - projectDir: string, -): Promise<{ cmd: string; args: string[] }> { - // Berry (v2+) uses .yarnrc.yml; classic (v1) does not. - const isBerry = await pathExists(path.join(projectDir, ".yarnrc.yml")); - return isBerry - ? { cmd: "yarn", args: ["install", "--immutable"] } - : { cmd: "yarn", args: ["install"] }; -} - -async function runHostExecution( - projectDir: string, - entryRelativePath: string, -): Promise { - const entryPath = path.join(projectDir, entryRelativePath); - const result = await runCommand(process.execPath, [entryPath], projectDir); - return normalizeEnvelope(result, projectDir); } async function runSandboxExecution( @@ -586,160 +253,3 @@ async function runOverlaySandboxExecution( proc.dispose(); } } - -async function runCommand( - command: string, - args: string[], - cwd: string, -): Promise { - try { - const result = await execFileAsync(command, args, { - cwd, - timeout: COMMAND_TIMEOUT_MS, - maxBuffer: 10 * 1024 * 1024, - }); - return { - code: 0, - stdout: result.stdout, - stderr: result.stderr, - }; - } catch (error: unknown) { - if (!isExecError(error)) { - throw error; - } - return { - code: typeof error.code === "number" ? error.code : 1, - stdout: typeof error.stdout === "string" ? error.stdout : "", - stderr: typeof error.stderr === "string" ? error.stderr : "", - }; - } -} - -function normalizeEnvelope( - envelope: ResultEnvelope, - projectDir: string, -): ResultEnvelope { - return { - code: envelope.code, - stdout: normalizeText(envelope.stdout, projectDir), - stderr: normalizeText(envelope.stderr, projectDir), - }; -} - -function normalizeText(value: string, projectDir: string): string { - const normalized = value.replace(/\r\n/g, "\n"); - const projectDirPosix = toPosixPath(projectDir); - const withoutPaths = normalized - .split(projectDir) - .join("") - .split(projectDirPosix) - .join(""); - return normalizeModuleNotFoundText(withoutPaths); -} - -function normalizeModuleNotFoundText(value: string): string { - if (!value.includes("Cannot find module")) { - return value; - } - const quotedMatch = value.match(/Cannot find module '([^']+)'/); - if (quotedMatch) { - return `Cannot find module '${quotedMatch[1]}'\n`; - } - const fromMatch = value.match(/Cannot find module:\s*([^\s]+)\s+from\s+/); - if (fromMatch) { - return `Cannot find module '${fromMatch[1]}'\n`; - } - return value; -} - -async function hashOptionalFile( - hash: ReturnType, - label: string, - filePath: string, -): Promise { - hash.update(`${label}:`); - try { - const content = await readFile(filePath); - hash.update(content); - } catch (error) { - if (!isNotFoundError(error)) { - throw error; - } - hash.update(""); - } - hash.update("\n"); -} - -async function listFixtureFiles(rootDir: string): Promise { - const files: string[] = []; - - async function walk(relativeDir: string): Promise { - const directory = path.join(rootDir, relativeDir); - const entries = await readdir(directory, { withFileTypes: true }); - const sortedEntries = entries - .filter((entry) => !isNodeModulesPath(entry.name)) - .sort((left, right) => left.name.localeCompare(right.name)); - - for (const entry of sortedEntries) { - const relativePath = relativeDir - ? path.join(relativeDir, entry.name) - : entry.name; - if (entry.isDirectory()) { - await walk(relativePath); - continue; - } - if (entry.isFile()) { - files.push(relativePath); - } - } - } - - await walk(""); - return files.sort((left, right) => left.localeCompare(right)); -} - -async function assertPathExists(pathname: string, message: string): Promise { - try { - await access(pathname); - } catch { - throw new Error(message); - } -} - -async function pathExists(pathname: string): Promise { - try { - await access(pathname); - return true; - } catch { - return false; - } -} - -function isNodeModulesPath(value: string): boolean { - return value.split(path.sep).includes("node_modules"); -} - -function isRecord(value: unknown): value is Record { - return Boolean(value) && typeof value === "object" && !Array.isArray(value); -} - -function isNotFoundError(value: unknown): boolean { - return ( - Boolean(value) && - typeof value === "object" && - "code" in value && - String(value.code) === "ENOENT" - ); -} - -function isExecError(value: unknown): value is { - code?: number; - stdout?: string; - stderr?: string; -} { - return Boolean(value) && typeof value === "object" && "stdout" in value; -} - -function toPosixPath(value: string): string { - return value.split(path.sep).join(path.posix.sep); -} diff --git a/packages/secure-exec/tests/project-matrix/shared.ts b/packages/secure-exec/tests/project-matrix/shared.ts new file mode 100644 index 00000000..052ac9f6 --- /dev/null +++ b/packages/secure-exec/tests/project-matrix/shared.ts @@ -0,0 +1,548 @@ +import { execFile } from "node:child_process"; +import { createHash } from "node:crypto"; +import { access, cp, mkdir, readFile, readdir, rename, rm, writeFile } from "node:fs/promises"; +import path from "node:path"; +import { promisify } from "node:util"; + +const execFileAsync = promisify(execFile); + +export const COMMAND_TIMEOUT_MS = 45_000; +export const CACHE_READY_MARKER = ".ready"; + +export type PackageManager = "pnpm" | "npm" | "bun" | "yarn"; + +export type PackageManagerPassFixtureMetadata = { + entry: string; + expectation: "pass"; + packageManager?: PackageManager; +}; + +export type PackageManagerFailFixtureMetadata = { + entry: string; + expectation: "fail"; + fail: { + code: number; + stderrIncludes: string; + }; + packageManager?: PackageManager; +}; + +export type PackageManagerFixtureMetadata = + | PackageManagerPassFixtureMetadata + | PackageManagerFailFixtureMetadata; + +export type PreparedFixture = { + cacheHit: boolean; + cacheKey: string; + projectDir: string; +}; + +export type ResultEnvelope = { + code: number; + stdout: string; + stderr: string; +}; + +export type CapturedConsoleEvent = { + channel: "stdout" | "stderr"; + message: string; +}; + +const yarnEnv = { ...process.env, COREPACK_ENABLE_STRICT: "0" }; + +export function parsePackageManagerFixtureMetadata( + raw: unknown, + fixtureName: string, +): PackageManagerFixtureMetadata { + // Enforce a strict metadata schema with only pass/fail expectations. + if (!isRecord(raw)) { + throw new Error(`Fixture "${fixtureName}" metadata must be an object`); + } + if ("knownMismatch" in raw) { + throw new Error( + `Fixture "${fixtureName}" uses unsupported knownMismatch classification`, + ); + } + if ("sandboxEntry" in raw || "nodeEntry" in raw) { + throw new Error( + `Fixture "${fixtureName}" must use a single shared entry for both runtimes`, + ); + } + + const allowedTopLevelKeys = new Set(["entry", "expectation", "fail", "packageManager"]); + for (const key of Object.keys(raw)) { + if (!allowedTopLevelKeys.has(key)) { + throw new Error( + `Fixture "${fixtureName}" has unsupported metadata key "${key}"`, + ); + } + } + + if (typeof raw.entry !== "string" || raw.entry.length === 0) { + throw new Error(`Fixture "${fixtureName}" requires a non-empty entry`); + } + if (raw.expectation !== "pass" && raw.expectation !== "fail") { + throw new Error( + `Fixture "${fixtureName}" expectation must be "pass" or "fail"`, + ); + } + + const validPackageManagers = new Set(["pnpm", "npm", "bun", "yarn"]); + if ( + raw.packageManager !== undefined && + (typeof raw.packageManager !== "string" || !validPackageManagers.has(raw.packageManager)) + ) { + throw new Error( + `Fixture "${fixtureName}" packageManager must be "pnpm", "npm", "bun", or "yarn"`, + ); + } + const packageManager = (raw.packageManager as PackageManager | undefined) ?? undefined; + + if (raw.expectation === "pass") { + return { + entry: raw.entry, + expectation: "pass", + ...(packageManager && { packageManager }), + }; + } + + if (!isRecord(raw.fail)) { + throw new Error( + `Fixture "${fixtureName}" with expectation "fail" requires a fail contract`, + ); + } + const failKeys = new Set(["code", "stderrIncludes"]); + for (const key of Object.keys(raw.fail)) { + if (!failKeys.has(key)) { + throw new Error( + `Fixture "${fixtureName}" fail contract has unsupported key "${key}"`, + ); + } + } + + if (typeof raw.fail.code !== "number") { + throw new Error( + `Fixture "${fixtureName}" fail contract requires numeric code`, + ); + } + if ( + typeof raw.fail.stderrIncludes !== "string" || + raw.fail.stderrIncludes.length === 0 + ) { + throw new Error( + `Fixture "${fixtureName}" fail contract requires stderrIncludes`, + ); + } + + return { + entry: raw.entry, + expectation: "fail", + fail: { + code: raw.fail.code, + stderrIncludes: raw.fail.stderrIncludes, + }, + ...(packageManager && { packageManager }), + }; +} + +export async function prepareFixtureProject(options: { + cacheRoot: string; + workspaceRoot: string; + fixtureName: string; + sourceDir: string; + packageManager?: PackageManager; +}): Promise { + const { + cacheRoot, + workspaceRoot, + fixtureName, + sourceDir, + packageManager = "pnpm", + } = options; + + await mkdir(cacheRoot, { recursive: true }); + const cacheKey = await createFixtureCacheKey({ + workspaceRoot, + sourceDir, + packageManager, + }); + const cacheDir = path.join(cacheRoot, `${fixtureName}-${cacheKey}`); + const readyMarkerPath = path.join(cacheDir, CACHE_READY_MARKER); + if (await pathExists(readyMarkerPath)) { + return { + cacheHit: true, + cacheKey, + projectDir: cacheDir, + }; + } + + if (await pathExists(cacheDir)) { + await rm(cacheDir, { recursive: true, force: true }); + } + + const stagingDir = `${cacheDir}.tmp-${process.pid}-${Date.now()}`; + await rm(stagingDir, { recursive: true, force: true }); + await cp(sourceDir, stagingDir, { + recursive: true, + filter: (source) => !isNodeModulesPath(source), + }); + + const installCmd = + packageManager === "npm" + ? { cmd: "npm", args: ["install", "--prefer-offline"] } + : packageManager === "bun" + ? { cmd: "bun", args: ["install"] } + : packageManager === "yarn" + ? await getYarnInstallCmd(stagingDir) + : { cmd: "pnpm", args: ["install", "--ignore-workspace", "--prefer-offline"] }; + await execFileAsync(installCmd.cmd, installCmd.args, { + cwd: stagingDir, + timeout: COMMAND_TIMEOUT_MS, + maxBuffer: 10 * 1024 * 1024, + ...(packageManager === "yarn" && { env: yarnEnv }), + }); + await writeFile( + path.join(stagingDir, CACHE_READY_MARKER), + `${new Date().toISOString()}\n`, + ); + + try { + await rename(stagingDir, cacheDir); + } catch (error) { + const code = + error && typeof error === "object" && "code" in error + ? String(error.code) + : ""; + if (code !== "EEXIST") { + throw error; + } + await rm(stagingDir, { recursive: true, force: true }); + if (!(await pathExists(readyMarkerPath))) { + throw new Error(`Cache entry race produced missing ready marker: ${cacheDir}`); + } + } + + return { + cacheHit: false, + cacheKey, + projectDir: cacheDir, + }; +} + +export async function runHostExecution( + projectDir: string, + entryRelativePath: string, + extraEnv: Record = {}, +): Promise { + const entryPath = path.join(projectDir, entryRelativePath); + const result = await runCommand(process.execPath, [entryPath], projectDir, extraEnv); + return normalizeEnvelope(result, projectDir); +} + +export async function runCommand( + command: string, + args: string[], + cwd: string, + extraEnv: Record = {}, +): Promise { + try { + const result = await execFileAsync(command, args, { + cwd, + timeout: COMMAND_TIMEOUT_MS, + maxBuffer: 10 * 1024 * 1024, + env: { ...process.env, ...extraEnv }, + }); + return { + code: 0, + stdout: result.stdout, + stderr: result.stderr, + }; + } catch (error: unknown) { + if (!isExecError(error)) { + throw error; + } + return { + code: typeof error.code === "number" ? error.code : 1, + stdout: typeof error.stdout === "string" ? error.stdout : "", + stderr: typeof error.stderr === "string" ? error.stderr : "", + }; + } +} + +export function formatConsoleChannel( + events: CapturedConsoleEvent[], + channel: CapturedConsoleEvent["channel"], +): string { + const lines = events + .filter((event) => event.channel === channel) + .map((event) => event.message); + return lines.join("\n") + (lines.length > 0 ? "\n" : ""); +} + +export function formatErrorOutput(errorMessage: string | undefined): string { + if (!errorMessage) { + return ""; + } + return errorMessage.endsWith("\n") ? errorMessage : `${errorMessage}\n`; +} + +export function normalizeEnvelope( + envelope: ResultEnvelope, + projectDir: string, +): ResultEnvelope { + return { + code: envelope.code, + stdout: normalizeText(envelope.stdout, projectDir), + stderr: normalizeText(envelope.stderr, projectDir), + }; +} + +export function normalizeText(value: string, projectDir: string): string { + const normalized = value.replace(/\r\n/g, "\n"); + const projectDirPosix = toPosixPath(projectDir); + const withoutPaths = normalized + .split(projectDir) + .join("") + .split(projectDirPosix) + .join(""); + return normalizeModuleNotFoundText(withoutPaths); +} + +export function normalizeModuleNotFoundText(value: string): string { + if (!value.includes("Cannot find module")) { + return value; + } + const quotedMatch = value.match(/Cannot find module '([^']+)'/); + if (quotedMatch) { + return `Cannot find module '${quotedMatch[1]}'\n`; + } + const fromMatch = value.match(/Cannot find module:\s*([^\s]+)\s+from\s+/); + if (fromMatch) { + return `Cannot find module '${fromMatch[1]}'\n`; + } + return value; +} + +export async function assertPathExists( + pathname: string, + message: string, +): Promise { + try { + await access(pathname); + } catch { + throw new Error(message); + } +} + +export async function pathExists(pathname: string): Promise { + try { + await access(pathname); + return true; + } catch { + return false; + } +} + +export function isRecord(value: unknown): value is Record { + return Boolean(value) && typeof value === "object" && !Array.isArray(value); +} + +export function isNodeModulesPath(value: string): boolean { + return value.split(path.sep).includes("node_modules"); +} + +export function isNotFoundError(value: unknown): boolean { + return ( + Boolean(value) && + typeof value === "object" && + "code" in value && + String(value.code) === "ENOENT" + ); +} + +export function isExecError(value: unknown): value is { + code?: number; + stdout?: string; + stderr?: string; +} { + return Boolean(value) && typeof value === "object" && "stdout" in value; +} + +export function toPosixPath(value: string): string { + return value.split(path.sep).join(path.posix.sep); +} + +async function createFixtureCacheKey(options: { + workspaceRoot: string; + sourceDir: string; + packageManager: PackageManager; +}): Promise { + const { workspaceRoot, sourceDir, packageManager } = options; + const hash = createHash("sha256"); + const nodeMajor = process.versions.node.split(".")[0] ?? "0"; + const pmVersion = + packageManager === "npm" + ? await getNpmVersion(workspaceRoot) + : packageManager === "bun" + ? await getBunVersion(workspaceRoot) + : packageManager === "yarn" + ? await getYarnVersion(workspaceRoot) + : await getPnpmVersion(workspaceRoot); + hash.update(`node-major:${nodeMajor}\n`); + hash.update(`pm:${packageManager}\n`); + hash.update(`pm-version:${pmVersion}\n`); + hash.update(`platform:${process.platform}\n`); + hash.update(`arch:${process.arch}\n`); + + await hashOptionalFile( + hash, + "workspace-lock", + path.join(workspaceRoot, "pnpm-lock.yaml"), + ); + await hashOptionalFile( + hash, + "workspace-package", + path.join(workspaceRoot, "package.json"), + ); + await hashOptionalFile( + hash, + "fixture-package", + path.join(sourceDir, "package.json"), + ); + const lockFile = + packageManager === "npm" + ? "package-lock.json" + : packageManager === "bun" + ? "bun.lock" + : packageManager === "yarn" + ? "yarn.lock" + : "pnpm-lock.yaml"; + await hashOptionalFile( + hash, + "fixture-lock", + path.join(sourceDir, lockFile), + ); + + const files = await listFixtureFiles(sourceDir); + for (const relativePath of files) { + const absolutePath = path.join(sourceDir, relativePath); + const content = await readFile(absolutePath); + hash.update(`fixture-file:${toPosixPath(relativePath)}\n`); + hash.update(content); + hash.update("\n"); + } + + return hash.digest("hex").slice(0, 16); +} + +async function hashOptionalFile( + hash: ReturnType, + label: string, + filePath: string, +): Promise { + hash.update(`${label}:`); + try { + const content = await readFile(filePath); + hash.update(content); + } catch (error) { + if (!isNotFoundError(error)) { + throw error; + } + hash.update(""); + } + hash.update("\n"); +} + +async function listFixtureFiles(rootDir: string): Promise { + const files: string[] = []; + + async function walk(relativeDir: string): Promise { + const directory = path.join(rootDir, relativeDir); + const entries = await readdir(directory, { withFileTypes: true }); + const sortedEntries = entries + .filter((entry) => !isNodeModulesPath(entry.name)) + .sort((left, right) => left.name.localeCompare(right.name)); + + for (const entry of sortedEntries) { + const relativePath = relativeDir + ? path.join(relativeDir, entry.name) + : entry.name; + if (entry.isDirectory()) { + await walk(relativePath); + continue; + } + if (entry.isFile()) { + files.push(relativePath); + } + } + } + + await walk(""); + return files.sort((left, right) => left.localeCompare(right)); +} + +let pnpmVersionPromise: Promise | undefined; + +function getPnpmVersion(workspaceRoot: string): Promise { + if (!pnpmVersionPromise) { + pnpmVersionPromise = execFileAsync("pnpm", ["--version"], { + cwd: workspaceRoot, + timeout: COMMAND_TIMEOUT_MS, + maxBuffer: 1024 * 1024, + }).then((result) => result.stdout.trim()); + } + + return pnpmVersionPromise; +} + +let npmVersionPromise: Promise | undefined; + +function getNpmVersion(workspaceRoot: string): Promise { + if (!npmVersionPromise) { + npmVersionPromise = execFileAsync("npm", ["--version"], { + cwd: workspaceRoot, + timeout: COMMAND_TIMEOUT_MS, + maxBuffer: 1024 * 1024, + }).then((result) => result.stdout.trim()); + } + + return npmVersionPromise; +} + +let bunVersionPromise: Promise | undefined; + +function getBunVersion(workspaceRoot: string): Promise { + if (!bunVersionPromise) { + bunVersionPromise = execFileAsync("bun", ["--version"], { + cwd: workspaceRoot, + timeout: COMMAND_TIMEOUT_MS, + maxBuffer: 1024 * 1024, + }).then((result) => result.stdout.trim()); + } + + return bunVersionPromise; +} + +let yarnVersionPromise: Promise | undefined; + +function getYarnVersion(workspaceRoot: string): Promise { + if (!yarnVersionPromise) { + yarnVersionPromise = execFileAsync("yarn", ["--version"], { + cwd: workspaceRoot, + timeout: COMMAND_TIMEOUT_MS, + maxBuffer: 1024 * 1024, + env: yarnEnv, + }).then((result) => result.stdout.trim()); + } + + return yarnVersionPromise; +} + +async function getYarnInstallCmd( + projectDir: string, +): Promise<{ cmd: string; args: string[] }> { + const isBerry = await pathExists(path.join(projectDir, ".yarnrc.yml")); + return isBerry + ? { cmd: "yarn", args: ["install", "--immutable"] } + : { cmd: "yarn", args: ["install"] }; +} diff --git a/packages/secure-exec/tests/projects/bun-layout-pass/bun.lock b/packages/secure-exec/tests/projects/bun-package-manager-layout-pass/bun.lock similarity index 83% rename from packages/secure-exec/tests/projects/bun-layout-pass/bun.lock rename to packages/secure-exec/tests/projects/bun-package-manager-layout-pass/bun.lock index 230026f9..138c529b 100644 --- a/packages/secure-exec/tests/projects/bun-layout-pass/bun.lock +++ b/packages/secure-exec/tests/projects/bun-package-manager-layout-pass/bun.lock @@ -3,7 +3,7 @@ "configVersion": 1, "workspaces": { "": { - "name": "project-matrix-bun-layout-pass", + "name": "project-matrix-bun-package-manager-layout-pass", "dependencies": { "left-pad": "0.0.3", }, diff --git a/packages/secure-exec/tests/projects/bun-layout-pass/fixture.json b/packages/secure-exec/tests/projects/bun-package-manager-layout-pass/fixture.json similarity index 100% rename from packages/secure-exec/tests/projects/bun-layout-pass/fixture.json rename to packages/secure-exec/tests/projects/bun-package-manager-layout-pass/fixture.json diff --git a/packages/secure-exec/tests/projects/bun-layout-pass/package.json b/packages/secure-exec/tests/projects/bun-package-manager-layout-pass/package.json similarity index 59% rename from packages/secure-exec/tests/projects/bun-layout-pass/package.json rename to packages/secure-exec/tests/projects/bun-package-manager-layout-pass/package.json index 60d39f72..8f832f8b 100644 --- a/packages/secure-exec/tests/projects/bun-layout-pass/package.json +++ b/packages/secure-exec/tests/projects/bun-package-manager-layout-pass/package.json @@ -1,5 +1,5 @@ { - "name": "project-matrix-bun-layout-pass", + "name": "project-matrix-bun-package-manager-layout-pass", "private": true, "type": "commonjs", "dependencies": { diff --git a/packages/secure-exec/tests/projects/bun-layout-pass/src/index.js b/packages/secure-exec/tests/projects/bun-package-manager-layout-pass/src/index.js similarity index 100% rename from packages/secure-exec/tests/projects/bun-layout-pass/src/index.js rename to packages/secure-exec/tests/projects/bun-package-manager-layout-pass/src/index.js diff --git a/packages/secure-exec/tests/runtime-driver/node/index.test.ts b/packages/secure-exec/tests/runtime-driver/node/index.test.ts index 050737cc..b6cbd5e9 100644 --- a/packages/secure-exec/tests/runtime-driver/node/index.test.ts +++ b/packages/secure-exec/tests/runtime-driver/node/index.test.ts @@ -1899,6 +1899,307 @@ describe("NodeRuntime", () => { expect(stdout).toContain("MAX:1"); }); + it("http.Agent exposes Node-compatible naming and _http_agent aliasing", async () => { + const driver = createNodeDriver({ + filesystem: new NodeFileSystem(), + networkAdapter: createDefaultNetworkAdapter(), + permissions: allowFsNetworkEnv, + }); + const capture = createConsoleCapture(); + proc = createTestNodeRuntime({ + driver, + processConfig: { cwd: "/" }, + onStdio: capture.onStdio, + }); + + const result = await proc.exec( + ` + (() => { + const assert = require('node:assert'); + const http = require('http'); + const httpAgent = require('_http_agent'); + + assert.strictEqual(httpAgent.Agent, http.Agent); + assert.strictEqual(httpAgent.globalAgent, http.globalAgent); + + const agent = new http.Agent({ maxSockets: 2, maxTotalSockets: 3 }); + assert.strictEqual(agent.getName(), 'localhost::'); + assert.strictEqual(agent.getName({ port: 80, localAddress: '192.168.1.1' }), 'localhost:80:192.168.1.1'); + assert.strictEqual(agent.getName({ socketPath: '/tmp/test.sock' }), 'localhost:::/tmp/test.sock'); + assert.strictEqual(agent.getName({ family: 6 }), 'localhost:::6'); + assert.throws(() => new http.Agent({ maxTotalSockets: 'bad' }), (err) => err && err.code === 'ERR_INVALID_ARG_TYPE'); + assert.throws(() => new http.Agent({ maxTotalSockets: 0 }), (err) => err && err.code === 'ERR_OUT_OF_RANGE'); + console.log('AGENT_OK'); + })(); + `, + ); + + expect(result.code).toBe(0); + expect(capture.stdout()).toContain("AGENT_OK"); + }); + + it("http.Agent does not reuse a destroyed keepalive socket for queued requests", async () => { + const driver = createNodeDriver({ + filesystem: new NodeFileSystem(), + networkAdapter: createDefaultNetworkAdapter(), + permissions: allowFsNetworkEnv, + }); + const capture = createConsoleCapture(); + proc = createTestNodeRuntime({ + driver, + processConfig: { cwd: "/" }, + onStdio: capture.onStdio, + }); + + const result = await proc.exec( + ` + (async () => { + const assert = require('node:assert'); + const http = require('http'); + + const server = http.createServer((_req, res) => { + res.end('ok'); + }); + + await new Promise((resolve) => server.listen(0, '127.0.0.1', resolve)); + const agent = new http.Agent({ keepAlive: true, maxSockets: 1 }); + const options = { + host: '127.0.0.1', + port: server.address().port, + path: '/', + agent, + }; + + const req1 = http.get(options, (res) => { + res.resume(); + res.on('end', () => { + req1.socket.destroy(); + }); + }); + + const req2 = http.get(options, (res) => { + res.resume(); + res.on('end', async () => { + assert.notStrictEqual(req1.socket, req2.socket); + assert.strictEqual(req2.reusedSocket, false); + await new Promise((resolve, reject) => server.close((err) => err ? reject(err) : resolve())); + agent.destroy(); + console.log('DESTROY_OK'); + }); + }); + + await new Promise((resolve, reject) => { + req1.on('error', reject); + req2.on('error', reject); + req1.on('socket', (socket) => { + socket.once('close', resolve); + }); + }); + })(); + `, + ); + + expect(result.code).toBe(0); + expect(capture.stdout()).toContain("DESTROY_OK"); + }); + + it("http.Agent keeps aborted sockets visible during the response turn", async () => { + const driver = createNodeDriver({ + filesystem: new NodeFileSystem(), + networkAdapter: createDefaultNetworkAdapter(), + permissions: allowFsNetworkEnv, + }); + const capture = createConsoleCapture(); + proc = createTestNodeRuntime({ + driver, + processConfig: { cwd: "/" }, + onStdio: capture.onStdio, + }); + + const result = await proc.exec( + ` + (async () => { + const assert = require('node:assert'); + const http = require('http'); + + const agent = new http.Agent({ + keepAlive: true, + keepAliveMsecs: 1000, + maxSockets: 2, + maxFreeSockets: 2, + }); + + const server = http.createServer((_req, res) => { + res.end('hello world'); + }); + + await new Promise((resolve) => server.listen(0, '127.0.0.1', resolve)); + + await new Promise((resolve, reject) => { + let responses = 0; + for (let i = 0; i < 6; i += 1) { + const req = http.get({ + host: 'localhost', + port: server.address().port, + agent, + path: '/', + }, () => {}); + + req.on('response', () => { + req.abort(); + const key = Object.keys(agent.sockets)[0]; + const sockets = key ? agent.sockets[key] : undefined; + assert.ok(sockets); + assert.ok(sockets.length <= 2); + responses += 1; + if (responses === 6) { + server.close((err) => { + if (err) reject(err); + else resolve(undefined); + }); + } + }); + + req.on('error', reject); + } + }); + + agent.destroy(); + console.log('ABORT_BOOKKEEPING_OK'); + })(); + `, + ); + + expect(result.code).toBe(0); + expect(capture.stdout()).toContain("ABORT_BOOKKEEPING_OK"); + }); + + it("http fake sockets remove once listeners via the original callback", async () => { + const driver = createNodeDriver({ + filesystem: new NodeFileSystem(), + networkAdapter: createDefaultNetworkAdapter(), + permissions: allowFsNetworkEnv, + }); + const capture = createConsoleCapture(); + proc = createTestNodeRuntime({ + driver, + processConfig: { cwd: "/" }, + onStdio: capture.onStdio, + }); + + const result = await proc.exec( + ` + (async () => { + const http = require('http'); + + const server = http.createServer((_req, res) => { + res.end('ok'); + }); + + await new Promise((resolve) => server.listen(0, '127.0.0.1', resolve)); + const agent = new http.Agent({ keepAlive: true }); + + await new Promise((resolve, reject) => { + const req = http.get({ + host: 'localhost', + port: server.address().port, + path: '/', + agent, + }, (res) => { + res.resume(); + res.on('end', async () => { + const onClose = () => { + throw new Error('close listener should have been removed'); + }; + req.socket.once('close', onClose); + req.socket.off('close', onClose); + req.socket.destroy(); + await new Promise((resolve) => setTimeout(resolve, 0)); + await new Promise((resolve, reject) => server.close((err) => err ? reject(err) : resolve())); + agent.destroy(); + console.log('SOCKET_ONCE_OFF_OK'); + }); + }); + + req.on('error', reject); + req.on('close', resolve); + }); + })(); + `, + ); + + expect(result.code).toBe(0); + expect(capture.stdout()).toContain("SOCKET_ONCE_OFF_OK"); + }); + + it("http.Agent evicts a kept-alive socket after the server closes it on the next turn", async () => { + const driver = createNodeDriver({ + filesystem: new NodeFileSystem(), + networkAdapter: createDefaultNetworkAdapter(), + permissions: allowFsNetworkEnv, + }); + const capture = createConsoleCapture(); + proc = createTestNodeRuntime({ + driver, + processConfig: { cwd: "/" }, + onStdio: capture.onStdio, + }); + + const result = await proc.exec( + ` + (async () => { + const assert = require('node:assert'); + const http = require('http'); + + const agent = new http.Agent({ + keepAlive: true, + keepAliveMsecs: 1000, + maxSockets: 1, + maxFreeSockets: 1, + }); + + const server = http.createServer((_req, res) => { + const socket = res.connection; + setImmediate(() => socket.end()); + res.end('hello world'); + }); + + await new Promise((resolve) => server.listen(0, '127.0.0.1', resolve)); + const name = 'localhost:' + server.address().port + ':'; + + await new Promise((resolve, reject) => { + const req = http.get({ + host: 'localhost', + port: server.address().port, + path: '/', + agent, + }, (res) => { + res.resume(); + res.on('end', () => { + process.nextTick(() => { + assert.strictEqual(agent.freeSockets[name].length, 1); + setTimeout(async () => { + assert.strictEqual(agent.freeSockets[name], undefined); + assert.strictEqual(agent.totalSocketCount, 0); + await new Promise((resolve, reject) => server.close((err) => err ? reject(err) : resolve())); + agent.destroy(); + console.log('REMOTE_CLOSE_EVICT_OK'); + resolve(undefined); + }, 200); + }); + }); + }); + + req.on('error', reject); + }); + })(); + `, + ); + + expect(result.code).toBe(0); + expect(capture.stdout()).toContain("REMOTE_CLOSE_EVICT_OK"); + }); + // HTTP upgrade — 101 response fires upgrade event it("upgrade request fires upgrade event with response and socket", async () => { // Upgrade requires raw socket handling — use external server with SSRF exemption diff --git a/packages/secure-exec/tests/runtime-driver/node/standalone-dist-smoke.test.ts b/packages/secure-exec/tests/runtime-driver/node/standalone-dist-smoke.test.ts new file mode 100644 index 00000000..bb672e04 --- /dev/null +++ b/packages/secure-exec/tests/runtime-driver/node/standalone-dist-smoke.test.ts @@ -0,0 +1,165 @@ +import { execFile } from "node:child_process"; +import { dirname, resolve } from "node:path"; +import { fileURLToPath, pathToFileURL } from "node:url"; +import { promisify } from "node:util"; +import { describe, expect, it } from "vitest"; + +const execFileAsync = promisify(execFile); +const __dirname = dirname(fileURLToPath(import.meta.url)); +const WORKSPACE_ROOT = resolve(__dirname, "../../../../.."); +const DIST_INDEX_URL = pathToFileURL( + resolve(WORKSPACE_ROOT, "packages/secure-exec/dist/index.js"), +).href; + +async function runStandaloneScript(source: string): Promise { + const { stdout, stderr } = await execFileAsync( + "node", + ["--input-type=module", "-e", source], + { + cwd: WORKSPACE_ROOT, + timeout: 30_000, + }, + ); + + expect(stderr).toBe(""); + return stdout.trim(); +} + +describe("standalone dist bootstrap", () => { + it("supports runtime.exec and kernel.spawn outside vitest transforms", async () => { + const stdout = await runStandaloneScript(` + import { + NodeRuntime, + createInMemoryFileSystem, + createKernel, + createNodeDriver, + createNodeRuntime, + createNodeRuntimeDriverFactory, + } from ${JSON.stringify(DIST_INDEX_URL)}; + + const stdio = []; + const runtime = new NodeRuntime({ + onStdio: (event) => { + if (event.channel === "stdout") { + stdio.push(event.message); + } + }, + systemDriver: createNodeDriver(), + runtimeDriverFactory: createNodeRuntimeDriverFactory(), + }); + + const execResult = await runtime.exec( + 'console.log("hello"); const fs = require("node:fs"); console.log(typeof fs.readFileSync);', + ); + + const kernel = createKernel({ filesystem: createInMemoryFileSystem() }); + await kernel.mount(createNodeRuntime()); + + const kernelStdout = []; + const proc = kernel.spawn("node", ["-e", 'console.log(1)'], { + onStdout: (chunk) => kernelStdout.push(new TextDecoder().decode(chunk)), + }); + const kernelCode = await proc.wait(); + + await runtime.terminate(); + await kernel.dispose(); + + const result = JSON.stringify({ + execCode: execResult.code, + execErrorMessage: execResult.errorMessage, + stdio, + kernelCode, + kernelStdout: kernelStdout.join(""), + }); + + await new Promise((resolve, reject) => { + process.stdout.write(result, (error) => { + if (error) { + reject(error); + return; + } + resolve(); + }); + }); + process.exit(0); + `); + + const result = JSON.parse(stdout) as { + execCode: number; + execErrorMessage?: string; + stdio: string[]; + kernelCode: number; + kernelStdout: string; + }; + + expect(result.execCode).toBe(0); + expect(result.execErrorMessage).toBeUndefined(); + expect(result.stdio.join("")).toContain("hello"); + expect(result.stdio.join("")).toContain("function"); + expect(result.kernelCode).toBe(0); + expect(result.kernelStdout).toContain("1"); + }, 30_000); + + it("supports runtime.run exports outside vitest transforms", async () => { + const stdout = await runStandaloneScript(` + import { + NodeRuntime, + createNodeDriver, + createNodeRuntimeDriverFactory, + } from ${JSON.stringify(DIST_INDEX_URL)}; + + const runtime = new NodeRuntime({ + systemDriver: createNodeDriver(), + runtimeDriverFactory: createNodeRuntimeDriverFactory(), + }); + + const cjsObject = await runtime.run('module.exports = { message: "hello" };'); + const cjsScalar = await runtime.run("module.exports = 42;"); + const cjsNested = await runtime.run("module.exports = { a: 1, b: [2, 3] };"); + const esm = await runtime.run("export const answer = 42;", "/entry.mjs"); + + const result = JSON.stringify({ + cjsObject, + cjsScalar, + cjsNested, + esm, + }); + + await runtime.terminate(); + await new Promise((resolve, reject) => { + process.stdout.write(result, (error) => { + if (error) { + reject(error); + return; + } + resolve(); + }); + }); + process.exit(0); + `); + + const result = JSON.parse(stdout) as { + cjsObject: { code: number; exports: { message: string } }; + cjsScalar: { code: number; exports: number }; + cjsNested: { code: number; exports: { a: number; b: number[] } }; + esm: { code: number; exports: { answer: number } }; + }; + + expect(result.cjsObject).toEqual({ + code: 0, + exports: { message: "hello" }, + }); + expect(result.cjsScalar).toEqual({ + code: 0, + exports: 42, + }); + expect(result.cjsNested).toEqual({ + code: 0, + exports: { a: 1, b: [2, 3] }, + }); + expect(result.esm).toEqual({ + code: 0, + exports: { answer: 42 }, + }); + }, 30_000); +}); diff --git a/packages/secure-exec/tests/test-suite/node/crypto.ts b/packages/secure-exec/tests/test-suite/node/crypto.ts index 56827ecd..5079a361 100644 --- a/packages/secure-exec/tests/test-suite/node/crypto.ts +++ b/packages/secure-exec/tests/test-suite/node/crypto.ts @@ -1,3 +1,4 @@ +import { checkPrimeSync } from "node:crypto"; import { afterEach, expect, it } from "vitest"; import type { NodeSuiteContext } from "./runtime.js"; @@ -185,6 +186,39 @@ export function runNodeCryptoSuite(context: NodeSuiteContext): void { expect(exports.endType).toBe("function"); }); + it("Hash is a Transform stream and supports pipe() output", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + import crypto from 'node:crypto'; + import stream from 'node:stream'; + + export default await new Promise((resolve, reject) => { + const src = new stream.PassThrough(); + const hash = crypto.Hash('sha256'); + const chunks = []; + hash.setEncoding('hex'); + hash.on('data', (chunk) => chunks.push(chunk)); + hash.on('error', reject); + hash.on('finish', () => { + resolve({ + isTransform: hash instanceof stream.Transform, + digest: chunks.join(''), + cachedDigest: hash.digest('hex'), + }); + }); + src.pipe(hash); + src.end('hello'); + }); + `, "/entry.mjs"); + expect(result.code).toBe(0); + expect(result.errorMessage).toBeUndefined(); + expect((result.exports as any).default).toEqual({ + isTransform: true, + digest: "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824", + cachedDigest: "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824", + }); + }); + it("createHash handles binary Buffer input", async () => { const runtime = await context.createRuntime(); const result = await runtime.run(` @@ -386,19 +420,20 @@ export function runNodeCryptoSuite(context: NodeSuiteContext): void { it("pbkdf2 async variant calls callback with derived key", async () => { const runtime = await context.createRuntime(); const result = await runtime.run(` - const crypto = require('crypto'); - let cbResult; - crypto.pbkdf2('password', 'salt', 1, 32, 'sha256', (err, derived) => { - cbResult = { - err: err, - hex: derived.toString('hex'), - isBuffer: Buffer.isBuffer(derived), - }; + import crypto from 'node:crypto'; + + export default await new Promise((resolve) => { + crypto.pbkdf2('password', 'salt', 1, 32, 'sha256', (err, derived) => { + resolve({ + err: err, + hex: derived.toString('hex'), + isBuffer: Buffer.isBuffer(derived), + }); + }); }); - module.exports = cbResult; - `); + `, "/entry.mjs"); expect(result.code).toBe(0); - const exports = result.exports as any; + const exports = (result.exports as any).default; expect(exports.err).toBeNull(); expect(exports.hex).toBe("120fb6cffcf8b32c43e7225256c4f837a86548c92ccc35480805987cb70be17b"); expect(exports.isBuffer).toBe(true); @@ -679,6 +714,75 @@ export function runNodeCryptoSuite(context: NodeSuiteContext): void { expect((result.exports as any).encryptedLength).toBeGreaterThan(0); }); + it("Cipheriv and Decipheriv are Transform streams", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + import crypto from 'node:crypto'; + import stream from 'node:stream'; + + const key = Buffer.alloc(24, 1); + const iv = Buffer.alloc(8, 2); + export default await new Promise((resolve, reject) => { + const src = new stream.PassThrough(); + const cipher = crypto.Cipheriv('des-ede3-cbc', key, iv); + const decipher = crypto.Decipheriv('des-ede3-cbc', key, iv); + const encrypted = []; + const decrypted = []; + cipher.on('data', (chunk) => encrypted.push(chunk)); + cipher.on('error', reject); + decipher.on('data', (chunk) => decrypted.push(chunk)); + decipher.on('error', reject); + decipher.on('finish', () => { + resolve({ + cipherTransform: cipher instanceof stream.Transform, + decipherTransform: decipher instanceof stream.Transform, + encryptedLength: Buffer.concat(encrypted).length, + roundTrip: Buffer.concat(decrypted).toString('utf8'), + }); + }); + src.pipe(cipher).pipe(decipher); + src.end('stream me through crypto'); + }); + `, "/entry.mjs"); + expect(result.code).toBe(0); + expect(result.errorMessage).toBeUndefined(); + expect((result.exports as any).default).toEqual({ + cipherTransform: true, + decipherTransform: true, + encryptedLength: 32, + roundTrip: "stream me through crypto", + }); + }); + + it("createCipheriv supports CCM authTagLength options", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + const crypto = require('crypto'); + const key = crypto.randomBytes(24); + const nonce = crypto.randomBytes(12); + const aad = Buffer.from('secure-exec'); + const plaintext = Buffer.from('ccm payload'); + const cipher = crypto.createCipheriv('aes-192-ccm', key, nonce, { authTagLength: 16 }); + cipher.setAAD(aad, { plaintextLength: plaintext.length }); + const encrypted = Buffer.concat([cipher.update(plaintext), cipher.final()]); + const tag = cipher.getAuthTag(); + const decipher = crypto.createDecipheriv('aes-192-ccm', key, nonce, { authTagLength: 16 }); + decipher.setAuthTag(tag); + decipher.setAAD(aad, { plaintextLength: plaintext.length }); + const decrypted = Buffer.concat([decipher.update(encrypted), decipher.final()]); + module.exports = { + tagLength: tag.length, + plaintext: decrypted.toString('utf8'), + }; + `); + expect(result.code).toBe(0); + expect(result.errorMessage).toBeUndefined(); + expect(result.exports).toEqual({ + tagLength: 16, + plaintext: "ccm payload", + }); + }); + it("randomBytes rejects negative size", async () => { const runtime = await context.createRuntime(); const result = await runtime.run(` @@ -778,25 +882,263 @@ export function runNodeCryptoSuite(context: NodeSuiteContext): void { it("generateKeyPair async variant calls callback", async () => { const runtime = await context.createRuntime(); const result = await runtime.run(` - const crypto = require('crypto'); - let cbResult; - crypto.generateKeyPair('ec', { namedCurve: 'prime256v1' }, (err, pub, priv) => { - cbResult = { - err: err, - pubType: pub.type, - privType: priv.type, - }; + import crypto from 'node:crypto'; + + export default await new Promise((resolve) => { + crypto.generateKeyPair('ec', { namedCurve: 'prime256v1' }, (err, pub, priv) => { + resolve({ + err: err, + pubType: pub.type, + privType: priv.type, + }); + }); }); - module.exports = cbResult; - `); + `, "/entry.mjs"); expect(result.code).toBe(0); expect(result.errorMessage).toBeUndefined(); - const exports = result.exports as any; + const exports = (result.exports as any).default; expect(exports.err).toBeNull(); expect(exports.pubType).toBe("public"); expect(exports.privType).toBe("private"); }); + it("generateKeyPair async supports omitted options for ed25519", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + import crypto from 'node:crypto'; + + export default await new Promise((resolve) => { + crypto.generateKeyPair('ed25519', (err, pub, priv) => { + resolve({ + err: err ? { name: err.name, code: err.code, message: err.message } : null, + pubType: pub && pub.type, + pubKeyType: pub && pub.asymmetricKeyType, + privType: priv && priv.type, + privKeyType: priv && priv.asymmetricKeyType, + }); + }); + }); + `, "/entry.mjs"); + expect(result.code).toBe(0); + expect(result.errorMessage).toBeUndefined(); + expect((result.exports as any).default).toEqual({ + err: null, + pubType: "public", + pubKeyType: "ed25519", + privType: "private", + privKeyType: "ed25519", + }); + }); + + it("generateKeySync and generateKey return secret KeyObjects", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + import crypto from 'node:crypto'; + + export default await new Promise((resolve) => { + const syncKey = crypto.generateKeySync('aes', { length: 256 }); + crypto.generateKey('hmac', { length: 123 }, (err, asyncKey) => { + resolve({ + err: err ? { name: err.name, code: err.code, message: err.message } : null, + syncType: syncKey.type, + syncSize: syncKey.symmetricKeySize, + syncLength: syncKey.export().length, + asyncType: asyncKey && asyncKey.type, + asyncSize: asyncKey && asyncKey.symmetricKeySize, + asyncLength: asyncKey ? asyncKey.export().length : null, + }); + }); + }); + `, "/entry.mjs"); + expect(result.code).toBe(0); + expect(result.errorMessage).toBeUndefined(); + expect((result.exports as any).default).toEqual({ + err: null, + syncType: "secret", + syncSize: 32, + syncLength: 32, + asyncType: "secret", + asyncSize: 15, + asyncLength: 15, + }); + }); + + it("async crypto key APIs throw validation errors synchronously", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + const crypto = require('crypto'); + module.exports = (() => { + try { + crypto.generateKey(undefined, { length: 256 }, () => {}); + return { ok: true }; + } catch (err) { + return { + ok: false, + name: err.name, + code: err.code, + message: err.message, + }; + } + })(); + `); + expect(result.code).toBe(0); + expect(result.errorMessage).toBeUndefined(); + expect(result.exports).toEqual({ + ok: false, + name: "TypeError", + code: "ERR_INVALID_ARG_TYPE", + message: 'The "type" argument must be of type string. Received undefined', + }); + }); + + it("pbkdf2 validates callback and digest arguments with Node-style errors", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + const crypto = require('crypto'); + module.exports = (() => { + const errors = {}; + try { + crypto.pbkdf2('password', 'salt', 8, 8, 'sha256'); + } catch (err) { + errors.missingCallback = { + name: err.name, + code: err.code, + message: err.message, + }; + } + try { + crypto.pbkdf2('password', 'salt', 8, 8, () => {}); + } catch (err) { + errors.missingDigest = { + name: err.name, + code: err.code, + message: err.message, + }; + } + try { + crypto.pbkdf2Sync(1, 'salt', 8, 8, 'sha256'); + } catch (err) { + errors.invalidPassword = { + name: err.name, + code: err.code, + }; + } + return errors; + })(); + `); + expect(result.code).toBe(0); + expect(result.errorMessage).toBeUndefined(); + expect(result.exports).toEqual({ + missingCallback: { + name: "TypeError", + code: "ERR_INVALID_ARG_TYPE", + message: 'The "callback" argument must be of type function. Received undefined', + }, + missingDigest: { + name: "TypeError", + code: "ERR_INVALID_ARG_TYPE", + message: 'The "digest" argument must be of type string. Received undefined', + }, + invalidPassword: { + name: "TypeError", + code: "ERR_INVALID_ARG_TYPE", + }, + }); + }); + + it("generateKeyPair throws DH group validation errors synchronously", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + const crypto = require('crypto'); + module.exports = (() => { + try { + crypto.generateKeyPair('dh', { group: 'modp0' }, () => {}); + return { ok: true }; + } catch (err) { + return { + ok: false, + name: err.name, + code: err.code, + message: err.message, + }; + } + })(); + `); + expect(result.code).toBe(0); + expect(result.errorMessage).toBeUndefined(); + expect(result.exports).toEqual({ + ok: false, + name: "Error", + code: "ERR_CRYPTO_UNKNOWN_DH_GROUP", + message: "Unknown DH group", + }); + }); + + it("generatePrimeSync and generatePrime return valid primes", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + import crypto from 'node:crypto'; + + export default await new Promise((resolve) => { + const syncPrime = crypto.generatePrimeSync(32); + const bigintPrime = crypto.generatePrimeSync(3, { bigint: true }); + crypto.generatePrime(32, (err, asyncPrime) => { + resolve({ + err: err ? { name: err.name, code: err.code, message: err.message } : null, + syncPrime: Buffer.from(syncPrime).toString('base64'), + asyncPrime: asyncPrime ? Buffer.from(asyncPrime).toString('base64') : null, + bigintPrime: bigintPrime.toString(), + }); + }); + }); + `, "/entry.mjs"); + expect(result.code).toBe(0); + expect(result.errorMessage).toBeUndefined(); + const exports = (result.exports as any).default; + expect(exports.err).toBeNull(); + expect(checkPrimeSync(Buffer.from(exports.syncPrime, "base64"))).toBe(true); + expect(checkPrimeSync(Buffer.from(exports.asyncPrime, "base64"))).toBe(true); + expect(exports.bigintPrime).toBe("7"); + }); + + it("generateKeyPairSync preserves host crypto error codes", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + const crypto = require('crypto'); + module.exports = (() => { + try { + crypto.generateKeyPairSync('ec', { + namedCurve: 'P-256', + paramEncoding: 'otherEncoding', + publicKeyEncoding: { type: 'spki', format: 'pem' }, + privateKeyEncoding: { + type: 'pkcs8', + format: 'pem', + cipher: 'aes-128-cbc', + passphrase: 'top secret', + }, + }); + return { ok: true }; + } catch (err) { + return { + ok: false, + name: err.name, + code: err.code, + message: err.message, + }; + } + })(); + `); + expect(result.code).toBe(0); + expect(result.errorMessage).toBeUndefined(); + expect(result.exports).toEqual({ + ok: false, + name: "TypeError", + code: "ERR_INVALID_ARG_VALUE", + message: "The property 'options.paramEncoding' is invalid. Received 'otherEncoding'", + }); + }); + it("createPublicKey and createPrivateKey from PEM strings", async () => { const runtime = await context.createRuntime(); const result = await runtime.run(` @@ -825,6 +1167,87 @@ export function runNodeCryptoSuite(context: NodeSuiteContext): void { expect(exports.valid).toBe(true); }); + it("createPrivateKey preserves metadata for encrypted PEM and accepts passphrase buffers", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + const crypto = require('crypto'); + const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', { + modulusLength: 1024, + publicKeyEncoding: { type: 'spki', format: 'pem' }, + privateKeyEncoding: { + type: 'pkcs8', + format: 'pem', + cipher: 'aes-256-cbc', + passphrase: '', + }, + }); + const imported = crypto.createPrivateKey({ + key: privateKey, + passphrase: Buffer.alloc(0), + }); + const data = Buffer.from('metadata-roundtrip'); + const signature = crypto.sign('sha256', data, { + key: privateKey, + passphrase: '', + }); + module.exports = { + keyType: imported.type, + asymmetricKeyType: imported.asymmetricKeyType, + valid: crypto.verify('sha256', data, publicKey, signature), + }; + `); + expect(result.code).toBe(0); + expect(result.errorMessage).toBeUndefined(); + expect(result.exports).toEqual({ + keyType: "private", + asymmetricKeyType: "rsa", + valid: true, + }); + }); + + it("publicEncrypt/privateDecrypt accept DER options bags and sandbox KeyObjects", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + const crypto = require('crypto'); + const pairWithKeyObject = crypto.generateKeyPairSync('rsa', { + modulusLength: 1024, + privateKeyEncoding: { type: 'pkcs1', format: 'pem' }, + }); + const derPair = crypto.generateKeyPairSync('rsa', { + modulusLength: 1024, + publicKeyEncoding: { type: 'pkcs1', format: 'der' }, + privateKeyEncoding: { + type: 'pkcs1', + format: 'pem', + cipher: 'aes-256-cbc', + passphrase: 'secret', + }, + }); + const plaintext = Buffer.from('encrypt-roundtrip'); + const encryptedWithKeyObject = crypto.publicEncrypt(pairWithKeyObject.publicKey, plaintext); + const decryptedWithKeyObject = crypto.privateDecrypt(pairWithKeyObject.privateKey, encryptedWithKeyObject); + const encryptedWithDer = crypto.publicEncrypt({ + key: derPair.publicKey, + type: 'pkcs1', + format: 'der', + }, plaintext); + const decryptedWithDer = crypto.privateDecrypt({ + key: derPair.privateKey, + passphrase: 'secret', + }, encryptedWithDer); + module.exports = { + keyObjectRoundTrip: decryptedWithKeyObject.toString(), + derRoundTrip: decryptedWithDer.toString(), + }; + `); + expect(result.code).toBe(0); + expect(result.errorMessage).toBeUndefined(); + expect(result.exports).toEqual({ + keyObjectRoundTrip: "encrypt-roundtrip", + derRoundTrip: "encrypt-roundtrip", + }); + }); + it("KeyObject.export returns PEM by default", async () => { const runtime = await context.createRuntime(); const result = await runtime.run(` @@ -927,11 +1350,55 @@ export function runNodeCryptoSuite(context: NodeSuiteContext): void { expect(result.code).toBe(0); expect(result.errorMessage).toBeUndefined(); const exports = result.exports as any; + expect(exports.hex).toBe("f43738c837258ba3e8b52ee2115a22014ef8a2d4b24c828437462218c17713d0"); expect(exports.length).toBe(64); }); // crypto.subtle (Web Crypto API) tests + it("globalThis.crypto matches require('crypto').webcrypto", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + const crypto = require('crypto'); + module.exports = { + sameObject: globalThis.crypto === crypto.webcrypto, + sameSubtle: globalThis.crypto.subtle === crypto.webcrypto.subtle, + cryptoCtor: globalThis.crypto.constructor.name, + subtleCtor: globalThis.crypto.subtle.constructor.name, + }; + `); + expect(result.code).toBe(0); + expect(result.errorMessage).toBeUndefined(); + expect(result.exports).toEqual({ + sameObject: true, + sameSubtle: true, + cryptoCtor: "SandboxCrypto", + subtleCtor: "SandboxSubtleCrypto", + }); + }); + + it("globalThis.crypto.getRandomValues validates detached receivers", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + const { getRandomValues } = globalThis.crypto; + try { + getRandomValues(new Uint8Array(4)); + module.exports = { code: null }; + } catch (error) { + module.exports = { + name: error.name, + code: error.code, + }; + } + `); + expect(result.code).toBe(0); + expect(result.errorMessage).toBeUndefined(); + expect(result.exports).toEqual({ + name: "TypeError", + code: "ERR_INVALID_THIS", + }); + }); + it("subtle.digest('SHA-256', data) matches createHash output", async () => { const runtime = await context.createRuntime(); const result = await runtime.run(` @@ -1134,6 +1601,107 @@ export function runNodeCryptoSuite(context: NodeSuiteContext): void { expect(exports.privType).toBe("private"); }); + it("subtle.sign/verify RSA-PSS roundtrip", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + (async () => { + const crypto = require('crypto'); + const keyPair = await crypto.subtle.generateKey( + { + name: 'RSA-PSS', + modulusLength: 2048, + publicExponent: new Uint8Array([1, 0, 1]), + hash: 'SHA-256', + }, + true, + ['sign', 'verify'] + ); + const data = new TextEncoder().encode('RSA-PSS signing test'); + const signature = await crypto.subtle.sign( + { name: 'RSA-PSS', saltLength: 32 }, keyPair.privateKey, data + ); + const valid = await crypto.subtle.verify( + { name: 'RSA-PSS', saltLength: 32 }, keyPair.publicKey, signature, data + ); + module.exports = { valid, sigLen: signature.byteLength }; + })(); + `); + expect(result.code).toBe(0); + expect((result.exports as any).valid).toBe(true); + expect((result.exports as any).sigLen).toBe(256); + }); + + it("subtle.sign/verify ECDSA roundtrip", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + (async () => { + const crypto = require('crypto'); + const keyPair = await crypto.subtle.generateKey( + { name: 'ECDSA', namedCurve: 'P-256' }, + true, + ['sign', 'verify'] + ); + const data = new TextEncoder().encode('ECDSA signing test'); + const signature = await crypto.subtle.sign( + { name: 'ECDSA', hash: 'SHA-256' }, keyPair.privateKey, data + ); + const valid = await crypto.subtle.verify( + { name: 'ECDSA', hash: 'SHA-256' }, keyPair.publicKey, signature, data + ); + module.exports = { valid, sigLen: signature.byteLength > 0 }; + })(); + `); + expect(result.code).toBe(0); + expect((result.exports as any).valid).toBe(true); + expect((result.exports as any).sigLen).toBe(true); + }); + + it("subtle.sign/verify Ed25519 roundtrip", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + (async () => { + const crypto = require('crypto'); + const keyPair = await crypto.subtle.generateKey( + { name: 'Ed25519' }, + true, + ['sign', 'verify'] + ); + const data = new TextEncoder().encode('Ed25519 signing test'); + const signature = await crypto.subtle.sign( + { name: 'Ed25519' }, keyPair.privateKey, data + ); + const valid = await crypto.subtle.verify( + { name: 'Ed25519' }, keyPair.publicKey, signature, data + ); + module.exports = { valid, sigLen: signature.byteLength }; + })(); + `); + expect(result.code).toBe(0); + expect((result.exports as any).valid).toBe(true); + expect((result.exports as any).sigLen).toBe(64); + }); + + it("KeyObject.toCryptoKey returns the global CryptoKey type", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + (() => { + const { createSecretKey, randomBytes, KeyObject } = require('crypto'); + const keyObject = createSecretKey(randomBytes(16)); + const cryptoKey = keyObject.toCryptoKey('AES-GCM', true, ['encrypt', 'decrypt']); + const roundTrip = KeyObject.from(cryptoKey); + module.exports = { + instanceofGlobal: cryptoKey instanceof CryptoKey, + type: cryptoKey.type, + match: keyObject.equals(roundTrip), + }; + })(); + `); + expect(result.code).toBe(0); + expect((result.exports as any).instanceofGlobal).toBe(true); + expect((result.exports as any).type).toBe("secret"); + expect((result.exports as any).match).toBe(true); + }); + it("subtle.importKey raw + exportKey raw roundtrip", async () => { const runtime = await context.createRuntime(); const result = await runtime.run(` @@ -1345,4 +1913,165 @@ export function runNodeCryptoSuite(context: NodeSuiteContext): void { expect(exports.match).toBe(true); expect(exports.keyType).toBe("secret"); }); + + it("subtle.deriveBits ECDH matches on both sides", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + (async () => { + const crypto = require('crypto'); + const [alice, bob] = await Promise.all([ + crypto.subtle.generateKey({ name: 'ECDH', namedCurve: 'P-256' }, true, ['deriveBits', 'deriveKey']), + crypto.subtle.generateKey({ name: 'ECDH', namedCurve: 'P-256' }, true, ['deriveBits', 'deriveKey']), + ]); + const [secret1, secret2] = await Promise.all([ + crypto.subtle.deriveBits({ name: 'ECDH', public: bob.publicKey }, alice.privateKey, 128), + crypto.subtle.deriveBits({ name: 'ECDH', public: alice.publicKey }, bob.privateKey, 128), + ]); + module.exports = { + match: Buffer.from(secret1).equals(Buffer.from(secret2)), + len: secret1.byteLength, + }; + })(); + `); + expect(result.code).toBe(0); + expect((result.exports as any).match).toBe(true); + expect((result.exports as any).len).toBe(16); + }); + + it("subtle.deriveKey ECDH produces matching HMAC keys", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + (async () => { + const crypto = require('crypto'); + const [alice, bob] = await Promise.all([ + crypto.subtle.generateKey({ name: 'ECDH', namedCurve: 'P-256' }, true, ['deriveKey']), + crypto.subtle.generateKey({ name: 'ECDH', namedCurve: 'P-256' }, true, ['deriveKey']), + ]); + const [key1, key2] = await Promise.all([ + crypto.subtle.deriveKey( + { name: 'ECDH', public: bob.publicKey }, + alice.privateKey, + { name: 'HMAC', hash: 'SHA-256', length: 256 }, + true, + ['sign', 'verify'] + ), + crypto.subtle.deriveKey( + { name: 'ECDH', public: alice.publicKey }, + bob.privateKey, + { name: 'HMAC', hash: 'SHA-256', length: 256 }, + true, + ['sign', 'verify'] + ), + ]); + const [raw1, raw2] = await Promise.all([ + crypto.subtle.exportKey('raw', key1), + crypto.subtle.exportKey('raw', key2), + ]); + module.exports = { + match: Buffer.from(raw1).equals(Buffer.from(raw2)), + type: key1.type, + }; + })(); + `); + expect(result.code).toBe(0); + expect((result.exports as any).match).toBe(true); + expect((result.exports as any).type).toBe("secret"); + }); + + it("subtle.wrapKey/unwrapKey AES-KW roundtrip", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + (async () => { + const crypto = require('crypto'); + const wrappingKey = await crypto.subtle.generateKey( + { name: 'AES-KW', length: 256 }, + true, + ['wrapKey', 'unwrapKey'] + ); + const keyToWrap = await crypto.subtle.generateKey( + { name: 'AES-GCM', length: 256 }, + true, + ['encrypt', 'decrypt'] + ); + const wrapped = await crypto.subtle.wrapKey( + 'raw', + keyToWrap, + wrappingKey, + { name: 'AES-KW' } + ); + const unwrapped = await crypto.subtle.unwrapKey( + 'raw', + wrapped, + wrappingKey, + { name: 'AES-KW' }, + { name: 'AES-GCM', length: 256 }, + true, + ['encrypt', 'decrypt'] + ); + const [raw1, raw2] = await Promise.all([ + crypto.subtle.exportKey('raw', keyToWrap), + crypto.subtle.exportKey('raw', unwrapped), + ]); + module.exports = { + match: Buffer.from(raw1).equals(Buffer.from(raw2)), + wrappedLen: wrapped.byteLength > 0, + }; + })(); + `); + expect(result.code).toBe(0); + expect((result.exports as any).match).toBe(true); + expect((result.exports as any).wrappedLen).toBe(true); + }); + + it("Diffie-Hellman group exchange preserves Buffer and encoded secret outputs", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + const crypto = require('crypto'); + const alice = crypto.createDiffieHellmanGroup('modp5'); + const bob = crypto.createDiffieHellmanGroup('modp5'); + const aliceKey = alice.generateKeys(); + const bobKeyHex = bob.generateKeys('hex'); + const aliceSecret = alice.computeSecret(bobKeyHex, 'hex', 'base64'); + const bobSecret = bob.computeSecret(aliceKey, 'buffer', 'base64'); + + module.exports = { + match: aliceSecret === bobSecret, + verifyError: alice.verifyError, + publicKeyIsBuffer: Buffer.isBuffer(alice.getPublicKey()), + privateKeyIsBuffer: Buffer.isBuffer(alice.getPrivateKey()), + }; + `); + expect(result.code).toBe(0); + const exports = result.exports as any; + expect(exports.match).toBe(true); + expect(exports.verifyError).toBe(0); + expect(exports.publicKeyIsBuffer).toBe(true); + expect(exports.privateKeyIsBuffer).toBe(true); + }); + + it("stateless crypto.diffieHellman matches x25519 shared secret", async () => { + const runtime = await context.createRuntime(); + const result = await runtime.run(` + const crypto = require('crypto'); + const alice = crypto.generateKeyPairSync('x25519'); + const bob = crypto.generateKeyPairSync('x25519'); + const aliceSecret = crypto.diffieHellman({ + privateKey: alice.privateKey, + publicKey: bob.publicKey, + }).toString('hex'); + const bobSecret = crypto.diffieHellman({ + privateKey: bob.privateKey, + publicKey: alice.publicKey, + }).toString('hex'); + + module.exports = { + match: aliceSecret === bobSecret, + length: aliceSecret.length, + }; + `); + expect(result.code).toBe(0); + const exports = result.exports as any; + expect(exports.match).toBe(true); + expect(exports.length).toBeGreaterThan(0); + }); } diff --git a/packages/typescript/src/index.ts b/packages/typescript/src/index.ts index c7431793..07f65478 100644 --- a/packages/typescript/src/index.ts +++ b/packages/typescript/src/index.ts @@ -83,7 +83,7 @@ type CompilerTools = { const DEFAULT_COMPILER_RUNTIME_MEMORY_LIMIT = 512; const COMPILER_RUNTIME_FILE_PATH = "/root/__secure_exec_typescript_compiler__.js"; -const DEFAULT_COMPILER_SPECIFIER = "/root/node_modules/typescript/lib/typescript.js"; +const DEFAULT_COMPILER_SPECIFIER = "typescript"; export function createTypeScriptTools( options: TypeScriptToolsOptions, diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml index e3d63f65..853d0e8b 100644 --- a/pnpm-lock.yaml +++ b/pnpm-lock.yaml @@ -51,6 +51,9 @@ importers: specifier: ^3.24.0 version: 3.25.76 devDependencies: + '@secure-exec/docs-gen': + specifier: workspace:* + version: link:../../scripts/docs-gen '@types/node': specifier: ^22.10.2 version: 22.19.3 @@ -104,6 +107,9 @@ importers: specifier: ^3.24.0 version: 3.25.76 devDependencies: + '@secure-exec/docs-gen': + specifier: workspace:* + version: link:../../scripts/docs-gen '@types/node': specifier: ^22.10.2 version: 22.19.3 @@ -133,6 +139,9 @@ importers: specifier: workspace:* version: link:../../packages/secure-exec devDependencies: + '@secure-exec/docs-gen': + specifier: workspace:* + version: link:../../scripts/docs-gen '@types/node': specifier: ^22.10.2 version: 22.19.3 @@ -152,6 +161,9 @@ importers: specifier: workspace:* version: link:../../packages/secure-exec devDependencies: + '@secure-exec/docs-gen': + specifier: workspace:* + version: link:../../scripts/docs-gen '@types/node': specifier: ^22.10.2 version: 22.19.3 @@ -194,40 +206,46 @@ importers: specifier: ^4.19.2 version: 4.21.0 - examples/plugin-system: + examples/kitchen-sink: dependencies: + '@hono/node-server': + specifier: ^1.13.8 + version: 1.19.9(hono@4.12.2) + '@secure-exec/typescript': + specifier: workspace:* + version: link:../../packages/typescript + hono: + specifier: ^4.7.2 + version: 4.12.2 secure-exec: specifier: workspace:* version: link:../../packages/secure-exec devDependencies: + '@secure-exec/docs-gen': + specifier: workspace:* + version: link:../../scripts/docs-gen '@types/node': specifier: ^22.10.2 version: 22.19.3 - tsx: - specifier: ^4.19.2 - version: 4.21.0 typescript: specifier: ^5.7.2 version: 5.9.3 - examples/quickstart: + examples/plugin-system: dependencies: - '@hono/node-server': - specifier: ^1.13.8 - version: 1.19.9(hono@4.12.2) - '@secure-exec/typescript': - specifier: workspace:* - version: link:../../packages/typescript - hono: - specifier: ^4.7.2 - version: 4.12.2 secure-exec: specifier: workspace:* version: link:../../packages/secure-exec devDependencies: + '@secure-exec/docs-gen': + specifier: workspace:* + version: link:../../scripts/docs-gen '@types/node': specifier: ^22.10.2 version: 22.19.3 + tsx: + specifier: ^4.19.2 + version: 4.21.0 typescript: specifier: ^5.7.2 version: 5.9.3 @@ -522,6 +540,19 @@ importers: specifier: ^6.4.1 version: 6.4.1(@types/node@22.19.3)(tsx@4.21.0) + scripts/docs-gen: + dependencies: + tsx: + specifier: ^4.19.2 + version: 4.21.0 + devDependencies: + '@types/node': + specifier: ^22.10.2 + version: 22.19.3 + typescript: + specifier: ^5.7.2 + version: 5.9.3 + packages: /@ai-sdk/anthropic@3.0.63(zod@3.25.76): diff --git a/pnpm-workspace.yaml b/pnpm-workspace.yaml index 48b8b799..8bf7cf20 100644 --- a/pnpm-workspace.yaml +++ b/pnpm-workspace.yaml @@ -2,3 +2,4 @@ packages: - "packages/*" - "examples/*" - "examples/*/*" + - "scripts/*" diff --git a/scripts/docs-gen/package.json b/scripts/docs-gen/package.json new file mode 100644 index 00000000..6eca883c --- /dev/null +++ b/scripts/docs-gen/package.json @@ -0,0 +1,18 @@ +{ + "name": "@secure-exec/docs-gen", + "private": true, + "type": "module", + "bin": { + "docs-gen": "./src/index.ts" + }, + "scripts": { + "check-types": "tsc --noEmit -p tsconfig.json" + }, + "dependencies": { + "tsx": "^4.19.2" + }, + "devDependencies": { + "@types/node": "^22.10.2", + "typescript": "^5.7.2" + } +} diff --git a/scripts/docs-gen/src/index.ts b/scripts/docs-gen/src/index.ts new file mode 100755 index 00000000..abd767fb --- /dev/null +++ b/scripts/docs-gen/src/index.ts @@ -0,0 +1,268 @@ +#!/usr/bin/env -S tsx + +import { readFile } from "node:fs/promises"; +import path from "node:path"; + +type ImportReplacement = { + from: string; + to: string; +}; + +type TitledBlocksConfig = { + kind: "titledBlocks"; + docsPath: string; + entries: Array<{ + title: string; + examplePath: string; + }>; + importReplacements?: ImportReplacement[]; +}; + +type FirstTsBlockConfig = { + kind: "firstTsBlock"; + docsPath: string; + examplePath: string; + importReplacements?: ImportReplacement[]; +}; + +type MultiFirstTsBlockConfig = { + kind: "multiFirstTsBlock"; + entries: Array<{ + docsPath: string; + examplePath: string; + }>; + importReplacements?: ImportReplacement[]; +}; + +type NamedTsBlockConfig = { + kind: "namedTsBlock"; + docsPath: string; + title: string; + examplePath: string; + importReplacements?: ImportReplacement[]; +}; + +type ContainsConfig = { + kind: "contains"; + docsPath: string; + required: string[]; +}; + +type VerifyConfig = + | TitledBlocksConfig + | FirstTsBlockConfig + | MultiFirstTsBlockConfig + | NamedTsBlockConfig + | ContainsConfig; + +function parseArgs(argv: string[]) { + const [command, ...rest] = argv; + if (command !== "verify") { + throw new Error('Usage: docs-gen verify --config '); + } + + const configIndex = rest.indexOf("--config"); + if (configIndex === -1 || !rest[configIndex + 1]) { + throw new Error('Missing required flag: --config '); + } + + return { + configPath: rest[configIndex + 1], + }; +} + +function normalizeTitle(title: string) { + return title.trim().replace(/^"|"$/g, ""); +} + +function normalizeCode(source: string) { + const normalized = source.replace(/\r\n/g, "\n").replace(/^\n+|\n+$/g, ""); + const lines = normalized.split("\n"); + const nonEmptyLines = lines.filter((line) => line.trim().length > 0); + const minIndent = nonEmptyLines.reduce((indent, line) => { + const lineIndent = line.match(/^ */)?.[0].length ?? 0; + return Math.min(indent, lineIndent); + }, Number.POSITIVE_INFINITY); + + if (!Number.isFinite(minIndent) || minIndent === 0) { + return normalized; + } + + return lines.map((line) => line.slice(minIndent)).join("\n"); +} + +function normalizeImports(source: string, replacements: ImportReplacement[] = []) { + return replacements.reduce( + (result, replacement) => result.replaceAll(replacement.from, replacement.to), + source, + ); +} + +function getFirstTsBlock(source: string) { + const match = source.match(/^\s*```ts(?: [^\n]+)?\n([\s\S]*?)^\s*```/m); + if (!match?.[1]) { + return null; + } + + return normalizeCode(match[1]); +} + +function getNamedTsBlock(source: string, expectedTitle: string) { + const blockPattern = /^\s*```ts(?:\s+([^\n]+))?\n([\s\S]*?)^\s*```/gm; + for (const match of source.matchAll(blockPattern)) { + const rawTitle = match[1]; + if (!rawTitle) continue; + if (normalizeTitle(rawTitle) !== expectedTitle) continue; + return normalizeCode(match[2] ?? ""); + } + return null; +} + +async function verifyTitledBlocks(configDir: string, config: TitledBlocksConfig) { + const docsSource = await readFile(path.resolve(configDir, config.docsPath), "utf8"); + const blockPattern = /^\s*```ts(?:\s+([^\n]+))?\n([\s\S]*?)^\s*```/gm; + const docBlocks = new Map(); + + for (const match of docsSource.matchAll(blockPattern)) { + const rawTitle = match[1]; + if (!rawTitle) continue; + + const title = normalizeTitle(rawTitle); + if (!config.entries.some((entry) => entry.title === title)) { + continue; + } + + docBlocks.set(title, normalizeCode(match[2] ?? "")); + } + + const mismatches: string[] = []; + + for (const entry of config.entries) { + const examplePath = path.resolve(configDir, entry.examplePath); + const exampleSource = await readFile(examplePath, "utf8"); + const normalizedExample = normalizeCode( + normalizeImports(exampleSource, config.importReplacements), + ); + const docSource = docBlocks.get(entry.title); + + if (!docSource) { + mismatches.push(`Missing docs snippet for ${entry.title}`); + continue; + } + + if (docSource !== normalizedExample) { + mismatches.push(`Snippet mismatch for ${entry.title}`); + } + } + + if (mismatches.length > 0) { + throw new Error(mismatches.join("\n")); + } +} + +async function verifyFirstTsBlock(configDir: string, config: FirstTsBlockConfig) { + const docsSource = await readFile(path.resolve(configDir, config.docsPath), "utf8"); + const exampleSource = await readFile(path.resolve(configDir, config.examplePath), "utf8"); + const docBlock = getFirstTsBlock(docsSource); + const normalizedExample = normalizeCode( + normalizeImports(exampleSource, config.importReplacements), + ); + + if (!docBlock) { + throw new Error(`Missing TypeScript example in ${config.docsPath}`); + } + + if (docBlock !== normalizedExample) { + throw new Error(`Snippet mismatch: ${config.docsPath}`); + } +} + +async function verifyMultiFirstTsBlock( + configDir: string, + config: MultiFirstTsBlockConfig, +) { + const mismatches: string[] = []; + + for (const entry of config.entries) { + const docsSource = await readFile(path.resolve(configDir, entry.docsPath), "utf8"); + const exampleSource = await readFile( + path.resolve(configDir, entry.examplePath), + "utf8", + ); + const docBlock = getFirstTsBlock(docsSource); + const normalizedExample = normalizeCode( + normalizeImports(exampleSource, config.importReplacements), + ); + + if (!docBlock) { + mismatches.push(`Missing TypeScript example in ${entry.docsPath}`); + continue; + } + + if (docBlock !== normalizedExample) { + mismatches.push(`Snippet mismatch: ${entry.docsPath}`); + } + } + + if (mismatches.length > 0) { + throw new Error(mismatches.join("\n")); + } +} + +async function verifyNamedTsBlock(configDir: string, config: NamedTsBlockConfig) { + const docsSource = await readFile(path.resolve(configDir, config.docsPath), "utf8"); + const exampleSource = await readFile(path.resolve(configDir, config.examplePath), "utf8"); + const docBlock = getNamedTsBlock(docsSource, config.title); + const normalizedExample = normalizeCode( + normalizeImports(exampleSource, config.importReplacements), + ); + + if (!docBlock) { + throw new Error(`Missing docs snippet for ${config.title}`); + } + + if (docBlock !== normalizedExample) { + throw new Error(`Snippet mismatch for ${config.title}`); + } +} + +async function verifyContains(configDir: string, config: ContainsConfig) { + const docsSource = await readFile(path.resolve(configDir, config.docsPath), "utf8"); + const missing = config.required.filter((value) => !docsSource.includes(value)); + if (missing.length > 0) { + throw new Error( + `${config.docsPath} missing required content:\n${missing.join("\n")}`, + ); + } +} + +async function main() { + const { configPath } = parseArgs(process.argv.slice(2)); + const resolvedConfigPath = path.resolve(process.cwd(), configPath); + const configDir = path.dirname(resolvedConfigPath); + const config = JSON.parse(await readFile(resolvedConfigPath, "utf8")) as VerifyConfig; + + switch (config.kind) { + case "titledBlocks": + await verifyTitledBlocks(configDir, config); + break; + case "firstTsBlock": + await verifyFirstTsBlock(configDir, config); + break; + case "multiFirstTsBlock": + await verifyMultiFirstTsBlock(configDir, config); + break; + case "namedTsBlock": + await verifyNamedTsBlock(configDir, config); + break; + case "contains": + await verifyContains(configDir, config); + break; + default: + throw new Error("Unsupported docs-gen config"); + } + + console.log(`Docs verified: ${path.relative(process.cwd(), resolvedConfigPath)}`); +} + +await main(); diff --git a/scripts/docs-gen/tsconfig.json b/scripts/docs-gen/tsconfig.json new file mode 100644 index 00000000..919c3e2e --- /dev/null +++ b/scripts/docs-gen/tsconfig.json @@ -0,0 +1,12 @@ +{ + "compilerOptions": { + "target": "ES2022", + "module": "NodeNext", + "moduleResolution": "NodeNext", + "strict": true, + "esModuleInterop": true, + "skipLibCheck": true, + "noEmit": true + }, + "include": ["src/**/*.ts"] +} diff --git a/scripts/ralph/.last-branch b/scripts/ralph/.last-branch index d09eb0da..8aaea17a 100644 --- a/scripts/ralph/.last-branch +++ b/scripts/ralph/.last-branch @@ -1 +1 @@ -ralph/kernel-consolidation +ralph/nodejs-conformance-fixes diff --git a/scripts/ralph/archive/2026-03-25-kernel-consolidation/prd.json b/scripts/ralph/archive/2026-03-25-kernel-consolidation/prd.json new file mode 100644 index 00000000..f027053f --- /dev/null +++ b/scripts/ralph/archive/2026-03-25-kernel-consolidation/prd.json @@ -0,0 +1,1007 @@ +{ + "project": "SecureExec", + "branchName": "ralph/kernel-consolidation", + "description": "Kernel Consolidation - Move networking, resource management, and runtime-specific subsystems into the shared kernel so Node.js and WasmVM share the same socket table, port registry, and network stack.", + "userStories": [ + { + "id": "US-001", + "title": "Implement WaitHandle and WaitQueue primitives (K-10)", + "description": "As a developer, I need unified blocking I/O primitives so that all kernel subsystems (pipes, sockets, flock, poll) share the same wait/wake mechanism.", + "acceptanceCriteria": [ + "Add packages/core/src/kernel/wait.ts with WaitHandle and WaitQueue classes", + "WaitHandle.wait(timeoutMs?) returns a Promise that resolves when woken or times out", + "WaitHandle.wake() resolves exactly one waiter", + "WaitQueue.wakeAll() resolves all enqueued waiters", + "WaitQueue.wakeOne() resolves exactly one waiter (FIFO order)", + "Add packages/core/test/kernel/wait-queue.test.ts with tests: wake resolves wait, timeout fires, wakeOne wakes one, wakeAll wakes all", + "Tests pass", + "Typecheck passes" + ], + "priority": 1, + "passes": true, + "notes": "Foundation for all blocking I/O. See spec section 2.4. Keep it simple — just Promise-based wait/wake, no Atomics yet." + }, + { + "id": "US-002", + "title": "Implement InodeTable with refcounting and deferred unlink (K-11)", + "description": "As a developer, I need an inode layer so the VFS supports hard links, deferred deletion, and correct stat() metadata.", + "acceptanceCriteria": [ + "Add packages/core/src/kernel/inode-table.ts with Inode and InodeTable classes", + "InodeTable.allocate(mode, uid, gid) returns Inode with unique ino number", + "incrementLinks/decrementLinks track hard link count (nlink)", + "incrementOpenRefs/decrementOpenRefs track open FD count", + "shouldDelete(ino) returns true when nlink=0 AND openRefCount=0", + "Deferred deletion: unlink with open FDs keeps data until last FD closes", + "Add packages/core/test/kernel/inode-table.test.ts with tests: allocate unique ino, hard link increments nlink, unlink-with-open-FD persists, close-last-FD deletes", + "Tests pass", + "Typecheck passes" + ], + "priority": 2, + "passes": true, + "notes": "See spec section 2.5. Not wired to VFS yet — standalone table only." + }, + { + "id": "US-003", + "title": "Implement HostNetworkAdapter interface (Part 5)", + "description": "As a developer, I need a host adapter interface so the kernel can delegate external I/O to the host without knowing the host implementation.", + "acceptanceCriteria": [ + "Add HostNetworkAdapter, HostSocket, HostListener, HostUdpSocket interfaces to packages/core/src/types.ts or packages/core/src/kernel/host-adapter.ts", + "HostNetworkAdapter has: tcpConnect, tcpListen, udpBind, udpSend, dnsLookup methods", + "HostSocket has: write, read (null=EOF), close, setOption, shutdown methods", + "HostListener has: accept, close, port (readonly) members", + "HostUdpSocket has: recv, close methods", + "Typecheck passes" + ], + "priority": 3, + "passes": true, + "notes": "See spec Part 5. Interfaces only — no implementations yet. Node.js driver will implement these later." + }, + { + "id": "US-004", + "title": "Implement KernelSocket and SocketTable core (K-1)", + "description": "As a developer, I need a virtual socket table in the kernel so sockets can be created, tracked, and closed with proper state transitions.", + "acceptanceCriteria": [ + "Add packages/core/src/kernel/socket-table.ts with KernelSocket struct and SocketTable class", + "KernelSocket has: id, domain (AF_INET/AF_INET6/AF_UNIX), type (SOCK_STREAM/SOCK_DGRAM), state, nonBlocking, localAddr, remoteAddr, options Map, pid, readBuffer, readWaiters (WaitQueue), backlog, acceptWaiters (WaitQueue)", + "SocketTable.create(domain, type, protocol, pid) returns socket ID, tracks in sockets Map", + "SocketTable.close(socketId) removes socket and frees resources", + "SocketTable.poll(socketId) returns { readable, writable, hangup }", + "Per-process isolation: process A cannot close process B's socket", + "EMFILE error when creating too many sockets (configurable limit)", + "Add packages/core/test/kernel/socket-table.test.ts with tests: create socket, state transitions, close frees resources, EMFILE limit, per-process isolation", + "Tests pass", + "Typecheck passes" + ], + "priority": 4, + "passes": true, + "notes": "See spec section 1.1. Does not include bind/listen/connect yet — just create/close/poll lifecycle." + }, + { + "id": "US-005", + "title": "Add bind, listen, accept to SocketTable (K-1, K-3)", + "description": "As a developer, I need server socket operations so the kernel can manage port listeners and accept connections.", + "acceptanceCriteria": [ + "SocketTable.bind(socketId, addr) sets localAddr, registers in listeners Map, transitions to 'bound'", + "SocketTable.listen(socketId, backlog) transitions to 'listening'", + "SocketTable.accept(socketId) returns pending connection or null (EAGAIN)", + "Bind to already-used port returns EADDRINUSE (unless SO_REUSEADDR is set)", + "Close listener frees the port for reuse", + "Wildcard address matching: listener on '0.0.0.0:8080' matches connect to '127.0.0.1:8080'", + "Add tests to socket-table.test.ts: bind/listen/accept lifecycle, EADDRINUSE, port reuse after close, wildcard matching", + "Tests pass", + "Typecheck passes" + ], + "priority": 5, + "passes": true, + "notes": "See spec sections 1.1 and 1.3. Builds on US-004." + }, + { + "id": "US-006", + "title": "Implement loopback routing for TCP (K-2)", + "description": "As a developer, I need in-kernel loopback routing so that connect() to a kernel-owned port creates paired sockets without real TCP.", + "acceptanceCriteria": [ + "SocketTable.connect(socketId, addr) checks if addr matches a kernel listener", + "If loopback: creates socketpair — client socket returned, server socket queued in listener backlog", + "Data written to client side is buffered in server's readBuffer (and vice versa) like pipes", + "accept() returns the server-side socket from the backlog", + "send(socketId, data, flags) writes to peer's readBuffer and wakes readWaiters", + "recv(socketId, maxBytes, flags) reads from own readBuffer, returns null if empty and non-blocking", + "Close client → server gets EOF (recv returns null). Close server → client gets EOF", + "Add packages/core/test/kernel/loopback.test.ts: connect to listener, exchange data bidirectionally, close propagates EOF, loopback never calls host adapter", + "Tests pass", + "Typecheck passes" + ], + "priority": 6, + "passes": true, + "notes": "See spec section 1.2. If addr does not match a kernel listener, connect() should throw/error for now (external routing added later)." + }, + { + "id": "US-007", + "title": "Add shutdown() and half-close support (K-1)", + "description": "As a developer, I need TCP half-close so that shutdown(SHUT_WR) sends EOF to the peer without closing the socket.", + "acceptanceCriteria": [ + "SocketTable.shutdown(socketId, 'read' | 'write' | 'both') updates socket state", + "shutdown('write') transitions to 'write-closed' — peer recv() gets EOF, but local recv() still works", + "shutdown('read') transitions to 'read-closed' — local recv() returns EOF immediately", + "shutdown('both') transitions to 'closed'", + "send() on write-closed socket returns EPIPE", + "Add packages/core/test/kernel/socket-shutdown.test.ts: half-close write, half-close read, full shutdown, EPIPE on write-closed", + "Tests pass", + "Typecheck passes" + ], + "priority": 7, + "passes": true, + "notes": "See spec section 1.1 (read-closed/write-closed states) and shutdown semantics." + }, + { + "id": "US-008", + "title": "Add socketpair() support (K-1, K-5)", + "description": "As a developer, I need socketpair() so that two connected sockets can be created atomically for IPC.", + "acceptanceCriteria": [ + "SocketTable.socketpair(domain, type, protocol, pid) returns [socketId1, socketId2]", + "Both sockets are pre-connected — data written to one appears in the other's readBuffer", + "Close one side delivers EOF to the other", + "Works for AF_UNIX + SOCK_STREAM", + "Add tests: create pair, exchange data, close one side delivers EOF", + "Tests pass", + "Typecheck passes" + ], + "priority": 8, + "passes": true, + "notes": "See spec section 1.1 (socketpair) and 1.5 (Unix domain sockets). Reuses loopback data path from US-006." + }, + { + "id": "US-009", + "title": "Add socket options support (K-6)", + "description": "As a developer, I need setsockopt/getsockopt so kernel sockets can be configured with SO_REUSEADDR, TCP_NODELAY, etc.", + "acceptanceCriteria": [ + "SocketTable.setsockopt(socketId, level, optname, optval) stores option in socket's options Map", + "SocketTable.getsockopt(socketId, level, optname) retrieves option value", + "SO_REUSEADDR is enforced by bind() (already in US-005 — verify integration)", + "SO_RCVBUF / SO_SNDBUF set kernel buffer size limits", + "Add to socket-table.test.ts: set SO_REUSEADDR allows port reuse, set SO_RCVBUF enforces buffer limit", + "Tests pass", + "Typecheck passes" + ], + "priority": 9, + "passes": true, + "notes": "See spec section 1.6. For loopback sockets most options are kernel-enforced. For external sockets, options are forwarded to host adapter (later)." + }, + { + "id": "US-010", + "title": "Add socket flags: MSG_PEEK, MSG_DONTWAIT, MSG_NOSIGNAL (K-1)", + "description": "As a developer, I need socket send/recv flags so code can peek at data or do non-blocking one-off operations.", + "acceptanceCriteria": [ + "recv() with MSG_PEEK reads data without consuming it from readBuffer", + "recv() with MSG_DONTWAIT returns EAGAIN if no data (even on blocking socket)", + "send() with MSG_NOSIGNAL returns EPIPE instead of raising SIGPIPE on broken connection", + "Add packages/core/test/kernel/socket-flags.test.ts: MSG_PEEK leaves data in buffer, MSG_DONTWAIT returns EAGAIN, MSG_NOSIGNAL suppresses SIGPIPE", + "Tests pass", + "Typecheck passes" + ], + "priority": 10, + "passes": true, + "notes": "See spec section 1.1 flags comments." + }, + { + "id": "US-011", + "title": "Implement network permissions in kernel (K-7)", + "description": "As a developer, I need kernel-level network permission checks so all socket operations go through deny-by-default policy.", + "acceptanceCriteria": [ + "Add Kernel.checkNetworkPermission(op, addr) method", + "connect() to external addresses checks permission — EACCES if denied", + "listen() checks permission — EACCES if denied", + "send() to external addresses checks permission — EACCES if denied", + "Loopback connections (to kernel-owned ports) are always allowed regardless of policy", + "Add packages/core/test/kernel/network-permissions.test.ts: deny-by-default blocks external, allow-list permits specific hosts, loopback always allowed", + "Tests pass", + "Typecheck passes" + ], + "priority": 11, + "passes": true, + "notes": "See spec section 1.7. Replaces scattered SSRF validation in driver.ts." + }, + { + "id": "US-012", + "title": "Add external connection routing via host adapter", + "description": "As a developer, I need connect() to external addresses to route through the host adapter so the kernel can reach the real network.", + "acceptanceCriteria": [ + "SocketTable.connect() for non-loopback addresses calls hostAdapter.tcpConnect(host, port)", + "Data relay: send() on kernel socket writes to HostSocket, HostSocket.read() feeds kernel readBuffer", + "close() on kernel socket calls HostSocket.close()", + "Permission check via kernel.checkNetworkPermission() before host adapter call", + "Add a mock HostNetworkAdapter for testing", + "Add tests: connect to external via mock adapter, data flows through, close propagates", + "Tests pass", + "Typecheck passes" + ], + "priority": 12, + "passes": true, + "notes": "Wires the host adapter interface (US-003) to the socket table. Uses mock adapter in tests." + }, + { + "id": "US-013", + "title": "Add external server socket routing via host adapter", + "description": "As a developer, I need listen() to optionally create real TCP listeners via the host adapter for external-facing servers.", + "acceptanceCriteria": [ + "When listen() is called with an external-facing flag, kernel calls hostAdapter.tcpListen(host, port)", + "HostListener.accept() feeds new kernel sockets into the listener's backlog", + "HostListener.port returns the actual bound port (for port 0 ephemeral ports)", + "close() on listener calls HostListener.close()", + "Add tests with mock adapter: external listen, accept incoming, exchange data, close", + "Tests pass", + "Typecheck passes" + ], + "priority": 13, + "passes": true, + "notes": "Needed for http.createServer() to accept real TCP connections from outside the sandbox." + }, + { + "id": "US-014", + "title": "Implement UDP sockets in kernel (K-4)", + "description": "As a developer, I need SOCK_DGRAM support so the kernel handles UDP send/recv with message boundary preservation.", + "acceptanceCriteria": [ + "SocketTable.create() with SOCK_DGRAM type creates a datagram socket", + "sendTo(socketId, data, flags, destAddr) sends to specific address", + "recvFrom(socketId, maxBytes, flags) returns { data, srcAddr }", + "Loopback UDP: sendTo a kernel-bound UDP port delivers to that socket's readBuffer", + "Message boundaries preserved: two 100-byte sends produce two 100-byte recvs", + "Send to unbound port is silently dropped (UDP semantics)", + "External UDP routes through hostAdapter.udpBind/udpSend", + "Add packages/core/test/kernel/udp-socket.test.ts: loopback dgram, message boundaries, silent drop, external routing via mock", + "Tests pass", + "Typecheck passes" + ], + "priority": 14, + "passes": true, + "notes": "See spec section 1.4. Max datagram 65535 bytes, max queue depth 128." + }, + { + "id": "US-015", + "title": "Implement Unix domain sockets in kernel (K-5)", + "description": "As a developer, I need AF_UNIX sockets so processes can communicate via VFS paths.", + "acceptanceCriteria": [ + "bind(socketId, { path: '/tmp/my.sock' }) creates a socket file in the VFS", + "connect(socketId, { path: '/tmp/my.sock' }) connects to the bound socket via kernel", + "Always in-kernel routing (no host adapter)", + "Support both SOCK_STREAM and SOCK_DGRAM modes", + "stat() on socket path returns socket file type", + "Bind to existing path returns EADDRINUSE", + "Remove socket file → new connections fail with ECONNREFUSED", + "Add packages/core/test/kernel/unix-socket.test.ts: bind/connect/exchange data, socket file in VFS, EADDRINUSE, ECONNREFUSED after unlink", + "Tests pass", + "Typecheck passes" + ], + "priority": 15, + "passes": true, + "notes": "See spec section 1.5. Requires VFS integration for socket file entries." + }, + { + "id": "US-016", + "title": "Expose SocketTable on KernelImpl", + "description": "As a developer, I need the socket table accessible from KernelImpl so runtimes can call kernel.socketTable.*.", + "acceptanceCriteria": [ + "KernelImpl constructor creates a SocketTable instance", + "kernel.socketTable is publicly accessible", + "kernel.dispose() cleans up all sockets", + "Socket creation respects kernel process table (pid must exist)", + "Process exit cleans up all sockets owned by that process", + "Add integration test in existing kernel tests: create kernel, create socket, dispose kernel, verify cleanup", + "Tests pass", + "Typecheck passes" + ], + "priority": 16, + "passes": true, + "notes": "Wires socket table into the existing kernel. After this, runtimes can start using kernel sockets." + }, + { + "id": "US-017", + "title": "Implement kernel TimerTable (N-5, N-8)", + "description": "As a developer, I need a kernel timer table so timer ownership is tracked per-process with budget enforcement.", + "acceptanceCriteria": [ + "Add packages/core/src/kernel/timer-table.ts with TimerTable class", + "createTimer(pid, delayMs, repeat, callback) returns timer ID and tracks ownership", + "clearTimer(timerId) cancels and removes timer", + "enforceLimit(pid, maxTimers) throws when budget exceeded", + "clearAllForProcess(pid) removes all timers for a process on exit", + "Timer in process A cannot be cleared by process B", + "Add packages/core/test/kernel/timer-table.test.ts: create/clear, budget enforcement, process cleanup, cross-process isolation", + "Tests pass", + "Typecheck passes" + ], + "priority": 17, + "passes": true, + "notes": "See spec section 2.1. Host adapter provides actual setTimeout/setInterval scheduling." + }, + { + "id": "US-018", + "title": "Implement kernel handle table (N-7, N-9)", + "description": "As a developer, I need kernel-level active handle tracking so resource budgets are enforced per-process.", + "acceptanceCriteria": [ + "Extend ProcessEntry in kernel process table with activeHandles Map and handleLimit", + "registerHandle(pid, id, description) tracks a handle", + "unregisterHandle(pid, id) removes it", + "Registering beyond handleLimit throws error", + "Process exit cleans up all handles", + "Add tests to existing process table tests: register/unregister, limit enforcement, cleanup on exit", + "Tests pass", + "Typecheck passes" + ], + "priority": 18, + "passes": true, + "notes": "See spec section 2.2. Simple Map-based tracking on existing ProcessEntry." + }, + { + "id": "US-019", + "title": "Implement kernel DNS cache (N-10)", + "description": "As a developer, I need a shared DNS cache so both runtimes avoid redundant lookups.", + "acceptanceCriteria": [ + "Add packages/core/src/kernel/dns-cache.ts with DnsCache class", + "lookup(hostname, rrtype) returns cached result or null", + "store(hostname, rrtype, result, ttl) caches with expiry", + "Expired entries return null on lookup", + "flush() clears all entries", + "Add packages/core/test/kernel/dns-cache.test.ts: cache hit, cache miss, TTL expiry, flush", + "Tests pass", + "Typecheck passes" + ], + "priority": 19, + "passes": true, + "notes": "See spec section 2.3. Runtimes call kernel DNS before host adapter." + }, + { + "id": "US-020", + "title": "Implement signal handler registry with sigaction semantics (K-8)", + "description": "As a developer, I need full POSIX signal handling so processes can register handlers with sa_mask and SA_RESTART.", + "acceptanceCriteria": [ + "Add SignalHandler and ProcessSignalState types in kernel", + "sigaction(pid, signal, handler, mask, flags) registers handler", + "Signal delivery: 'ignore' discards, 'default' applies kernel action, function invokes handler", + "SA_RESTART: interrupted blocking syscall restarts after handler returns", + "sigprocmask(pid, how, set): SIG_BLOCK/SIG_UNBLOCK/SIG_SETMASK modify blocked signals", + "Signals delivered while blocked are queued in pendingSignals", + "Standard signals (1-31) coalesce: max 1 pending per signal number", + "Add packages/core/test/kernel/signal-handlers.test.ts: register handler, SA_RESTART, sigprocmask block/unblock, coalescing", + "Tests pass", + "Typecheck passes" + ], + "priority": 20, + "passes": true, + "notes": "See spec section 2.6. Builds on existing kernel signal delivery." + }, + { + "id": "US-021", + "title": "Implement Node.js HostNetworkAdapter", + "description": "As a developer, I need a concrete HostNetworkAdapter implementation using node:net and node:dgram so the kernel can delegate external I/O.", + "acceptanceCriteria": [ + "Add HostNetworkAdapter implementation in the Node.js driver package (packages/nodejs/ or packages/secure-exec/)", + "tcpConnect(host, port) creates real TCP connection via node:net and returns HostSocket", + "tcpListen(host, port) creates real TCP server and returns HostListener", + "udpBind(host, port) creates real UDP socket via node:dgram and returns HostUdpSocket", + "dnsLookup(hostname, rrtype) uses node:dns", + "HostSocket.write/read/close/setOption/shutdown delegate to real net.Socket", + "HostListener.accept/close/port delegate to real net.Server", + "Typecheck passes" + ], + "priority": 21, + "passes": true, + "notes": "Concrete implementation of interfaces from US-003. Testing will be via integration tests with real sockets." + }, + { + "id": "US-022", + "title": "Migrate Node.js FD table to kernel (N-1)", + "description": "As a developer, I need the Node.js bridge to use the kernel FD table so file descriptors are shared across runtimes.", + "acceptanceCriteria": [ + "Remove fdTable Map and nextFd counter from bridge/fs.ts", + "All fdTable.get(fd)/fdTable.set(fd) calls replaced with kernel.fdTable.open()/read()/close() etc.", + "Kernel ProcessFDTable is used for FD allocation", + "Existing fs tests still pass", + "Typecheck passes" + ], + "priority": 22, + "passes": true, + "notes": "See spec section 3.1. Wire bridge to existing kernel ProcessFDTable." + }, + { + "id": "US-023", + "title": "Migrate Node.js net.connect to kernel sockets (N-4)", + "description": "As a developer, I need net.connect() to route through kernel.socketTable.connect() so connections share the kernel socket lifecycle.", + "acceptanceCriteria": [ + "Remove activeNetSockets Map from bridge/network.ts", + "Remove netSockets Map from bridge-handlers.ts (if it exists)", + "net.connect() calls kernel.socketTable.create() then kernel.socketTable.connect()", + "Data flows through kernel socket send/recv", + "Socket close calls kernel.socketTable.close()", + "Existing net tests still pass", + "Typecheck passes" + ], + "priority": 23, + "passes": true, + "notes": "See spec section 3.3. Depends on socket table being wired to kernel (US-016) and host adapter (US-021)." + }, + { + "id": "US-024", + "title": "Migrate Node.js http.createServer to kernel sockets (N-2, N-3)", + "description": "As a developer, I need http.createServer() to use kernel.socketTable.listen() so loopback HTTP works without real TCP.", + "acceptanceCriteria": [ + "http.createServer().listen(port) calls kernel.socketTable.create() → bind() → listen()", + "For loopback: incoming connections from kernel connect() are kernel sockets", + "For external: kernel calls hostAdapter.tcpListen() for real TCP", + "Remove servers Map, ownedServerPorts Set from driver.ts", + "Remove serverRequestListeners Map from bridge/network.ts", + "HTTP protocol parsing stays in the bridge layer (not kernel)", + "Existing HTTP tests still pass", + "Typecheck passes" + ], + "priority": 24, + "passes": true, + "notes": "See spec section 3.2. Highest ROI — unlocks 492 Node.js conformance tests (FIX-01)." + }, + { + "id": "US-025", + "title": "Migrate Node.js SSRF validation to kernel (N-11)", + "description": "As a developer, I need SSRF validation in the kernel so it applies to all runtimes uniformly.", + "acceptanceCriteria": [ + "Remove SSRF validation logic from driver.ts NetworkAdapter", + "Remove ownedServerPorts whitelist from driver.ts", + "kernel.checkNetworkPermission() handles all SSRF checks", + "Loopback to kernel-owned ports is always allowed", + "External connections checked against kernel permission policy", + "Existing SSRF/permission tests still pass", + "Typecheck passes" + ], + "priority": 25, + "passes": true, + "notes": "See spec section 3.5. Depends on kernel network permissions (US-011)." + }, + { + "id": "US-026", + "title": "Migrate Node.js child process registry to kernel (N-6)", + "description": "As a developer, I need child process tracking in the kernel process table so all runtimes share process state.", + "acceptanceCriteria": [ + "Remove activeChildren Map from bridge/child-process.ts", + "Bridge calls kernel.processTable.register() on spawn", + "Bridge queries kernel.processTable.get() for child state/events", + "waitpid/kill route through kernel process table", + "Existing child process tests still pass", + "Typecheck passes" + ], + "priority": 26, + "passes": true, + "notes": "See spec section 3.4." + }, + { + "id": "US-027", + "title": "Route WasmVM socket create/connect through kernel", + "description": "As a developer, I need existing WasmVM TCP to route through the kernel socket table instead of the driver's private _sockets Map.", + "acceptanceCriteria": [ + "WasmVM driver.ts: remove _sockets Map and _nextSocketId counter", + "netSocket handler calls kernel.socketTable.create() instead of allocating local ID", + "netConnect handler calls kernel.socketTable.connect()", + "netSend handler calls kernel.socketTable.send()", + "netRecv handler calls kernel.socketTable.recv()", + "netClose handler calls kernel.socketTable.close()", + "kernel-worker.ts: localToKernelFd maps local WASM FDs to kernel socket FDs", + "Existing WasmVM network tests still pass", + "Typecheck passes" + ], + "priority": 27, + "passes": true, + "notes": "See spec section 4.2. Migrates existing working TCP to kernel routing." + }, + { + "id": "US-028", + "title": "Add bind/listen/accept WASI extensions for WasmVM server sockets", + "description": "As a developer, I need WASI extensions for server sockets so WasmVM programs can accept TCP connections.", + "acceptanceCriteria": [ + "Add net_bind, net_listen, net_accept to host_net module in native/wasmvm/crates/wasi-ext/src/lib.rs", + "Add safe Rust wrappers following existing pattern (pub fn bind, listen, accept)", + "kernel-worker.ts: add net_bind, net_listen, net_accept import handlers that call kernel.socketTable", + "driver.ts: add kernelSocketBind, kernelSocketListen, kernelSocketAccept RPC handlers", + "Typecheck passes" + ], + "priority": 28, + "passes": true, + "notes": "See spec sections 4.3 and 4.5. Rust WASI extensions + JS kernel worker handlers." + }, + { + "id": "US-029", + "title": "Add C sysroot patches for bind/listen/accept", + "description": "As a developer, I need C libc implementations of bind(), listen(), accept() that call the WASI host imports.", + "acceptanceCriteria": [ + "Extend 0008-sockets.patch or create new patch with bind(), listen(), accept() in host_socket.c", + "bind() serializes sockaddr and calls __host_net_bind", + "listen() calls __host_net_listen", + "accept() calls __host_net_accept, maps returned FD, deserializes remote address", + "Patch applies cleanly on wasi-libc", + "Typecheck passes" + ], + "priority": 29, + "passes": true, + "notes": "See spec section 4.4 (server socket C code). Builds on existing 0008-sockets.patch pattern." + }, + { + "id": "US-030", + "title": "Add sendto/recvfrom WASI extensions for WasmVM UDP", + "description": "As a developer, I need WASI extensions for UDP so WasmVM programs can send/receive datagrams.", + "acceptanceCriteria": [ + "Add net_sendto, net_recvfrom to host_net module in lib.rs", + "Add safe Rust wrappers", + "kernel-worker.ts: add net_sendto, net_recvfrom import handlers routing through kernel.socketTable", + "driver.ts: add kernelSocketSendTo, kernelSocketRecvFrom RPC handlers", + "Typecheck passes" + ], + "priority": 30, + "passes": true, + "notes": "See spec sections 4.3 and 4.5. UDP extensions for WasmVM." + }, + { + "id": "US-031", + "title": "Add C sysroot patches for sendto/recvfrom and AF_UNIX", + "description": "As a developer, I need C libc sendto(), recvfrom() implementations and AF_UNIX support in sockaddr serialization.", + "acceptanceCriteria": [ + "Add sendto() to host_socket.c patch — serializes dest addr, calls __host_net_sendto", + "Add recvfrom() to host_socket.c patch — calls __host_net_recvfrom, deserializes src addr", + "Add AF_UNIX support in sockaddr_to_string() / string_to_sockaddr() — handles struct sockaddr_un", + "Patch applies cleanly", + "Typecheck passes" + ], + "priority": 31, + "passes": true, + "notes": "See spec section 4.4 (UDP and AF_UNIX C code)." + }, + { + "id": "US-032", + "title": "Add WasmVM server socket C test program and test", + "description": "As a developer, I need a C test program that exercises bind→listen→accept→recv→send→close through the WasmVM.", + "acceptanceCriteria": [ + "Add native/wasmvm/c/programs/tcp_server.c that: socket() → bind(port) → listen() → accept() → recv() → send('pong') → close()", + "Add tcp_server to PATCHED_PROGRAMS in Makefile", + "Add packages/wasmvm/test/net-server.test.ts that: spawns tcp_server as WASM, connects from kernel as client, verifies data exchange", + "Tests pass", + "Typecheck passes" + ], + "priority": 32, + "passes": true, + "notes": "See spec section 4.9. Integration test for WasmVM server sockets." + }, + { + "id": "US-033", + "title": "Add WasmVM UDP C test program and test", + "description": "As a developer, I need a C test program that exercises UDP send/recv through the WasmVM.", + "acceptanceCriteria": [ + "Add native/wasmvm/c/programs/udp_echo.c that: socket(SOCK_DGRAM) → bind() → recvfrom() → sendto() (echo server)", + "Add udp_echo to PATCHED_PROGRAMS in Makefile", + "Add packages/wasmvm/test/net-udp.test.ts that: spawns udp_echo as WASM, sends datagram, verifies echo response, verifies message boundaries", + "Tests pass", + "Typecheck passes" + ], + "priority": 33, + "passes": true, + "notes": "See spec section 4.9." + }, + { + "id": "US-034", + "title": "Add WasmVM Unix domain socket C test program and test", + "description": "As a developer, I need a C test program that exercises AF_UNIX sockets through the WasmVM.", + "acceptanceCriteria": [ + "Add native/wasmvm/c/programs/unix_socket.c that: socket(AF_UNIX) → bind('/tmp/test.sock') → listen() → accept() → recv/send", + "Add unix_socket to PATCHED_PROGRAMS in Makefile", + "Add packages/wasmvm/test/net-unix.test.ts that: spawns unix_socket WASM, connects from kernel, verifies data exchange", + "Tests pass", + "Typecheck passes" + ], + "priority": 34, + "passes": true, + "notes": "See spec section 4.9." + }, + { + "id": "US-035", + "title": "Add WasmVM signal handler WASI extension and C test", + "description": "As a developer, I need sigaction() support in WasmVM so WASM programs can register signal handlers.", + "acceptanceCriteria": [ + "Add net_sigaction WASI extension to lib.rs (registers handler function pointer + mask + flags)", + "kernel-worker.ts: store handler pointer in kernel process table on sigaction call", + "Signal delivery at syscall boundary: check pendingSignals bitmask, invoke WASM trampoline", + "Add __wasi_signal_trampoline export in C sysroot patch", + "Add native/wasmvm/c/programs/signal_handler.c: sigaction(SIGINT, handler) → busy loop → verify handler called", + "Add packages/wasmvm/test/signal-handler.test.ts: spawn signal_handler, deliver SIGINT via kernel, verify handler fires", + "Tests pass", + "Typecheck passes" + ], + "priority": 35, + "passes": true, + "notes": "See spec sections 4.8 and 4.9. Cooperative delivery at syscall boundaries." + }, + { + "id": "US-036", + "title": "Add cross-runtime network integration test", + "description": "As a developer, I need to verify that WasmVM and Node.js can communicate via kernel sockets.", + "acceptanceCriteria": [ + "Add packages/secure-exec/tests/kernel/cross-runtime-network.test.ts (or packages/core/test/kernel/)", + "Test: WasmVM tcp_server on port 9090, Node.js net.connect(9090) — verify data exchange", + "Test: Node.js http.createServer on port 8080, WasmVM curl-like client connects — verify response", + "Verify loopback: neither connection touches the host network stack", + "Tests pass", + "Typecheck passes" + ], + "priority": 36, + "passes": true, + "notes": "See spec Part 6 cross-runtime integration test. The signature test that kernel consolidation works." + }, + { + "id": "US-037", + "title": "Run Node.js conformance suite and update expectations for HTTP server tests", + "description": "As a developer, I need to re-run the 492 FIX-01 HTTP server tests and reclassify ones that now pass.", + "acceptanceCriteria": [ + "Run packages/secure-exec/tests/node-conformance/runner.test.ts for FIX-01 tests", + "Remove expectations.json entries for tests that now genuinely pass", + "Update remaining entries with specific failure reasons (not vague 'fails in sandbox')", + "Update docs-internal/nodejs-compat-roadmap.md pass counts", + "Tests pass", + "Typecheck passes" + ], + "priority": 37, + "passes": true, + "notes": "See spec section 7.3. This is the conformance payoff from the kernel consolidation." + }, + { + "id": "US-038", + "title": "Run Node.js conformance suite and update expectations for dgram/net/tls tests", + "description": "As a developer, I need to re-run dgram, net, tls, https, http2 tests and reclassify from unsupported-module to specific reasons.", + "acceptanceCriteria": [ + "Re-run all 76 dgram tests — remove expectations for tests that now pass", + "Re-run https/tls/net glob tests — reclassify from unsupported-module to specific failure reasons", + "Update docs-internal/nodejs-compat-roadmap.md with new pass counts", + "Tests pass", + "Typecheck passes" + ], + "priority": 38, + "passes": true, + "notes": "See spec section 7.3. Reclassify stale glob categorizations." + }, + { + "id": "US-039", + "title": "Proofing: adversarial review of kernel implementation completeness", + "description": "As a developer, I need a full audit verifying no networking code bypasses the kernel in either runtime.", + "acceptanceCriteria": [ + "Verify: packages/nodejs driver.ts has no servers Map, ownedServerPorts Set, netSockets Map, upgradeSockets Map", + "Verify: packages/nodejs bridge/network.ts has no serverRequestListeners Map, activeNetSockets Map", + "Verify: packages/wasmvm driver.ts has no _sockets Map, _nextSocketId counter", + "Verify: all http.createServer() routes through kernel.socketTable.listen()", + "Verify: all net.connect() routes through kernel.socketTable.connect()", + "Verify: SSRF validation is only in kernel, not in host adapter", + "Document any remaining gaps as new stories if found", + "Typecheck passes" + ], + "priority": 39, + "passes": true, + "notes": "See spec section 7.1. This is the final proofing pass." + }, + { + "id": "US-040", + "title": "Remove legacy networking Maps from Node.js driver and bridge", + "description": "As a developer, I need to complete the legacy code removal that US-023/024/025 deferred so all networking routes exclusively through the kernel.", + "acceptanceCriteria": [ + "Remove `servers` Map (line ~294) from packages/nodejs/src/driver.ts and all references to it (httpServerListen, httpServerClose handlers)", + "Remove `ownedServerPorts` Set (line ~296) from driver.ts and all references (fetch, httpRequest SSRF checks)", + "Remove `upgradeSockets` Map (line ~298) from driver.ts and all references (upgrade handlers)", + "Remove `activeNetSockets` Map (line ~2042) from packages/nodejs/src/bridge/network.ts and all references (dispatch routing, connect)", + "All HTTP server operations route through kernel.socketTable — verify with grep: no direct net.Server or http.Server creation in driver.ts outside of HostNetworkAdapter", + "All net.connect operations route through kernel.socketTable — verify with grep: no direct net.Socket creation in bridge/network.ts outside of HostNetworkAdapter", + "SSRF validation uses only kernel.checkNetworkPermission, not ownedServerPorts", + "Existing tests pass: run `pnpm vitest run packages/secure-exec/tests/test-suite/node.test.ts` and `pnpm vitest run packages/secure-exec/tests/runtime-driver/`", + "Tests pass", + "Typecheck passes" + ], + "priority": 40, + "passes": true, + "notes": "Addresses review finding H-1. US-024 added kernel socket path alongside legacy adapter path but never removed the legacy path. US-039 audit rationalized this as 'fallback' — it must be removed now. Read docs-internal/reviews/kernel-consolidation-prd-review.md for context." + }, + { + "id": "US-041", + "title": "Fix CI crossterm build and verify WASM test programs compile and run", + "description": "As a developer, I need CI to pass on this branch so WASM binaries are built and skip-guarded tests actually execute.", + "acceptanceCriteria": [ + "Identify and fix the crossterm crate compilation failure for wasm32-wasip1 (likely needs feature gate or dependency exclusion in native/wasmvm/crates/)", + "Run `cd native/wasmvm && make wasm` locally — all WASM command binaries build successfully in target/wasm32-wasip1/release/commands/", + "Run `cd native/wasmvm/c && make` — all PATCHED_PROGRAMS (including tcp_server, udp_echo, unix_socket, signal_handler) compile to c/build/", + "Run `pnpm vitest run packages/wasmvm/test/net-server.test.ts` — tests execute (not skipped) and pass", + "Run `pnpm vitest run packages/wasmvm/test/net-udp.test.ts` — tests execute (not skipped) and pass", + "Run `pnpm vitest run packages/wasmvm/test/net-unix.test.ts` — tests execute (not skipped) and pass", + "Run `pnpm vitest run packages/wasmvm/test/signal-handler.test.ts` — tests execute (not skipped) and pass", + "If any C sysroot patch (0008-sockets.patch, 0011-sigaction.patch) fails to apply, fix the patch hunks", + "Tests pass", + "Typecheck passes" + ], + "priority": 41, + "passes": true, + "notes": "Addresses review findings H-2, H-3, S-1. The C programs and patches were committed by US-029/031/032-035 but never compiled or tested because WASM binaries were never built. This story requires the Rust toolchain (rustup will install from rust-toolchain.toml) and wasm-opt/binaryen." + }, + { + "id": "US-042", + "title": "Wire kernel TimerTable and handle tracking to Node.js bridge", + "description": "As a developer, I need the Node.js bridge to use kernel timer and handle tracking so resource budgets are kernel-enforced.", + "acceptanceCriteria": [ + "KernelImpl constructor creates a TimerTable instance and exposes it as kernel.timerTable", + "In packages/nodejs/src/bridge/process.ts: replace bridge-local `_timerId` counter (line ~975) and `_timers`/`_intervals` Maps (lines ~976-977) with calls to kernel.timerTable.createTimer() and kernel.timerTable.clearTimer()", + "In packages/nodejs/src/bridge/active-handles.ts: replace bridge-local `_activeHandles` Map (line ~18) with calls to kernel processTable.registerHandle()/unregisterHandle()", + "Timer budget enforcement works: setting a timer limit on the kernel causes excess setTimeout calls to throw", + "Handle budget enforcement works: setting a handle limit causes excess handle registrations to throw", + "Process exit cleans up all timers and handles for that process via kernel", + "Existing timer tests pass: run `pnpm vitest run packages/secure-exec/tests/test-suite/node.test.ts`", + "Tests pass", + "Typecheck passes" + ], + "priority": 42, + "passes": true, + "notes": "Addresses review finding H-12. US-017 created TimerTable and US-018 added handle tracking to ProcessTable, but neither was wired to the Node.js bridge. The bridge still uses bridge-local Maps. This story connects the kernel infrastructure to the runtime." + }, + { + "id": "US-043", + "title": "Route WasmVM setsockopt through kernel instead of ENOSYS", + "description": "As a developer, I need WasmVM setsockopt to route through the kernel SocketTable so socket options actually work for WASM programs.", + "acceptanceCriteria": [ + "In packages/wasmvm/src/kernel-worker.ts: replace the ENOSYS stub at line ~984-987 in net_setsockopt with a call that routes through RPC to the kernel", + "In packages/wasmvm/src/driver.ts: add a kernelSocketSetopt RPC handler that calls kernel.socketTable.setsockopt(socketId, level, optname, optval)", + "Add getsockopt support similarly: kernel-worker net_getsockopt routes through RPC to kernel.socketTable.getsockopt()", + "Add test to packages/wasmvm/test/net-socket.test.ts: WASM program calls setsockopt(SO_REUSEADDR) and it succeeds (no ENOSYS)", + "Tests pass", + "Typecheck passes" + ], + "priority": 43, + "passes": true, + "notes": "Addresses review finding M-10. kernel-worker.ts line 984 currently hardcodes `return ENOSYS` for net_setsockopt. The kernel SocketTable already has a working setsockopt() implementation at socket-table.ts line ~464." + }, + { + "id": "US-044", + "title": "Implement SA_RESTART syscall restart logic", + "description": "As a developer, I need blocking syscalls to restart after a signal handler returns when SA_RESTART is set, matching POSIX behavior.", + "acceptanceCriteria": [ + "In packages/core/src/kernel/socket-table.ts: recv() and accept() check for pending signals during blocking waits", + "When a signal interrupts a blocking recv/accept and the handler has SA_RESTART: the syscall transparently restarts (re-enters the wait loop)", + "When a signal interrupts a blocking recv/accept and the handler does NOT have SA_RESTART: the syscall returns EINTR error", + "Add tests to packages/core/test/kernel/signal-handlers.test.ts: (1) SA_RESTART recv restarts after signal, (2) no SA_RESTART recv returns EINTR, (3) SA_RESTART accept restarts after signal", + "Tests pass", + "Typecheck passes" + ], + "priority": 44, + "passes": true, + "notes": "Addresses review finding H-4. US-020 defined SA_RESTART constant (0x10000000) and stores it on signal handlers, but no blocking syscall checks it. EINTR error code was added to KernelErrorCode 'for future SA_RESTART integration' — this story does that integration." + }, + { + "id": "US-045", + "title": "Implement O_NONBLOCK enforcement in socket operations", + "description": "As a developer, I need socket operations to respect the nonBlocking flag so non-blocking I/O works correctly.", + "acceptanceCriteria": [ + "In socket-table.ts: recv() on a socket with nonBlocking=true returns EAGAIN immediately when readBuffer is empty (instead of waiting)", + "In socket-table.ts: accept() on a socket with nonBlocking=true returns EAGAIN immediately when backlog is empty", + "In socket-table.ts: connect() on a socket with nonBlocking=true to an external address returns EINPROGRESS", + "Add setsockopt or fcntl-style method to toggle nonBlocking flag on an existing socket", + "Add tests to packages/core/test/kernel/socket-flags.test.ts: (1) nonBlocking recv returns EAGAIN, (2) nonBlocking accept returns EAGAIN, (3) toggle nonBlocking via setsockopt/fcntl", + "Tests pass", + "Typecheck passes" + ], + "priority": 45, + "passes": true, + "notes": "Addresses review finding M-7. The nonBlocking field exists on KernelSocket (line ~116) and is initialized to false (line ~189) but is never read by recv/accept/connect. Spec section 4.7 describes the expected O_NONBLOCK behavior." + }, + { + "id": "US-046", + "title": "Implement backlog limit and loopback port 0 ephemeral assignment", + "description": "As a developer, I need listen() to enforce backlog limits and bind() to support port 0 for loopback sockets.", + "acceptanceCriteria": [ + "In socket-table.ts listen(): use the backlogSize parameter (currently prefixed with _ and unused at line ~297) to cap the backlog array length", + "When backlog is full, new loopback connections get ECONNREFUSED", + "In socket-table.ts bind(): when port is 0, assign an ephemeral port from range 49152-65535 that is not already in the listeners map", + "After ephemeral port assignment, socket.localAddr.port reflects the assigned port (not 0)", + "Add tests to packages/core/test/kernel/socket-table.test.ts: (1) listen with backlog=2, connect 3 times, 3rd gets ECONNREFUSED, (2) bind port 0 assigns ephemeral port, (3) two bind port 0 get different ports", + "Tests pass", + "Typecheck passes" + ], + "priority": 46, + "passes": true, + "notes": "Addresses review findings M-9 (backlog overflow) and M-8 (port 0). Both are small changes in socket-table.ts combined into one story." + }, + { + "id": "US-047", + "title": "Add getLocalAddr/getRemoteAddr methods and WasmVM getsockname/getpeername", + "description": "As a developer, I need formal SocketTable accessor methods and WasmVM WASI extensions so C programs can call getsockname()/getpeername().", + "acceptanceCriteria": [ + "Add SocketTable.getLocalAddr(socketId): SockAddr method that returns socket.localAddr (throws EBADF if socket doesn't exist)", + "Add SocketTable.getRemoteAddr(socketId): SockAddr method that returns socket.remoteAddr (throws ENOTCONN if not connected)", + "Add net_getsockname and net_getpeername to host_net module in native/wasmvm/crates/wasi-ext/src/lib.rs", + "Add safe Rust wrappers following existing pattern", + "kernel-worker.ts: add net_getsockname and net_getpeername import handlers that call kernel.socketTable.getLocalAddr/getRemoteAddr via RPC", + "driver.ts: add kernelSocketGetLocalAddr and kernelSocketGetRemoteAddr RPC handlers", + "Add C implementations in sysroot patch: getsockname() calls __host_net_getsockname, getpeername() calls __host_net_getpeername", + "Add test: kernel socket after connect has correct localAddr and remoteAddr", + "Tests pass", + "Typecheck passes" + ], + "priority": 47, + "passes": true, + "notes": "Addresses review finding H-9. Data is already accessible via socketTable.get(id).localAddr but formal methods and WasmVM WASI extensions are missing. Follows existing WASI extension pattern: Rust extern → kernel-worker handler → driver RPC." + }, + { + "id": "US-048", + "title": "Wire InodeTable into VFS for deferred unlink and real nlink/ino", + "description": "As a developer, I need the InodeTable integrated into the VFS so stat() returns real inode numbers, hard links work, and unlinked-but-open files persist until last FD closes.", + "acceptanceCriteria": [ + "KernelImpl constructor creates an InodeTable instance and exposes it as kernel.inodeTable", + "In packages/core/src/shared/in-memory-fs.ts: each file/directory gets an inode via inodeTable.allocate() on creation", + "stat() returns the inode's ino number instead of a hash or 0", + "stat() returns the inode's nlink count instead of hardcoded 1", + "In in-memory-fs.ts removeFile(): when file has open FDs (openRefCount > 0), remove directory entry but keep data — file disappears from listings but stays readable via open FDs", + "When last FD to an unlinked file closes (decrementOpenRefs → shouldDelete=true), data is deleted", + "fdOpen() calls inodeTable.incrementOpenRefs(ino), fdClose() calls inodeTable.decrementOpenRefs(ino)", + "Add tests to packages/core/test/kernel/inode-table.test.ts: (1) stat returns real ino, (2) unlink with open FD keeps data, (3) close last FD deletes data, (4) nlink increments on hard link", + "Tests pass", + "Typecheck passes" + ], + "priority": 48, + "passes": true, + "notes": "InodeTable was created by US-002 with full allocate/incrementLinks/decrementLinks/shouldDelete logic but was never wired into the kernel or VFS. in-memory-fs.ts removeFile() at line ~201 immediately deletes with no refcounting. stat() returns hardcoded nlink:1 at line ~152." + }, + { + "id": "US-049", + "title": "Add '.' and '..' entries to readdir", + "description": "As a developer, I need readdir to include '.' and '..' entries to match POSIX behavior.", + "acceptanceCriteria": [ + "In packages/core/src/shared/in-memory-fs.ts listDirEntries(): prepend '.' (self) and '..' (parent) to the entry list before returning real entries", + "'.' entry has the directory's own inode number (if InodeTable is wired) and type DT_DIR", + "'..' entry has the parent directory's inode number and type DT_DIR; for root '/' the parent is itself", + "Existing readdir tests still pass (they may need updating if they assert exact entry counts)", + "Add test: readdir('/tmp') includes '.', '..', and any files in /tmp", + "Add test: readdir('/') has '..' pointing to itself", + "Tests pass", + "Typecheck passes" + ], + "priority": 49, + "passes": true, + "notes": "in-memory-fs.ts listDirEntries() at lines ~43-74 builds entries from the files/dirs Maps but never adds '.' or '..'. Many POSIX programs and test suites expect these." + }, + { + "id": "US-050", + "title": "Implement O_EXCL and O_TRUNC in kernel fdOpen", + "description": "As a developer, I need O_EXCL and O_TRUNC flags honored by fdOpen so file creation and truncation match POSIX semantics.", + "acceptanceCriteria": [ + "In packages/core/src/kernel/kernel.ts or fd-table.ts: when O_CREAT | O_EXCL is set and the file already exists, return EEXIST error", + "When O_TRUNC is set and the file exists, truncate file contents to zero bytes on open", + "O_EXCL without O_CREAT is ignored (POSIX behavior)", + "Add tests: (1) O_CREAT|O_EXCL on new file succeeds, (2) O_CREAT|O_EXCL on existing file returns EEXIST, (3) O_TRUNC truncates existing file, (4) O_TRUNC on new file with O_CREAT creates empty file", + "Tests pass", + "Typecheck passes" + ], + "priority": 50, + "passes": true, + "notes": "O_EXCL (0o200) and O_TRUNC (0o1000) are defined as constants in types.ts but fdOpen never checks them. The open() method in fd-table.ts line ~91 only handles O_CLOEXEC." + }, + { + "id": "US-051", + "title": "Implement blocking flock with WaitQueue", + "description": "As a developer, I need flock() to block when a conflicting lock is held instead of returning EAGAIN, using the kernel's WaitQueue.", + "acceptanceCriteria": [ + "In packages/core/src/kernel/file-lock.ts: add a WaitQueue (from kernel/wait.ts) to each lock entry", + "When flock() detects a conflict and nonBlocking is false, enqueue a WaitHandle and await it instead of returning EAGAIN", + "When a lock is released (unlock), wake one waiter from the WaitQueue so the next flock() caller acquires the lock", + "Blocking flock with a timeout: use WaitHandle timeout to implement POSIX-like behavior", + "Non-blocking flock (LOCK_NB) still returns EAGAIN immediately on conflict", + "Add tests: (1) process A holds exclusive lock, process B flock() blocks until A unlocks, (2) LOCK_NB returns EAGAIN, (3) multiple waiters are served FIFO", + "Tests pass", + "Typecheck passes" + ], + "priority": 51, + "passes": true, + "notes": "file-lock.ts line ~60 currently throws EAGAIN on conflict even when nonBlocking is false, with comment 'Blocking not implemented'. WaitQueue from US-001 is the intended mechanism." + }, + { + "id": "US-052", + "title": "Implement blocking pipe write with WaitQueue", + "description": "As a developer, I need pipe write() to block when the buffer is full instead of returning EAGAIN, using the kernel's WaitQueue.", + "acceptanceCriteria": [ + "In packages/core/src/kernel/pipe-manager.ts: add writeWaiters WaitQueue to pipe state", + "When write() detects buffer full (currentSize + data.length > MAX_PIPE_BUFFER_BYTES) and pipe is blocking, enqueue a WaitHandle and await it instead of returning EAGAIN", + "When read() consumes data from the buffer, wake one write waiter so the blocked write can proceed", + "Non-blocking pipes (O_NONBLOCK) still return EAGAIN immediately when buffer is full", + "Partial writes: if only N bytes fit, write N bytes, wake reader, then block for the remainder", + "Add tests: (1) write to full pipe blocks until reader drains, (2) non-blocking pipe write returns EAGAIN, (3) partial write then block", + "Tests pass", + "Typecheck passes" + ], + "priority": 52, + "passes": true, + "notes": "pipe-manager.ts lines ~106-108 return EAGAIN when buffer is full regardless of blocking mode. WaitQueue from US-001 is the intended mechanism. Read waiters already exist (readWaiters) but write waiters do not." + }, + { + "id": "US-053", + "title": "Implement true poll timeout -1 infinite blocking", + "description": "As a developer, I need poll() with timeout -1 to block indefinitely until an FD becomes ready, not cap at 30 seconds.", + "acceptanceCriteria": [ + "In packages/wasmvm/src/driver.ts netPoll handler: when timeout < 0, loop with WaitQueue waits instead of capping at 30s", + "Each iteration checks all polled FDs for readiness; if none ready, re-enter wait", + "When any polled FD becomes ready (data arrives, connection accepted, pipe written), the wait is woken", + "poll() with timeout 0 still returns immediately (non-blocking poll)", + "poll() with timeout > 0 still uses the specified timeout in milliseconds", + "Add test to packages/wasmvm/test/: poll with timeout -1 on a pipe, write to pipe from another process, verify poll returns", + "Tests pass", + "Typecheck passes" + ], + "priority": 53, + "passes": true, + "notes": "driver.ts line ~1136 sets waitMs=30000 when timeout<0. This means long-running WASM programs using poll(-1) will spuriously wake every 30s. The fix should use WaitQueue wake notifications from socket/pipe data arrival." + }, + { + "id": "US-054", + "title": "Populate /proc filesystem with basic entries", + "description": "As a developer, I need /proc populated with standard entries so programs that read /proc/self/* work correctly.", + "acceptanceCriteria": [ + "In packages/core/src/kernel/kernel.ts: populate /proc during kernel init with a proc device layer", + "/proc/self is a symlink-like entry that resolves to /proc/", + "/proc/self/fd/ lists open file descriptors for the current process (from kernel ProcessFDTable)", + "/proc/self/exe is a symlink or readable entry returning the process binary path", + "/proc/self/cwd contains the current working directory path", + "/proc/self/environ contains environment variables (or empty if sandboxed)", + "Reading /proc/self/fd/ returns info about that FD", + "Add tests: (1) readdir /proc/self/fd returns open FD numbers, (2) readlink /proc/self/fd/0 returns stdin path, (3) readFile /proc/self/cwd returns cwd", + "Tests pass", + "Typecheck passes" + ], + "priority": 54, + "passes": true, + "notes": "kernel.ts line ~148 creates /proc as an empty directory. No proc entries are populated. Programs that check /proc/self/fd or /proc/self/cwd fail. This needs a virtual device layer that generates content dynamically from kernel state." + }, + { + "id": "US-055", + "title": "Implement SA_RESETHAND (one-shot signal handler)", + "description": "As a developer, I need SA_RESETHAND support so signal handlers can be automatically reset to SIG_DFL after first invocation.", + "acceptanceCriteria": [ + "Add SA_RESETHAND constant (0x80000000) to packages/core/src/kernel/types.ts alongside existing SA_RESTART", + "In process-table.ts signal delivery: when handler has SA_RESETHAND flag, reset handler to SIG_DFL after invoking it once", + "sigaction() accepts SA_RESETHAND flag and stores it on the handler", + "SA_RESETHAND + SA_RESTART can be combined (both flags honored)", + "Add tests to packages/core/test/kernel/signal-handlers.test.ts: (1) handler with SA_RESETHAND fires once then reverts to default, (2) second delivery of same signal uses default action, (3) SA_RESETHAND | SA_RESTART works", + "Tests pass", + "Typecheck passes" + ], + "priority": 55, + "passes": true, + "notes": "SA_RESETHAND is a POSIX sigaction flag for one-shot handlers. The spec lists it alongside SA_RESTART. US-020 implemented sigaction but only SA_RESTART flag — SA_RESETHAND was missed." + }, + { + "id": "US-056", + "title": "Finish Node.js ESM parity for exec(), import conditions, and dynamic import failures", + "description": "As a developer, I need SecureExec's Node runtime to execute ESM entrypoints with Node-like semantics so package exports, type=module, built-in ESM imports, and dynamic import all behave correctly inside the sandbox.", + "acceptanceCriteria": [ + "Verify and keep passing the ESM runtime-driver tests for: package exports/import entrypoints, deep ESM import chains, 1000-module graphs, package type module .js entrypoints, Node built-in ESM imports, and dynamic import success paths", + "exec(code, { filePath: '/entry.mjs' }) runs the entry as ESM instead of compiling it as CommonJS", + "ESM resolution uses import conditions for V8 module loading, while require() inside the same execution still uses require conditions", + "Built-in ESM imports like node:fs and node:path expose both default and named exports", + "Dynamic import success paths pass in sandbox for relative .mjs modules, including namespace caching on repeated imports", + "Dynamic import error paths pass in sandbox for missing module, syntax error, and evaluation error cases with non-zero exit codes and preserved error messages", + "Run pnpm exec vitest run tests/runtime-driver/node/index.test.ts with the ESM/dynamic-import-focused filter and record the first concrete failing case if any remain", + "Typecheck passes" + ], + "priority": 56, + "passes": true, + "notes": "Verified in this branch on 2026-03-24: the focused runtime-driver slice now passes for ESM entry execution, package exports, type=module .js entrypoints, built-in ESM imports, successful dynamic imports, and dynamic-import missing-module/syntax/evaluation error paths. The remaining gap was closed by propagating async entrypoint rejections through the native V8 exec path and resolving dynamic imports with import conditions." + }, + { + "id": "US-057", + "title": "Fix top-level await semantics for Node.js ESM execution", + "description": "As a developer, I need top-level await in sandboxed ESM to block execution until completion so modules with long async startup behave like Node.js.", + "acceptanceCriteria": [ + "Add focused runtime-driver coverage for top-level await in entry modules and transitive imported modules", + "An ESM entrypoint with top-level await does not return early before the awaited work completes", + "Dynamic import of a module that contains top-level await waits for that module's completion before resolving", + "Long-running awaited work respects cpuTimeLimitMs and surfaces timeout errors correctly", + "Document the final behavior in docs-internal/friction.md and remove or update the existing top-level-await friction note when fixed", + "Run the targeted top-level-await tests through the SecureExec sandbox, not host Node.js", + "Typecheck passes" + ], + "priority": 57, + "passes": true, + "notes": "This is the follow-up for the long-standing 'ESM + top-level await can return early' runtime gap. The user request said 'top-level weights'; treated here as top-level await." + } + ] +} diff --git a/scripts/ralph/archive/2026-03-25-kernel-consolidation/progress.txt b/scripts/ralph/archive/2026-03-25-kernel-consolidation/progress.txt new file mode 100644 index 00000000..f9451399 --- /dev/null +++ b/scripts/ralph/archive/2026-03-25-kernel-consolidation/progress.txt @@ -0,0 +1,1268 @@ +## Codebase Patterns +- Native V8 `StreamEvent` payloads are not always V8-serialized; `native/v8-runtime/src/stream.rs` must fall back to UTF-8 JSON/string decoding or timer/stream dispatch callbacks can stall silently +- Keep `MODULE_RESOLVE_STATE` alive until async ESM execution fully finalizes; native top-level await plus dynamic `import()` still needs the bridge context and module cache after `execute_module()` first returns +- `packages/v8/src/runtime.ts` prefers `native/v8-runtime/target/release/secure-exec-v8` over debug builds, so rebuild the release binary before validating native V8 runtime changes through package tests +- After editing `packages/core/isolate-runtime/src/inject/*`, regenerate `packages/core/src/generated/isolate-runtime.ts` via `node packages/nodejs/scripts/build-isolate-runtime.mjs` before running Node runtime tests +- Bridge handler callbacks that need optional dispatch arguments should accept them explicitly; do not inspect extra bridge-call args through `arguments` inside arrow functions +- In `ProcessTable` signal delivery, apply one-shot disposition resets before `deliverPendingSignals()` so a same-signal delivery queued during the handler observes `SIG_DFL` instead of reusing the old callback +- Keep procfs state canonical in `packages/core/src/kernel/proc-layer.ts` as `/proc/` entries, and resolve `/proc/self` only in per-process runtime/VFS adapters where the current PID is known +- Cross-package tests that import workspace packages like `@secure-exec/core` execute the built `dist` output; rebuild the changed package with `pnpm turbo run build --filter=` before Vitest runs or you'll exercise stale JS +- `FileLockManager.flock()` is async; keep blocking advisory locks bounded with a timed `WaitQueue` retry loop and wake the next waiter from every last-reference unlock path +- For bounded blocking producers like `PipeManager.write()`, commit any bytes that fit before enqueueing a `WaitQueue`, and wake blocked writers from both drain paths and close paths so waits cannot hang +- `KernelInterface.fdOpen()` is synchronous, so open-time file semantics must go through sync-capable VFS hooks threaded through device/permission wrappers instead of async read/write fallbacks +- When `InMemoryFileSystem` exposes POSIX-only `.` / `..` directory entries, keep Node semantics by filtering them in `packages/nodejs/src/bridge-handlers.ts` before they reach `fs.readdir()` +- Kernel-owned `InMemoryFileSystem` instances must be rebound to `kernel.inodeTable` via `setInodeTable(...)` before device/permission wrapping; deferred-unlink FD I/O should use raw inode helpers (`readFileByInode`, `writeFileByInode`, `statByInode`) instead of pathname lookups +- `PtyManager` raw-mode bulk input still applies `icrnl`; translate the whole chunk before `deliverInput()` so oversized writes fail atomically with `EAGAIN` instead of partially buffering data +- Deferred unlink in `InMemoryFileSystem` must keep only live path → inode entries; open FDs survive unlink via `FileDescription.inode` and inode-backed reads, not by leaving removed pathnames accessible +- Any open-FD file I/O path in `KernelImpl` must stay description-based (`readDescriptionFile` / `writeDescriptionFile` / `preadDescription`) rather than path-based VFS calls, or deferred-unlink behavior regresses for `pread`/`pwrite`-style operations +- `SocketTable.connect()` must accept sockets already in `bound` state so WasmVM/libc callers can bind first, then use `getsockname()`/`getpeername()` with stable local addresses +- When `SocketTable.bind()` assigns a kernel ephemeral port for `port: 0`, keep a `requestedEphemeralPort` marker on the socket so external `listen(..., { external: true })` can still delegate `port: 0` to the host adapter before rewriting `localAddr` to the real host-assigned port +- Signal-aware blocking socket waits should use `ProcessSignalState.signalWaiters` plus `deliverySeq/lastDeliveredFlags`; wire `SocketTable` with `getSignalState` from the shared `ProcessTable` instead of open-coding runtime-specific signal polling +- Non-blocking external socket connect should reject with `EINPROGRESS` immediately but leave the kernel socket in a transient `connecting` state and finish `hostAdapter.tcpConnect()` in the background +- WasmVM `host_net` socket/domain constants coming from wasi-libc bottom-half do not match `packages/core` socket constants; normalize them at the WasmVM driver boundary before calling `kernel.socketTable` +- WasmVM `host_net` socket option payloads cross the worker RPC boundary as little-endian byte buffers; decode/encode them in `packages/wasmvm/src/driver.ts` and keep `packages/wasmvm/src/kernel-worker.ts` as a thin memory marshal layer +- In `packages/wasmvm/src/kernel-worker.ts`, socket FDs must be allocated in the worker-local `FDTable` and mapped through `localToKernelFd` — returning raw kernel socket IDs collides with stdio FDs and breaks close/flush behavior +- Cooperative WasmVM signal delivery during `poll_oneoff` sleep needs a periodic hook back through RPC; pure `Atomics.wait()` sleeps do not observe pending kernel signals +- When adding bridge globals that are called directly from the bridge IIFE, update all three inventories together: `packages/*/src/bridge-contract.ts`, `packages/core/src/shared/global-exposure.ts`, and `native/v8-runtime/src/session.rs` (`SYNC_BRIDGE_FNS` / `ASYNC_BRIDGE_FNS`) +- In `native/v8-runtime`, sync bridge calls must only consume `BridgeResponse` frames for their own `call_id`; defer mismatched responses back to the session event loop or sync calls will steal async promise results +- Host-side loopback access for sandbox HTTP servers is gated through `createDefaultNetworkAdapter().__setLoopbackPortChecker(...)`; keep the checker aligned with the active kernel-backed HTTP server set rather than reviving driver-level owned-port maps +- Standalone `NodeExecutionDriver` already provisions an internal `SocketTable` with `createNodeHostNetworkAdapter()`; do not reintroduce `NetworkAdapter.httpServerListen/httpServerClose` for loopback server tests — use sandbox `http.createServer()` plus `initialExemptPorts` or the loopback checker hook when a host-side request must reach the sandbox listener +- Node's default network adapter exposes an internal `__setLoopbackPortChecker` hook; NodeExecutionDriver must wire it before `wrapNetworkAdapter()` so host-side fetch/httpRequest can reach kernel-owned loopback listeners without reviving `ownedServerPorts` +- For new Node bridge operations that need kernel-backed host state but not a new native bridge function, route them through `_loadPolyfill` `__bd:` dispatch handlers; reserve new runtime globals for host-to-isolate event dispatch like `_timerDispatch` +- Kernel implementation lives in packages/core/src/kernel/ — KernelImpl is the main class +- UDP and TCP use separate binding maps in SocketTable (listeners for TCP, udpBindings for UDP) — same port can be used by both protocols +- Kernel tests go in packages/core/test/kernel/ +- WasmVM WASI extensions are declared in native/wasmvm/crates/wasi-ext/src/lib.rs +- C sysroot patches for WasmVM are in native/wasmvm/patches/wasi-libc/ +- WasmVM kernel worker is packages/wasmvm/src/kernel-worker.ts, driver is packages/wasmvm/src/driver.ts +- Node.js bridge is in packages/nodejs/src/bridge/, driver in packages/nodejs/src/driver.ts +- Bridge handlers not in the Rust V8 SYNC_BRIDGE_FNS array are dispatched through _loadPolyfill via BRIDGE_DISPATCH_SHIM in execution-driver.ts +- To add new bridge globals: (1) add key to HOST_BRIDGE_GLOBAL_KEYS in bridge-contract.ts, (2) add handler to dispatch handlers in execution-driver.ts, (3) use _globalName.applySyncPromise(undefined, args) in bridge code +- FD table is managed on the host side via kernel ProcessFDTable (FDTableManager from @secure-exec/core) — bridge/fs.ts delegates FD ops through bridge dispatch +- After modifying bridge/fs.ts, run `pnpm turbo run build --filter=@secure-exec/nodejs` to rebuild the bridge IIFE before running tests +- Node conformance tests are in packages/secure-exec/tests/node-conformance/ +- PATCHED_PROGRAMS in native/wasmvm/c/Makefile must include programs using host_process or host_net imports +- DnsCache is in packages/core/src/kernel/dns-cache.ts, exported from index.ts; uses lazy TTL expiry on lookup +- Use vitest for tests, pnpm for package management, turbo for builds +- The spec for this work is at docs-internal/specs/kernel-consolidation.md +- WaitHandle and WaitQueue are exported from packages/core/src/kernel/wait.ts and re-exported from index.ts +- Run tests from repo root with: pnpm vitest run +- Run typecheck from package dir with: pnpm tsc --noEmit +- InodeTable is in packages/core/src/kernel/inode-table.ts, exported from index.ts +- Host adapter interfaces (HostNetworkAdapter, HostSocket, etc.) are in packages/core/src/kernel/host-adapter.ts, type-exported from index.ts +- SocketTable is in packages/core/src/kernel/socket-table.ts, exported from index.ts along with KernelSocket type and socket constants (AF_INET, SOCK_STREAM, etc.) +- SocketTable has a private `listeners` Map (addr key → socket ID) for port reservation and routing; addrKey() is exported for address key formatting +- findListener() checks exact match first, then wildcard 0.0.0.0 and :: — used by connect() for loopback routing +- findBoundUdp() is public on SocketTable — same lookup pattern as findListener but for UDP bindings; used by tests to poll for UDP server readiness +- EADDRINUSE was added to KernelErrorCode in types.ts for socket address conflicts +- connect() creates a server-side socket paired via peerId and queues it in listener's backlog; send/recv use peerId to route data +- destroySocket() clears peerId on peer and wakes its readWaiters for EOF propagation +- consumeFromBuffer() handles partial chunk reads for recv() with maxBytes limit +- ECONNREFUSED and ENOTCONN were added to KernelErrorCode in types.ts +- Half-close uses peerWriteClosed flag on KernelSocket — shutdown('write') sets it on the peer, recv() checks it for EOF detection +- State composition: shutdown methods check current state (read-closed/write-closed) and transition to closed when both halves are shut +- Socket options use optKey(level, optname) → "level:optname" composite keys in the options Map; use setsockopt/getsockopt methods, not direct Map access +- Socket flags (MSG_PEEK, MSG_DONTWAIT, MSG_NOSIGNAL) are bitmask values matching Linux constants; use bitwise AND to check +- SocketTable accepts optional `networkCheck` in constructor for permission enforcement; loopback connect always bypasses checks +- KernelSocket has `external?: boolean` flag for tracking host-adapter-connected sockets (used by send() permission check) +- SocketTable accepts optional `hostAdapter` (HostNetworkAdapter) in constructor for external connection routing +- connect() is async (returns Promise) — all existing tests must use await; loopback path is synchronous inside the async function +- External sockets have `hostSocket?: HostSocket` on KernelSocket — send() writes to hostSocket, a background read pump feeds readBuffer +- destroySocket() calls hostSocket.close() for external sockets +- Mock host adapter pattern: MockHostSocket with pushData()/pushEof() for controlling read pump in tests +- MockHostListener with pushConnection() for simulating incoming external TCP connections in tests +- bind() is async (Promise) like connect() and listen() — all callers must await; sync throw tests use .rejects.toThrow() +- SocketTable accepts optional `vfs` (VirtualFileSystem) in constructor for Unix domain socket file management +- InMemoryFileSystem.chmod() accepts explicit type bits (e.g. S_IFSOCK | 0o755) — if mode & 0o170000 is non-zero, type bits are used directly +- listen() is async (Promise) — all callers must use await; expect(...).toThrow must become await expect(...).rejects.toThrow +- resource-exhaustion.test.ts and kernel-integration.test.ts stdin streaming tests have pre-existing flaky failures — not related to socket work +- Net socket bridge handlers support kernel routing via optional socketTable + pid deps; fallback to direct net.Socket when not provided +- KernelOptions accepts optional hostNetworkAdapter — wired to SocketTable for external connection routing +- KernelInterface exposes socketTable — available to runtime drivers via init(kernel) callback +- SocketTable.close() requires BOTH socketId AND pid for per-process ownership check +- NodeExecutionDriverOptions accepts optional socketTable + pid for kernel socket routing +- NetworkAdapter interface no longer has netSocket* methods — bridge handlers handle all TCP socket operations +- buildNetworkBridgeHandlers returns { handlers, dispose } (NetworkBridgeResult) — kernel HTTP servers need async cleanup +- http.Server + emit('connection', duplexStream) pattern feeds kernel socket data through Node HTTP parser without real TCP +- KernelSocketDuplex wraps kernel sockets as stream.Duplex — needs socket-like props (remoteAddress, setNoDelay, etc.) for http module +- SSRF loopback exemption uses socketTable.findListener() — kernel-aware, no manual port tracking needed +- assertNotPrivateHost/isPrivateIp/isLoopbackHost are in bridge-handlers.ts for kernel-aware SSRF validation +- processTable exposed on KernelInterface — wired through execution-driver to bridge handlers +- wrapAsDriverProcess() adapts SpawnedProcess to kernel DriverProcess (adds null callback stubs) +- childProcessInstances Map in bridge/child-process.ts is event routing only — kernel tracks process state +- WasmVM socket ops route through kernel.socketTable (create/connect/send/recv/close) — hostAdapter handles real TCP +- WasmVM TLS-upgraded sockets bypass kernel recv via _tlsSockets Map — TLS upgrade detaches kernel read pump +- WaitHandle timeout goes in WaitQueue.enqueue(timeoutMs), not WaitHandle.wait() — wait() takes no args +- Test mock kernel: createMockKernel() with SocketTable + TestHostSocket using real node:net — in packages/wasmvm/test/net-socket.test.ts +- Cooperative signal delivery: driver piggybacking via SIG_IDX_PENDING_SIGNAL in SAB, worker calls __wasi_signal_trampoline +- proc_sigaction RPC: action 0=SIG_DFL, 1=SIG_IGN, 2=user handler (C side holds function pointer) +- C sysroot signal handling: signal() + __wasi_signal_trampoline in 0011-sigaction.patch +- Kernel public API: Kernel interface has no kill(pid,signal) — use ManagedProcess.kill() from spawn(), or kernel.processTable internally + +## 2026-03-24 22:12 PDT - US-050 +- What was implemented +- Added synchronous open-time flag handling in `KernelImpl.fdOpen()` for `O_CREAT`, `O_EXCL`, and `O_TRUNC`, with wrapper passthroughs in the device and permission layers +- Added `prepareOpenSync()` support to the in-memory and Node-backed VFS adapters so `fdOpen()` can create empty files, reject `O_CREAT|O_EXCL` on existing paths, and truncate existing files before the descriptor is allocated +- Added kernel integration coverage for `O_CREAT|O_EXCL`, `O_TRUNC`, `O_TRUNC|O_CREAT`, and the `O_EXCL`-without-`O_CREAT` no-op case; updated the kernel contract and root agent instructions with the sync-open rule +- Files changed +- `.agent/contracts/kernel.md` +- `CLAUDE.md` +- `packages/core/src/kernel/device-layer.ts` +- `packages/core/src/kernel/kernel.ts` +- `packages/core/src/kernel/permissions.ts` +- `packages/core/src/shared/in-memory-fs.ts` +- `packages/core/test/kernel/helpers.ts` +- `packages/core/test/kernel/kernel-integration.test.ts` +- `packages/nodejs/src/driver.ts` +- `packages/nodejs/src/module-access.ts` +- `packages/nodejs/src/os-filesystem.ts` +- `scripts/ralph/prd.json` +- `scripts/ralph/progress.txt` +- **Learnings for future iterations:** +- Patterns discovered +- `fdOpen()` now depends on `prepareOpenSync()` passthroughs; if a filesystem gets wrapped and drops that hook, `O_CREAT`/`O_EXCL`/`O_TRUNC` will silently regress back to lazy-open behavior +- Gotchas encountered +- Once `O_CREAT` starts materializing files at open time, deferred umask handling can no longer key off a read miss in `vfsWrite()`; it has to key off the descriptor’s `creationMode` marker instead +- Useful context +- Validation for this story passed with `pnpm tsc --noEmit -p packages/core/tsconfig.json`, `pnpm tsc --noEmit -p packages/nodejs/tsconfig.json`, `pnpm vitest run packages/core/test/kernel/kernel-integration.test.ts -t "O_CREAT|O_EXCL|O_TRUNC|umask"`, `pnpm vitest run packages/core/test/kernel/inode-table.test.ts`, and `pnpm vitest run packages/core/test/kernel/unix-socket.test.ts` +--- + +## 2026-03-25 00:09 PDT - US-057 +- What was implemented +- Fixed native V8 ESM top-level-await finalization so entry modules stay pending until their evaluation promise settles, including timer-driven async startup and transitive async imports +- Added native dynamic `import()` handling for ESM via V8's host dynamic-import callback, reusing the existing module resolver/cache and mapping async evaluation back to the imported module namespace +- Fixed native stream-event payload decoding to accept raw UTF-8 JSON/string payloads so kernel timer callbacks reach `_timerDispatch`, then added focused sandbox runtime-driver coverage for entrypoint TLA, transitive imported-module TLA, dynamic-import TLA, and timeout behavior +- Files changed +- `.agent/contracts/node-runtime.md` +- `docs-internal/friction.md` +- `native/v8-runtime/src/execution.rs` +- `native/v8-runtime/src/isolate.rs` +- `native/v8-runtime/src/session.rs` +- `native/v8-runtime/src/stream.rs` +- `packages/secure-exec/tests/runtime-driver/node/index.test.ts` +- `scripts/ralph/prd.json` +- `scripts/ralph/progress.txt` +- **Learnings for future iterations:** +- Patterns discovered +- Native V8 async ESM completion is a two-part problem: keep the entry-module promise alive across the session event loop, and keep module-resolution state alive long enough for later native dynamic imports to reuse the same bridge context/cache +- Host-to-isolate timer events are emitted as raw JSON bytes, not V8-serialized values; the native stream dispatcher has to parse both formats or TLA/timer flows will hang waiting for `_timerDispatch` +- Gotchas encountered +- The `v8` crate version in this workspace expects `set_host_import_module_dynamically_callback` handlers with a `HandleScope` signature, not the newer `Context`-first callback shape shown in newer crate docs +- Useful context +- Focused green checks for this story were `pnpm tsc --noEmit -p packages/secure-exec/tsconfig.json`, `cargo test execution::tests::v8_consolidated_tests -- --nocapture`, `cargo build --release` in `native/v8-runtime`, and `pnpm exec vitest run packages/secure-exec/tests/runtime-driver/node/index.test.ts -t "dynamic import|top-level await"` +--- + +## 2026-03-23 - US-001 +- Implemented WaitHandle and WaitQueue primitives in packages/core/src/kernel/wait.ts +- WaitHandle: Promise-based wait/wake with optional timeout, timedOut flag, isSettled guard +- WaitQueue: FIFO queue with enqueue(), wakeOne(), wakeAll(), pending count, clear() +- Files changed: packages/core/src/kernel/wait.ts (new), packages/core/src/kernel/index.ts (exports), packages/core/test/kernel/wait-queue.test.ts (new, 13 tests) +- **Learnings for future iterations:** + - Kernel managers follow a consistent pattern: private state Maps, KernelError throws, public methods + - Existing waiter pattern in PipeManager/ProcessTable uses raw resolver arrays — WaitQueue provides the unified replacement + - git add must be run from repo root, not a subdirectory + - Typecheck for core package: `cd packages/core && pnpm tsc --noEmit` +--- + +## 2026-03-23 - US-002 +- Implemented InodeTable with refcounting and deferred unlink in packages/core/src/kernel/inode-table.ts +- Inode struct: ino, nlink, openRefCount, mode, uid, gid, size, timestamps +- InodeTable: allocate, get, incrementLinks/decrementLinks, incrementOpenRefs/decrementOpenRefs, shouldDelete, delete +- Deferred deletion: nlink=0 with open FDs keeps inode alive until last FD closes +- Files changed: packages/core/src/kernel/inode-table.ts (new), packages/core/src/kernel/index.ts (exports), packages/core/test/kernel/inode-table.test.ts (new, 17 tests) +- **Learnings for future iterations:** + - InodeTable and Inode are exported from index.ts (InodeTable as value, Inode as type) + - Inode starts with nlink=1 on allocate (matching POSIX: creating a file = one directory entry) + - ctime is updated on link/unlink operations per POSIX + - KernelError codes available: ENOENT for missing inode, EINVAL for underflow guards +--- + +## 2026-03-23 - US-003 +- Implemented HostNetworkAdapter, HostSocket, HostListener, HostUdpSocket, DnsResult interfaces in packages/core/src/kernel/host-adapter.ts +- Added type exports to packages/core/src/kernel/index.ts +- Files changed: packages/core/src/kernel/host-adapter.ts (new), packages/core/src/kernel/index.ts (exports) +- **Learnings for future iterations:** + - Host adapter interfaces are type-only exports (no runtime code) — they live in kernel/host-adapter.ts + - DnsResult is a separate interface (address + family: 4|6) used by dnsLookup + - HostSocket.read() returns null for EOF, matching the kernel recv() convention + - HostListener.port is readonly — needed for ephemeral port (port 0) allocation +--- + +## 2026-03-23 - US-004 +- Implemented KernelSocket struct and SocketTable class in packages/core/src/kernel/socket-table.ts +- KernelSocket: id, domain, type, protocol, state, nonBlocking, localAddr, remoteAddr, options, pid, readBuffer, readWaiters, backlog, acceptWaiters, peerId +- SocketTable: create, get, close, poll, closeAllForProcess, disposeAll +- Per-process isolation: close checks pid ownership +- EMFILE limit: configurable maxSockets (default 1024) +- Socket address types: InetAddr, UnixAddr, SockAddr with type guards +- Files changed: packages/core/src/kernel/socket-table.ts (new), packages/core/src/kernel/index.ts (exports), packages/core/test/kernel/socket-table.test.ts (new, 23 tests) +- **Learnings for future iterations:** + - SocketTable follows the same pattern as InodeTable: private Map, nextId counter, requireSocket helper + - Socket state is mutable on the KernelSocket interface — higher-level operations (bind/listen/connect) set it directly + - KernelErrorCode type in types.ts needs EADDRINUSE, ECONNREFUSED, ECONNRESET, ENOTCONN, ENOTSOCK for later stories + - WaitQueue from wait.ts is used for readWaiters and acceptWaiters — close wakes all pending waiters + - backlog stores socket IDs (not KernelSocket objects) for later accept() implementation +--- + +## 2026-03-23 - US-005 +- Implemented bind(), listen(), accept(), findListener() on SocketTable +- Added private `listeners` Map for port reservation and routing +- Added EADDRINUSE to KernelErrorCode +- destroySocket now cleans up listener registrations; disposeAll clears listeners +- Wildcard address matching: findListener checks exact, then 0.0.0.0, then :: for the port +- EADDRINUSE checks wildcard overlap (0.0.0.0:P conflicts with 127.0.0.1:P and vice versa) +- SO_REUSEADDR on the binding socket bypasses EADDRINUSE +- addrKey() exported as module-level helper for "host:port" or unix path keys +- Files changed: packages/core/src/kernel/types.ts (EADDRINUSE), packages/core/src/kernel/socket-table.ts (bind/listen/accept/findListener), packages/core/src/kernel/index.ts (addrKey export), packages/core/test/kernel/socket-table.test.ts (21 new tests, 44 total) +- **Learnings for future iterations:** + - bind() registers in listeners map immediately (not just on listen) — this is for port reservation + - findListener() only matches sockets in 'listening' state, not just 'bound' + - isAddrInUse scans all listeners for wildcard overlap — O(n) but listener count is small + - accept() returns socket IDs from backlog; connect() (US-006) will push to backlog + - Tests can simulate backlog by directly pushing to socket.backlog array +--- + +## 2026-03-23 - US-006 +- Implemented loopback TCP routing: connect(), send(), recv() on SocketTable +- connect() finds listener via findListener(), creates paired server-side socket via peerId, queues in backlog +- send() writes to peer's readBuffer, wakes readWaiters +- recv() consumes from readBuffer with maxBytes limit, returns null for EOF (peer gone) or no data +- destroySocket() propagates EOF by clearing peerId on peer and waking readWaiters +- Added ECONNREFUSED and ENOTCONN to KernelErrorCode +- Files changed: packages/core/src/kernel/types.ts (ECONNREFUSED, ENOTCONN), packages/core/src/kernel/socket-table.ts (connect/send/recv/consumeFromBuffer, updated destroySocket), packages/core/test/kernel/loopback.test.ts (new, 21 tests) +- **Learnings for future iterations:** + - send() copies data (new Uint8Array(data)) to prevent caller mutations affecting kernel buffers + - consumeFromBuffer() handles partial chunk reads — splits a chunk if it exceeds maxBytes and puts remainder back + - EOF detection in recv: peerId === undefined means peer closed; readBuffer empty + peerId undefined → return null + - connect() creates the server-side socket with listener.pid as owner — the process that calls accept() gets that socket + - Tests should run from repo root: `pnpm vitest run `, not from package dir +--- + +## 2026-03-23 - US-007 +- Implemented shutdown() with half-close support on SocketTable +- shutdown('write'): sets peer's peerWriteClosed flag, peer recv() returns EOF, local send() returns EPIPE +- shutdown('read'): discards readBuffer, local recv() returns EOF immediately, local send() still works +- shutdown('both'): combines both, transitions to 'closed' +- Sequential half-close: read-closed + shutdown('write') → closed, write-closed + shutdown('read') → closed +- Updated send() to check write-closed/closed states before ENOTCONN +- Updated recv() to return null immediately for read-closed/closed states and check peerWriteClosed for EOF +- Updated poll() to reflect half-close: write-closed → writable=false, read-closed → writable=true +- Added peerWriteClosed flag to KernelSocket for tracking peer write shutdown without destroying the socket +- Files changed: packages/core/src/kernel/socket-table.ts (shutdown, shutdownWrite, shutdownRead, updated send/recv/poll, peerWriteClosed), packages/core/test/kernel/socket-shutdown.test.ts (new, 17 tests) +- **Learnings for future iterations:** + - Half-close needs a separate flag (peerWriteClosed) because the peer socket still exists — peerId check alone won't detect write shutdown + - shutdown('write') + shutdown('read') must compose: each checks current state and transitions to 'closed' if the other half is already closed + - send() must check write-closed/closed BEFORE checking connected — order matters for correct error code (EPIPE vs ENOTCONN) + - recv() on read-closed returns null without checking buffer — shutdown('read') discards unread data +--- + +## 2026-03-23 - US-008 +- Implemented socketpair() on SocketTable — creates two pre-connected sockets linked via peerId +- Both sockets start in 'connected' state, reusing existing send/recv/close/shutdown data paths +- Files changed: packages/core/src/kernel/socket-table.ts (socketpair method), packages/core/test/kernel/socketpair.test.ts (new, 13 tests) +- **Learnings for future iterations:** + - socketpair() is much simpler than connect() — no listener lookup, just create two sockets and cross-link peerId + - All existing send/recv/close/shutdown logic works unchanged for socketpair — the peerId linking is the only mechanism needed + - EMFILE limit applies to socketpair too — creating 2 sockets at once can exceed the limit after the first succeeds +--- + +## 2026-03-23 - US-009 +- Implemented setsockopt() and getsockopt() methods on SocketTable +- Added socket option constants: SOL_SOCKET, IPPROTO_TCP, SO_REUSEADDR, SO_KEEPALIVE, SO_RCVBUF, SO_SNDBUF, TCP_NODELAY + +- Added optKey() helper for canonical "level:optname" option keys +- Enforced SO_RCVBUF: send() throws EAGAIN when peer's readBuffer exceeds the limit +- Updated isAddrInUse() to use the new optKey format for SO_REUSEADDR check +- Updated existing tests that set SO_REUSEADDR directly on the options Map to use setsockopt() +- Files changed: packages/core/src/kernel/socket-table.ts (setsockopt/getsockopt, optKey, SO_RCVBUF enforcement, constants), packages/core/src/kernel/index.ts (new exports), packages/core/test/kernel/socket-table.test.ts (10 new tests, 54 total) +- **Learnings for future iterations:** + - Socket options use composite "level:optname" keys in the options Map — use optKey() helper, not raw string keys + - SO_RCVBUF enforcement is in send() on the peer socket, not recv() on the local socket — the peer's receive buffer is what gets checked + - When changing internal option key format, search all test files for direct options Map usage and update them + - resource-exhaustion.test.ts has pre-existing flaky failures unrelated to socket work +--- + +## 2026-03-23 - US-010 +- Implemented MSG_PEEK, MSG_DONTWAIT, MSG_NOSIGNAL socket flags +- MSG_PEEK: peekFromBuffer() reads data without consuming — returns a copy so mutations don't affect the buffer +- MSG_DONTWAIT: throws EAGAIN when no data available (but still returns null for EOF) +- MSG_NOSIGNAL: suppresses SIGPIPE — throws EPIPE with MSG_NOSIGNAL marker in message +- Flags are bitmask-combined (MSG_PEEK | MSG_DONTWAIT works) +- Files changed: packages/core/src/kernel/socket-table.ts (MSG constants, peekFromBuffer, recv/send flag handling), packages/core/src/kernel/index.ts (exports), packages/core/test/kernel/socket-flags.test.ts (new, 13 tests) +- **Learnings for future iterations:** + - peekFromBuffer() must return a copy (new Uint8Array) not a subarray reference — otherwise callers can corrupt the kernel buffer + - MSG_DONTWAIT should only throw EAGAIN when no data AND no EOF condition — EOF still returns null + - Linux MSG_* flag values: MSG_PEEK=0x2, MSG_DONTWAIT=0x40, MSG_NOSIGNAL=0x4000 — match Linux constants for compatibility +--- +## 2026-03-24 22:20 PDT - US-051 +- Implemented blocking advisory `flock()` with per-path `WaitQueue`s and bounded timed waits in `FileLockManager` +- Converted kernel `flock` to async `Promise` semantics and updated the core kernel contract for blocking/FIFO lock behavior +- Added coverage for blocking unlock wakeup, `LOCK_NB` conflict handling, FIFO waiter ordering, and adjusted kernel integration to keep the mock process alive while awaiting lock operations +- Files changed: `.agent/contracts/kernel.md`, `packages/core/src/kernel/file-lock.ts`, `packages/core/src/kernel/kernel.ts`, `packages/core/src/kernel/types.ts`, `packages/core/test/kernel/file-lock.test.ts` +- **Learnings for future iterations:** + - Async kernel syscalls can expose existing test timing races; `MockRuntimeDriver` needs `neverExit: true` when a test awaits multiple operations against the same PID + - For indefinite kernel waits, use timed `WaitQueue.enqueue(timeoutMs)` retries instead of a single forever-pending Promise so WasmVM/bridge callers can re-check state safely + - File-lock waiter wakeups must happen on all last-reference release paths (`LOCK_UN`, `fdClose`, `dup2` replacement, process-exit cleanup) because the kernel funnels them through `releaseByDescription()` + - `KernelInterface.flock()` now returns a `Promise`; direct tests and future bridge callers must `await` it even when the lock is uncontended +--- + +## 2026-03-23 - US-011 +- Implemented network permission checks in SocketTable: checkNetworkPermission() public method, wired into connect(), listen(), and send() +--- + +- connect() to loopback (kernel listener) always bypasses permission checks; external addresses check against configured policy +- listen() checks permission when networkCheck is configured +- send() checks permission for sockets marked as external (external flag on KernelSocket) +- Added `external?: boolean` to KernelSocket interface for host-adapter-connected socket tracking +- Files changed: packages/core/src/kernel/socket-table.ts (networkCheck option, checkNetworkPermission, connect/listen/send permission checks, external flag), packages/core/test/kernel/network-permissions.test.ts (new, 17 tests) +- **Learnings for future iterations:** + - SocketTable accepts `networkCheck` in constructor options — when set, listen() and external connect() are permission-checked + - Loopback connect (findListener returns a match) always bypasses permission — this is by design per spec + - When no networkCheck is configured, existing behavior is preserved (no enforcement) — backwards compatible + - Tests that need loopback with restricted policy must allow "listen" op but deny "connect" — denyAll breaks listener setup + - The `external` flag on KernelSocket will be set by US-012 (host adapter routing) — for now it's only used in tests + - resource-exhaustion.test.ts has pre-existing flaky failures — not related to socket/permission work +--- + +## 2026-03-23 - US-012 +- Implemented external connection routing via host adapter in SocketTable +- connect() is now async (Promise) — loopback path remains synchronous, external path awaits hostAdapter.tcpConnect() +- External sockets store hostSocket on KernelSocket; send() writes to hostSocket, background read pump feeds readBuffer +- destroySocket() calls hostSocket.close() for external sockets; closeAllForProcess propagates +- Permission check runs before host adapter call; loopback still bypasses +- Added MockHostSocket and MockHostNetworkAdapter for testing external connections +- Updated all existing test files to use async/await for connect() calls +- Files changed: packages/core/src/kernel/socket-table.ts (hostAdapter option, async connect, hostSocket on KernelSocket, send relay, startReadPump, destroySocket cleanup), packages/core/test/kernel/external-connect.test.ts (new, 14 tests), packages/core/test/kernel/loopback.test.ts (async), packages/core/test/kernel/network-permissions.test.ts (async), packages/core/test/kernel/socket-flags.test.ts (async), packages/core/test/kernel/socket-shutdown.test.ts (async), packages/core/test/kernel/socket-table.test.ts (async) +- **Learnings for future iterations:** + - Making connect() async is a breaking API change — all callers across test files must add await, test callbacks must be async + - In async functions, ALL throws become rejected Promises — try/catch without await won't catch errors; use `await expect(...).rejects.toThrow()` pattern + - The read pump runs as a fire-and-forget async loop — use pushData()/pushEof() on MockHostSocket to control timing +- When testing chunk ordering with the read pump, recv() with exact maxBytes is more reliable than assuming chunks arrive separately +- send() for external sockets fire-and-forgets the hostSocket.write() — errors are caught asynchronously and mark the socket broken +--- + +## 2026-03-24 21:39 PDT - US-048 +- Wired `KernelImpl` to own a shared `InodeTable`, bind it into `InMemoryFileSystem`, and keep open-file access alive after unlink by storing inode identity on `FileDescription` +- Refactored `packages/core/src/shared/in-memory-fs.ts` to use live path-to-inode maps plus inode-backed file storage so `stat()` returns real `ino`/`nlink`, hard links share inode state, and unlink removes pathnames without discarding open file data +- Added integration coverage in `packages/core/test/kernel/inode-table.test.ts` for real inode stats, deferred unlink with open FDs, last-close deletion, and hard-link `nlink` parity +- Updated the kernel contract and repo instructions with the deferred-unlink inode rule +- Files changed: `.agent/contracts/kernel.md`, `CLAUDE.md`, `packages/core/src/kernel/kernel.ts`, `packages/core/src/kernel/types.ts`, `packages/core/src/shared/in-memory-fs.ts`, `packages/core/test/kernel/inode-table.test.ts` +- Quality checks: `pnpm tsc --noEmit` passed in `packages/core`; `pnpm vitest run test/kernel/inode-table.test.ts` passed; full `pnpm vitest run` in `packages/core` failed in pre-existing `test/kernel/resource-exhaustion.test.ts` (`PTY adversarial stress > single large write (1MB+) — immediate EAGAIN, no partial buffering`, assertion at line 270) +- **Learnings for future iterations:** + - Deferred unlink must never keep removed pathnames reachable — regular path lookups should fail immediately, and only inode-backed FD I/O should survive until the last close + - Rebinding an existing `InMemoryFileSystem` into `KernelImpl` needs inode-table migration for pre-populated filesystems, because many tests create and seed the VFS before constructing the kernel + - Any kernel path that can implicitly close an FD (`fdClose`, `dup2`, stdio override cleanup, process-exit table teardown) must release inode open refs when the last shared `FileDescription` reference drops +--- + +## 2026-03-24 21:43 PDT - US-048 +- Patched `KernelImpl.fdPwrite()` to use inode-backed description helpers so positional writes still work after the pathname has been unlinked +- Added a regression test proving `fdPwrite` + `fdPread` continue to work on an unlinked open file while the path stays absent from the VFS +- Files changed: `packages/core/src/kernel/kernel.ts`, `packages/core/test/kernel/inode-table.test.ts`, `scripts/ralph/progress.txt` +- Quality checks: `pnpm tsc --noEmit` passed in `packages/core`; `pnpm vitest run test/kernel/inode-table.test.ts` passed; full `pnpm vitest run` in `packages/core` still fails in pre-existing `test/kernel/resource-exhaustion.test.ts` (`PTY adversarial stress > single large write (1MB+) — immediate EAGAIN, no partial buffering`, assertion at line 270) +- **Learnings for future iterations:** + - Deferred-unlink support is only correct if every FD-based read and write path goes through the `FileDescription.inode` helpers; a single direct `vfs.readFile`/`vfs.writeFile` call reintroduces pathname dependence + - Focused inode tests can pass while the broader package suite remains blocked by the unrelated PTY stress regression, so keep the full-suite command/result in the log for handoff clarity +--- + +## 2026-03-24 21:22 PDT - US-047 +- What was implemented +- Added `SocketTable.getLocalAddr()` / `getRemoteAddr()` and allowed `connect()` from `bound` sockets so bound clients can use address accessors cleanly +- Wired WasmVM address accessors end to end: `wasi-ext` host imports/wrappers, worker `host_net` handlers, driver RPC handlers, and libc `getsockname()` / `getpeername()` patching +- Added kernel/WasmVM tests plus `syscall_coverage` parity coverage entries for the new libc socket address calls +- Files changed +- `packages/core/src/kernel/socket-table.ts` +- `packages/core/test/kernel/socket-table.test.ts` +- `packages/wasmvm/src/driver.ts` +- `packages/wasmvm/src/kernel-worker.ts` +- `packages/wasmvm/test/net-socket.test.ts` +- `packages/wasmvm/test/c-parity.test.ts` +- `native/wasmvm/crates/wasi-ext/src/lib.rs` +- `native/wasmvm/patches/wasi-libc/0008-sockets.patch` +- `native/wasmvm/c/programs/syscall_coverage.c` +- `prd.json` +- **Learnings for future iterations:** +- Bound-client connect is required for libc parity: `getsockname()` on a client socket is only meaningful if `connect()` preserves a prior `bind()` +- The WasmVM address-accessor path should reuse the existing serialized address format (`host:port` or unix path) so libc parsing can keep using the shared `string_to_sockaddr()` helper +- When adding a new `host_net` import, update all four layers together: `wasi-ext` externs/wrappers, `kernel-worker` imports, `driver` RPC handlers, and the wasi-libc patch +- `syscall_coverage` is the right place to add libc-level parity checks for new WASM host imports, and `packages/wasmvm/test/c-parity.test.ts` must list the new expected markers +--- + +## 2026-03-23 - US-013 + +## 2026-03-24 21:04 PDT - US-045 +- What was implemented +- Enforced socket-level non-blocking behavior in `SocketTable`: empty `accept()` and `recv()` now fail with `EAGAIN` when `nonBlocking` is enabled +- Added `SocketTable.setNonBlocking()` as the explicit toggle API for existing sockets +- Made external non-blocking `connect()` reject with `EINPROGRESS` while the host adapter connection completes asynchronously in the background +- Added focused tests for non-blocking `recv`, non-blocking `accept`, non-blocking external `connect`, and toggling the socket mode +- Updated the kernel contract with the new non-blocking socket semantics +- Files changed +- `.agent/contracts/kernel.md` +- `packages/core/src/kernel/socket-table.ts` +- `packages/core/src/kernel/types.ts` +- `packages/core/test/kernel/external-connect.test.ts` +- `packages/core/test/kernel/socket-flags.test.ts` +- `scripts/ralph/prd.json` +- `scripts/ralph/progress.txt` +- **Learnings for future iterations:** +- Patterns discovered +- Non-blocking socket mode is best modeled as per-socket state in `SocketTable`; `MSG_DONTWAIT` remains a per-call override layered on top +- Gotchas encountered +- Because `SocketTable.connect()` is async, returning `EINPROGRESS` for non-blocking external connects means rejecting the call immediately while separately completing the host connect path in a background promise +- Useful context +- Focused validation for this story is `pnpm vitest run packages/core/test/kernel/socket-flags.test.ts packages/core/test/kernel/external-connect.test.ts packages/core/test/kernel/socket-table.test.ts` and `pnpm tsc --noEmit -p packages/core/tsconfig.json` +--- +- Implemented external server socket routing via host adapter in SocketTable +- listen() is now async (Promise) with optional `{ external: true }` parameter +- When external: calls hostAdapter.tcpListen(), stores HostListener on KernelSocket, starts accept pump +- Accept pump loops on hostListener.accept(), creates kernel sockets for each incoming connection, starts read pumps +- Ephemeral port (port 0) updates localAddr and re-registers in listeners map with actual port from HostListener.port +- destroySocket() calls hostListener.close() for external listeners; disposeAll() also cleans up host listeners +- Updated all existing test files to use async/await for listen() calls (same pattern as connect() in US-012) +- Files changed: packages/core/src/kernel/socket-table.ts (async listen, hostListener on KernelSocket, startAcceptPump, destroySocket/disposeAll cleanup), packages/core/test/kernel/external-listen.test.ts (new, 14 tests), packages/core/test/kernel/socket-table.test.ts (async listen), packages/core/test/kernel/loopback.test.ts (async), packages/core/test/kernel/socket-flags.test.ts (async), packages/core/test/kernel/socket-shutdown.test.ts (async), packages/core/test/kernel/external-connect.test.ts (async), packages/core/test/kernel/network-permissions.test.ts (async) +- **Learnings for future iterations:** + - Making listen() async follows the same pattern as connect() — all callers need await, sync throw tests need .rejects.toThrow() + - MockHostListener.pushConnection() simulates incoming connections; pushData()/pushEof() on MockHostSocket controls data flow + - Ephemeral port 0 requires re-registering in the listeners map after getting the actual port from the host listener + - Accept pump is fire-and-forget like read pump — errors stop the pump silently (listener closed) + - disposeAll should iterate sockets and close both hostSocket and hostListener before clearing the maps +--- + +## 2026-03-23 - US-014 +- Implemented UDP datagram sockets (SOCK_DGRAM) in SocketTable +- sendTo(): loopback routing via findBoundUdp(), external routing via hostAdapter.udpSend(), silent drop for unbound ports +- recvFrom(): returns { data, srcAddr } with message boundary preservation, supports MSG_PEEK and MSG_DONTWAIT +- bindExternalUdp(): async setup for external UDP via hostAdapter.udpBind() with recv pump +- Separate udpBindings map from TCP listeners — TCP and UDP can share the same port +- UdpDatagram type, MAX_DATAGRAM_SIZE (65535), MAX_UDP_QUEUE_DEPTH (128) constants +- EMSGSIZE added to KernelErrorCode for oversized datagrams +- Updated poll() to check datagramQueue for UDP readability +- Updated destroySocket/disposeAll for hostUdpSocket cleanup and udpBindings cleanup +- Files changed: packages/core/src/kernel/types.ts (EMSGSIZE), packages/core/src/kernel/socket-table.ts (sendTo/recvFrom/bindExternalUdp/findBoundUdp/isUdpAddrInUse/startUdpRecvPump, udpBindings map, updated bind/poll/destroySocket/disposeAll), packages/core/src/kernel/index.ts (new exports), packages/core/test/kernel/udp-socket.test.ts (new, 25 tests) +- **Learnings for future iterations:** + - TCP and UDP must use separate binding maps (listeners vs udpBindings) because they are independent port namespaces — the same address key can exist in both + - findBoundUdp() matches sockets in 'bound' state (not 'listening') since UDP doesn't have a listen step + - UDP sendTo to unbound port returns data.length (not an error) — silent drop is correct UDP semantics + - Message boundary preservation: each sendTo = one datagramQueue entry; recvFrom pops one entry and truncates excess beyond maxBytes (unlike TCP which does partial chunk reads) + - External UDP pattern: bind() locally, then bindExternalUdp() creates the host UDP socket and starts a recv pump (startUdpRecvPump) — sendTo checks for hostUdpSocket before routing externally + - MockHostUdpSocket with pushDatagram() controls the recv pump in tests; use setTimeout(r, 10) to allow pump microtasks to run +--- + +## 2026-03-23 - US-015 +- Implemented Unix domain sockets (AF_UNIX) with VFS integration in SocketTable +- bind() with UnixAddr creates a socket file in VFS (S_IFSOCK mode), connect() checks VFS path exists +- SOCK_STREAM: full data exchange, half-close, EOF propagation — reuses existing loopback data paths +- SOCK_DGRAM: message boundary preservation via sendTo/recvFrom, silent drop for unbound paths +- Always in-kernel routing — no host adapter involvement for Unix sockets +- EADDRINUSE when path exists in VFS (including regular files, not just socket entries) +- ECONNREFUSED when socket file removed from VFS (even if listeners map still has entry) +- Modified InMemoryFileSystem.chmod() to support explicit file type bits (S_IFSOCK | perms) +- bind() is now async (Promise) — all existing test files updated with await +- Files changed: packages/core/src/kernel/socket-table.ts (VFS option, async bind, createSocketFile, connect VFS check, S_IFSOCK constant), packages/core/src/shared/in-memory-fs.ts (S_IFSOCK, chmod type bits), packages/core/src/kernel/index.ts (S_IFSOCK export), packages/core/test/kernel/unix-socket.test.ts (new, 14 tests), 8 existing test files (async bind migration) +- **Learnings for future iterations:** + - bind() is now async like connect() and listen() — all callers must use await; sync throw tests must use .rejects.toThrow() + - InMemoryFileSystem.chmod() supports caller-provided type bits: if mode & 0o170000 is non-zero, the type bits are used directly; otherwise existing behavior preserved + - VFS is optional for SocketTable — Unix sockets still work via listeners map alone; VFS adds socket file creation and path existence checks + - Unix domain sockets share the listeners map with TCP for SOCK_STREAM, and udpBindings map for SOCK_DGRAM — addrKey() uses the path string as the key + - connect() for Unix addresses checks VFS existence before listeners map — this means removing the socket file (vfs.removeFile) causes ECONNREFUSED even if the listener entry still exists +--- + +## 2026-03-23 - US-016 +- Exposed SocketTable as a public property on KernelImpl +- KernelImpl constructor creates SocketTable with VFS reference +- onProcessExit hook calls socketTable.closeAllForProcess(pid) to clean up sockets on process exit +- dispose() calls socketTable.disposeAll() before driver teardown +- Added 5 integration tests: expose check, create/close, dispose cleanup, process exit cleanup, loopback TCP +- Files changed: packages/core/src/kernel/types.ts (socketTable on Kernel interface), packages/core/src/kernel/kernel.ts (SocketTable import, property, constructor init, onProcessExit hook, dispose), packages/core/test/kernel/kernel-integration.test.ts (5 new tests) +- **Learnings for future iterations:** + - SocketTable.get() returns null (not undefined) for missing sockets — use toBeNull() in assertions + - Process exit cleanup chain: ProcessTable.markExited → onProcessExit callback → cleanupProcessFDs + socketTable.closeAllForProcess + - SocketTable constructor accepts { vfs } option — pass kernel's VFS for Unix domain socket file management + - dispose() order matters: terminateAll() first (triggers onProcessExit for each process), then disposeAll() for any remaining sockets, then driver teardown +--- + +## 2026-03-23 - US-017 +- Implemented TimerTable with per-process ownership, budget enforcement, and cross-process isolation +- KernelTimer struct: id, pid, delayMs, repeat, hostHandle, callback, cleared flag +- TimerTable: createTimer, clearTimer, get, getActiveTimers, countForProcess, setLimit, clearAllForProcess, disposeAll +- Budget enforcement: configurable defaultMaxTimers + per-process overrides via setLimit(); throws EAGAIN when exceeded +- Cross-process isolation: clearTimer with pid param rejects if caller doesn't own the timer (EACCES) +- Host scheduling delegation: hostHandle field on KernelTimer for callers to store setTimeout/setInterval handle +- Files changed: packages/core/src/kernel/timer-table.ts (new), packages/core/src/kernel/index.ts (exports), packages/core/test/kernel/timer-table.test.ts (new, 23 tests) +- **Learnings for future iterations:** + - TimerTable follows the same Map + nextId pattern as InodeTable and SocketTable + - Budget enforcement is inline in createTimer() — no separate enforceLimit() method needed; constructor option + setLimit() per-process override + - clearTimer without pid param allows unconditional clear (for kernel-internal cleanup); with pid enables cross-process isolation + - hostHandle is mutable on KernelTimer — callers set it after createTimer() returns, before the timer fires + - cleared flag lets callers check if a timer was cancelled (e.g., to skip callback invocation in the host scheduling loop) +--- + +## 2026-03-23 - US-018 +- Extended ProcessEntry with activeHandles (Map) and handleLimit (number, 0=unlimited) +- Added registerHandle(pid, id, description), unregisterHandle(pid, id), setHandleLimit(pid, limit), getHandles(pid) methods to ProcessTable +- Budget enforcement: registerHandle throws EAGAIN when activeHandles.size >= handleLimit (if limit > 0) +- Process exit cleanup: markExited() clears activeHandles before onProcessExit callback +- getHandles() returns a defensive copy to prevent external mutation of kernel state +- Files changed: packages/core/src/kernel/types.ts (ProcessEntry fields), packages/core/src/kernel/process-table.ts (handle methods + cleanup), packages/core/test/kernel/process-table.test.ts (13 new tests, 41 total) +- **Learnings for future iterations:** + - Handle tracking is simpler than TimerTable — no separate class needed, just Map fields on ProcessEntry + methods on ProcessTable + - EBADF is the right error for unknown handle IDs (not ENOENT) — consistent with FD error conventions + - Handle cleanup in markExited() must happen before onProcessExit callback to ensure consistent state for downstream cleanup hooks + - kernel-integration.test.ts has 2 pre-existing flaky stdin streaming failures unrelated to handle work +--- + +## 2026-03-23 - US-019 +- Implemented DnsCache class in packages/core/src/kernel/dns-cache.ts +- lookup(hostname, rrtype) returns cached DnsResult or null; expired entries return null and are lazily removed +- store(hostname, rrtype, result, ttlMs?) caches with TTL; uses configurable defaultTtlMs (30s) if not specified +- flush() clears all entries; size getter for entry count +- Cache key is "hostname:rrtype" composite string — distinguishes A vs AAAA for same hostname +- Files changed: packages/core/src/kernel/dns-cache.ts (new), packages/core/src/kernel/index.ts (exports), packages/core/test/kernel/dns-cache.test.ts (new, 16 tests) +- **Learnings for future iterations:** + - DnsCache is simpler than other kernel tables — no per-process ownership, no KernelError throws, just a TTL Map + - DnsResult type is imported from host-adapter.ts (address: string, family: 4|6) + - Lazy expiry: expired entries are removed on lookup, not by a background timer — keeps implementation simple + - vi.useFakeTimers()/vi.advanceTimersByTime() is the pattern for testing time-dependent behavior in vitest + - DnsCacheOptions follows the same constructor options pattern as TimerTableOptions +--- + +## 2026-03-23 - US-020 +- Implemented full POSIX sigaction/sigprocmask semantics in ProcessTable +- SignalHandler type: handler ('default' | 'ignore' | function), mask (sa_mask), flags (SA_RESTART, SA_NOCLDSTOP) +- ProcessSignalState on ProcessEntry: handlers Map, blockedSignals Set, pendingSignals Set +- sigaction(pid, signal, handler): registers handler, returns previous, rejects SIGKILL/SIGSTOP +- sigprocmask(pid, how, set): SIG_BLOCK/SIG_UNBLOCK/SIG_SETMASK, filters SIGKILL/SIGSTOP, delivers pending on unblock +- deliverSignal refactored: checks blocked → queue, checks handler → dispatch, default action for unregistered +- SIGCONT always resumes (POSIX) even when caught or blocked; handler invoked after resume +- SIGCHLD default action is now "ignore" (correct POSIX) — updated existing test to use registered handler +- Standard signals (1-31) coalesce via Set — only one pending per signal number +- Pending signals delivered in ascending signal number order +- sa_mask temporarily blocked during handler execution, restored after +- SIGALRM delivery now routes through handler system +- EINTR added to KernelErrorCode for future SA_RESTART integration +- Files changed: packages/core/src/kernel/types.ts (SignalHandler, ProcessSignalState, SA_RESTART, SA_NOCLDSTOP, SIG_BLOCK/UNBLOCK/SETMASK, EINTR, signalState on ProcessEntry), packages/core/src/kernel/process-table.ts (sigaction, sigprocmask, getSignalState, deliverSignal/dispatchSignal/applyDefaultAction/deliverPendingSignals refactor), packages/core/src/kernel/index.ts (new exports), packages/core/test/kernel/signal-handlers.test.ts (new, 28 tests), packages/core/test/kernel/process-table.test.ts (updated SIGCHLD test) +- **Learnings for future iterations:** + - SIGCONT is special: resume always happens regardless of handler/blocking — then handler is dispatched; other signals can be purely handler-overridden + - SIGCHLD default action is "ignore" per POSIX — tests expecting driverProcess.kill(SIGCHLD) need a registered handler + - Recursive deliverPendingSignals can cause double-dispatch — check pendingSignals.has(sig) before dispatching from snapshot array + - deliverSignal → dispatchSignal → applyDefaultAction three-level dispatch keeps POSIX semantics clean + - ProcessEntry.signalState is initialized in register() — no separate initialization step needed +--- + +## 2026-03-23 - US-021 +- Implemented concrete Node.js HostNetworkAdapter in packages/nodejs/src/host-network-adapter.ts +- NodeHostSocket: wraps net.Socket with queued-read model (data/EOF buffered, each read() returns next chunk or null) +- NodeHostListener: wraps net.Server with connection queue; accept() returns next HostSocket +- NodeHostUdpSocket: wraps dgram.Socket with message queue; recv() returns next datagram +- createNodeHostNetworkAdapter() factory: tcpConnect (net.connect), tcpListen (net.createServer), udpBind (dgram.createSocket), udpSend (dgram.send), dnsLookup (dns.lookup) +- Added HostNetworkAdapter/HostSocket/HostListener/HostUdpSocket/DnsResult type exports to @secure-exec/core main index.ts +- Exported createNodeHostNetworkAdapter from packages/nodejs/src/index.ts +- Files changed: packages/nodejs/src/host-network-adapter.ts (new), packages/nodejs/src/index.ts (export), packages/core/src/index.ts (type exports) +- **Learnings for future iterations:** + - Host adapter types were only in kernel/index.ts, not the core main index — had to add type exports to packages/core/src/index.ts + - After editing core exports, must rebuild core (`pnpm turbo run build --filter=@secure-exec/core`) before nodejs typecheck can see the new types + - The queued-read pattern (readQueue + waiters array) is reusable for any pull-based async reader wrapping push-based Node streams + - udpSend needs access to the underlying dgram.Socket — uses casting through the wrapper since HostUdpSocket interface is opaque + - HostSocket.setOption is a simple pass-through; real option-to-setsockopt mapping will be needed when wired into the kernel +--- + +## 2026-03-23 - US-022 +- Migrated Node.js FD table from in-isolate Map to host-side kernel ProcessFDTable +- Added 8 new bridge handler keys (fdOpen, fdClose, fdRead, fdWrite, fdFstat, fdFtruncate, fdFsync, fdGetPath) to bridge-contract.ts +- Added buildKernelFdBridgeHandlers() in bridge-handlers.ts — creates FDTableManager + ProcessFDTable per execution, delegates I/O to VFS +- Wired FD handlers into execution-driver.ts dispatch handlers (routed through _loadPolyfill bridge dispatch) +- Replaced all fdTable.get/set/has/delete in bridge/fs.ts with bridge calls to kernel FD handlers +- Removed fdTable Map, nextFd counter, MAX_BRIDGE_FDS, canRead(), canWrite() from bridge/fs.ts +- readSync/writeSync now use base64 encoding for binary data transfer across the bridge boundary +- Files changed: packages/nodejs/src/bridge-contract.ts (8 new keys), packages/nodejs/src/bridge-handlers.ts (buildKernelFdBridgeHandlers), packages/nodejs/src/execution-driver.ts (wiring + cleanup), packages/nodejs/src/bridge/fs.ts (fdTable removal, bridge call migration) +- **Learnings for future iterations:** + - Bridge globals not in the Rust V8 SYNC_BRIDGE_FNS are automatically dispatched through _loadPolyfill via BRIDGE_DISPATCH_SHIM — no Rust code changes needed for new bridge functions + - The dispatch shim JSON-serializes args and results, so binary data must be base64-encoded + - After modifying bridge source (bridge/fs.ts), the bridge IIFE must be rebuilt via `pnpm turbo run build --filter=@secure-exec/nodejs` for changes to take effect in tests + - FD operations (open/close/read/write/fstat) now go through the bridge dispatch; error messages must contain "EBADF"/"ENOENT" substrings for the in-isolate error wrapping to produce correct fs error codes + - ProcessFDTable from @secure-exec/core handles FD allocation, cursor tracking, and reference counting — bridge handlers don't need to implement these manually + - resource-budgets.test.ts has 7 pre-existing flaky failures unrelated to FD migration + - runtime.test.ts has 2 pre-existing PTY/setRawMode failures unrelated to FD migration +--- + +## 2026-03-23 - US-023 +- Migrated Node.js net.connect to route through kernel socket table instead of direct host TCP +- buildNetworkSocketBridgeHandlers now accepts optional socketTable + pid; when provided, uses kernel socket routing +- Kernel path: create kernel socket (sync, returns ID) → async connect → read pump dispatches data/end/close events +- Read pump uses socket.readWaiters.enqueue().wait() to block until data arrives, then dispatches via bridge events +- Fallback path preserved: when socketTable is not provided, original direct net.Socket behavior is used (backward compat) +- Added hostNetworkAdapter to KernelOptions and wired to SocketTable constructor for external connection routing +- Added socketTable to KernelInterface, exposed from createKernelInterface() in kernel.ts +- Added socketTable/pid to NodeExecutionDriverOptions, passed through execution-driver to bridge handlers +- kernel-runtime.ts passes kernel.socketTable and ctx.pid to NodeExecutionDriver +- Removed unused netSockets Map, nextNetSocketId, and netSocket* methods from createDefaultNetworkAdapter (driver.ts) +- Removed netSocket* methods from NetworkAdapter interface (core/types.ts) and permission wrappers (permissions.ts) +- Removed unused tls import from driver.ts +- Exported SocketTable, AF_INET, AF_INET6, AF_UNIX, SOCK_STREAM, SOCK_DGRAM from @secure-exec/core index +- TLS upgrade for external kernel sockets: accesses underlying net.Socket from NodeHostSocket for tls.connect wrapping +- Files changed: packages/core/src/kernel/types.ts (hostNetworkAdapter on KernelOptions, socketTable on KernelInterface), packages/core/src/kernel/kernel.ts (wire hostAdapter, expose socketTable on KernelInterface), packages/core/src/index.ts (SocketTable + constant exports), packages/core/src/types.ts (removed netSocket* from NetworkAdapter), packages/core/src/shared/permissions.ts (removed netSocket* wrappers), packages/nodejs/src/bridge-handlers.ts (kernel socket routing + fallback), packages/nodejs/src/execution-driver.ts (socketTable/pid passthrough), packages/nodejs/src/isolate-bootstrap.ts (socketTable/pid on options), packages/nodejs/src/kernel-runtime.ts (wire socketTable/pid), packages/nodejs/src/driver.ts (removed netSockets + tls import) +- **Learnings for future iterations:** + - SocketTable.close() requires both socketId AND pid — per-process isolation check + - The kernel's connect() is async but bridge handlers are sync — return socketId immediately, dispatch events async (matches existing bridge pattern) + - The read pump waits on socket.readWaiters (WaitQueue) for data — no polling needed + - External kernel sockets have hostSocket (NodeHostSocket) wrapping real net.Socket — TLS upgrade accesses the inner socket via casting + - NetworkAdapter.netSocket* methods were dead code — never called by any consumer; bridge handlers are the actual path + - When adding exports to @secure-exec/core index.ts, must rebuild core before downstream packages can see them +--- + +## 2026-03-23 - US-024 +- Migrated Node.js http.createServer to route through kernel socket table instead of adapter.httpServerListen +- When socketTable + pid available, bridge handler creates kernel socket → bind → listen (external: true) +- Kernel creates real TCP listener via hostAdapter.tcpListen(), accept pump feeds connections to local http.Server +- Created KernelSocketDuplex class (stream.Duplex) to bridge kernel sockets to Node http module for HTTP parsing +- Accept loop dequeues connections from kernel listener backlog and feeds them to http.Server via emit('connection') +- HTTP protocol parsing stays on host side (in Node http module) — kernel handles TCP, bridge handles HTTP +- For loopback: sandbox connect() pairs kernel sockets directly, no real TCP involved +- For external: hostAdapter.tcpListen creates real net.Server, kernel accept pump creates kernel sockets for incoming connections +- Added trackOwnedPort/untrackOwnedPort to NetworkAdapter interface for SSRF loopback exemption coordination +- Removed serverRequestListeners Map from bridge/network.ts — request listener stored directly on Server instance +- Changed buildNetworkBridgeHandlers to return NetworkBridgeResult { handlers, dispose } for kernel HTTP server cleanup +- Fallback adapter path preserved: when socketTable not provided, existing adapter.httpServerListen behavior is used +- Files changed: packages/core/src/types.ts (trackOwnedPort/untrackOwnedPort on NetworkAdapter), packages/nodejs/src/bridge-handlers.ts (kernel HTTP server path, KernelSocketDuplex, accept loop, NetworkBridgeResult), packages/nodejs/src/execution-driver.ts (socketTable/pid passthrough to network bridge, dispose on cleanup), packages/nodejs/src/driver.ts (trackOwnedPort/untrackOwnedPort impl), packages/nodejs/src/bridge/network.ts (serverRequestListeners removal, _requestListener on Server instance) +- **Learnings for future iterations:** + - http.Server + server.emit('connection', duplexStream) feeds kernel socket data through Node's HTTP parser without real TCP + - KernelSocketDuplex needs socket-like properties (remoteAddress, remotePort, setNoDelay, setKeepAlive, setTimeout) for Node http module compatibility + - The kernel's listen() with { external: true } starts an internal accept pump — bridge handler's accept loop calls socketTable.accept() to dequeue connections + - buildNetworkBridgeHandlers now returns { handlers, dispose } — dispose closes all kernel HTTP servers on execution cleanup + - trackOwnedPort/untrackOwnedPort coordinates SSRF exemption between kernel HTTP servers and adapter fetch/httpRequest until US-025 migrates SSRF fully to kernel + - servers Map and ownedServerPorts Set in driver.ts remain for adapter fallback path — full removal deferred to US-025 +--- + +## 2026-03-23 - US-025 +- Migrated SSRF validation from driver.ts NetworkAdapter to bridge-handlers.ts with kernel socket table awareness +- Added assertNotPrivateHost, isPrivateIp, isLoopbackHost functions to bridge-handlers.ts +- Bridge handler checks SSRF before calling adapter.fetch() and adapter.httpRequest() +- Kernel-aware loopback exemption: assertNotPrivateHost uses socketTable.findListener() to check if a port has a kernel listener +- Adapter retains defense-in-depth SSRF checks (assertNotPrivateHost in redirect loop and httpRequest) for non-bridge callers +- Removed trackOwnedPort/untrackOwnedPort from NetworkAdapter interface and driver.ts (kernel listener check replaces ownedServerPorts for loopback exemption) +- Removed adapter.trackOwnedPort/untrackOwnedPort calls from kernel HTTP server path in bridge-handlers.ts +- Files changed: packages/core/src/types.ts (removed trackOwnedPort/untrackOwnedPort from NetworkAdapter), packages/nodejs/src/bridge-handlers.ts (SSRF functions + fetch/httpRequest SSRF checks), packages/nodejs/src/driver.ts (adapter SSRF comments updated, trackOwnedPort removed) +- **Learnings for future iterations:** + - socketTable.findListener({ host: '127.0.0.1', port }) returns the listening kernel socket or null — use for loopback port ownership check + - Defense-in-depth: adapter keeps basic SSRF for redirect validation; bridge handler adds kernel-aware primary check + - When testing SSRF changes, ALWAYS rebuild the bridge IIFE (pnpm turbo run build --filter=@secure-exec/nodejs --force) — stale bridge code causes misleading test failures + - ownedServerPorts Set remains in driver.ts for the adapter fallback path (httpServerListen) but kernel path uses socketTable.findListener() exclusively +--- + +## 2026-03-23 - US-026 +- Migrated Node.js child process registry to kernel process table +- On spawn: allocates PID from processTable.allocatePid(), registers with processTable.register() +- On exit: calls processTable.markExited(pid, code) for kernel-level process lifecycle tracking +- On kill: routes through processTable.kill(pid, signal) instead of direct SpawnedProcess.kill +- Created wrapAsDriverProcess() to adapt SpawnedProcess to kernel DriverProcess interface (adds onStdout/onStderr/onExit stubs) +- Removed activeChildren Map from bridge/child-process.ts — replaced with childProcessInstances (event routing only, not process state) +- Process state (running/exited) now tracked by kernel process table; sandbox-side Map only dispatches stream events +- Exposed processTable on KernelInterface (types.ts) and KernelImpl (kernel.ts) +- Added processTable to NodeExecutionDriverOptions, wired through execution-driver.ts and kernel-runtime.ts +- spawnSync also registers with kernel process table and marks exited on completion +- Files changed: packages/core/src/kernel/types.ts (processTable on KernelInterface), packages/core/src/kernel/kernel.ts (expose processTable), packages/nodejs/src/bridge-handlers.ts (kernel registration in spawn/exit/kill, wrapAsDriverProcess), packages/nodejs/src/execution-driver.ts (processTable passthrough), packages/nodejs/src/isolate-bootstrap.ts (processTable option), packages/nodejs/src/kernel-runtime.ts (wire processTable), packages/nodejs/src/bridge/child-process.ts (activeChildren → childProcessInstances) +- **Learnings for future iterations:** + - DriverProcess has onStdout/onStderr/onExit callback properties that SpawnedProcess lacks — wrap with null stubs when adapting + - ProcessTable.register() requires ProcessContext with env/cwd/fds — env must not be undefined (use ?? {}) + - processTable is private on KernelImpl but exposed on KernelInterface — drivers access via kernel interface object + - sessionToPid Map bridges between bridge handler's sessionId (internal counter) and kernel PID + - Fallback path preserved: when processTable not provided, original non-kernel behavior unchanged +--- + +## 2026-03-24 - US-027 +- Routed WasmVM TCP socket operations through kernel SocketTable instead of driver-private _sockets Map +- Removed _sockets Map and _nextSocketId counter from driver.ts +- netSocket → kernel.socketTable.create(domain, type, protocol, pid) +- netConnect → await kernel.socketTable.connect(socketId, { host, port }) — hostAdapter handles real TCP +- netSend → kernel.socketTable.send(socketId, data, flags) — TLS-upgraded sockets write directly +- netRecv → kernel.socketTable.recv() with readWaiters wait for blocking reads on external sockets +- netClose → kernel.socketTable.close(socketId, pid) + TLS socket cleanup +- netPoll → kernel.socketTable.poll() for socket readability, kernel.fdPoll for pipes +- netTlsConnect → accesses hostSocket's underlying net.Socket for TLS upgrade, stores in _tlsSockets +- kernel-worker.ts: localToKernelFd.set(kernelSocketId, kernelSocketId) on net_socket, delete on net_close +- Test updated: createMockKernel() provides SocketTable + real HostNetworkAdapter (TestHostSocket wrapping node:net) +- Files changed: packages/wasmvm/src/driver.ts (socket handler migration, _sockets→kernel.socketTable), packages/wasmvm/src/kernel-worker.ts (localToKernelFd mapping for socket FDs), packages/wasmvm/test/net-socket.test.ts (mock kernel + scoped call helpers) +- **Learnings for future iterations:** + - Kernel recv() returns null for both "no data yet" and "EOF" — distinguish by checking socket.external + peerWriteClosed for external, peerId existence for loopback + - WaitHandle timeout goes in WaitQueue.enqueue(timeoutMs), not WaitHandle.wait() — wait() takes no args + - TLS upgrade accesses NodeHostSocket's private socket field via (hostSocket as any).socket — set hostSocket=undefined to detach kernel read pump + - SocketTable.close() requires both socketId AND pid for per-process ownership check + - Test kernel mock only needs socketTable + fdPoll — other kernel methods not needed for socket tests + - Kernel socket IDs are used directly as WASM FDs — identity mapping in localToKernelFd for poll consistency +--- + +## 2026-03-24 - US-028 +- Implemented bind/listen/accept WASI extensions for WasmVM server sockets +- Added net_bind, net_listen, net_accept extern declarations and safe Rust wrappers to native/wasmvm/crates/wasi-ext/src/lib.rs +- Added net_bind, net_listen, net_accept import handlers to packages/wasmvm/src/kernel-worker.ts +- Added netBind, netListen, netAccept RPC handler cases to packages/wasmvm/src/driver.ts +- Added EAGAIN and EADDRINUSE errno codes to packages/wasmvm/src/wasi-constants.ts +- **Learnings for future iterations:** + - WASI errno codes for EAGAIN=6 and EADDRINUSE=3 were missing from wasi-constants.ts — when adding new socket operations, check that all possible KernelError codes have WASI errno mappings + - accept() handler needs to wait on acceptWaiters when backlog is empty, with 30s timeout matching recv() pattern + - Address serialization for bind uses same "host:port" format as connect; unix sockets use bare path (no colon) + - net_accept returns new FD via intResult and remote address string via data buffer — same dual-channel pattern used by getaddrinfo + - Rust vendor directory is fetched at build time (make wasm), cargo check won't work without it +--- + +## 2026-03-24 - US-029 +- Extended 0008-sockets.patch with bind(), listen(), accept() C implementations in host_socket.c +- Added WASM import declarations: __host_net_bind, __host_net_listen, __host_net_accept +- bind() follows same sockaddr-to-string pattern as connect() (AF_INET/AF_INET6 → "host:port") +- listen() is a simple passthrough with backlog clamped to non-negative +- accept() calls __host_net_accept, parses returned "host:port" string back into sockaddr_in/sockaddr_in6 +- Un-gated bind() and listen() declarations in sys/socket.h (removed #if wasilibc_unmodified_upstream guard) +- accept()/accept4() were already un-gated in wasi-libc at pinned commit 574b88da +- Files changed: native/wasmvm/patches/wasi-libc/0008-sockets.patch +- **Learnings for future iterations:** + - accept/accept4 declarations are NOT behind the wasilibc_unmodified_upstream guard in the pinned wasi-libc commit (574b88da) — only bind/listen/connect/socket need un-gating + - Address string format from host is "host:port" — use strrchr for last colon to handle IPv6 addresses + - The build script (patch-wasi-libc.sh) removes conflicting .o files from libc.a — bind/listen/accept don't need removal since they have no wasip1 stubs + - Patch hunk line counts must be updated when adding/removing lines — @@ header second pair is the new file line range +--- + +## 2026-03-24 - US-030 +- Added net_sendto and net_recvfrom WASI extensions for WasmVM UDP +- Rust: added extern declarations and safe wrappers in native/wasmvm/crates/wasi-ext/src/lib.rs + - net_sendto(fd, buf_ptr, buf_len, flags, addr_ptr, addr_len, ret_sent) -> errno + - net_recvfrom(fd, buf_ptr, buf_len, flags, ret_received, ret_addr, ret_addr_len) -> errno + - sendto() wrapper: takes fd, buf, flags, addr → Result + - recvfrom() wrapper: takes fd, buf, flags, addr_buf → Result<(u32, u32), Errno> +- kernel-worker.ts: net_sendto handler reads data + addr from WASM memory, dispatches to netSendTo RPC +- kernel-worker.ts: net_recvfrom handler dispatches to netRecvFrom RPC, unpacks [data|addr] from combined buffer +- driver.ts: netSendTo parses "host:port" addr, calls kernel.socketTable.sendTo() +- driver.ts: netRecvFrom waits for datagram (30s timeout), packs [data|addr] into combined response buffer with intResult = data length +- Files changed: native/wasmvm/crates/wasi-ext/src/lib.rs, packages/wasmvm/src/kernel-worker.ts, packages/wasmvm/src/driver.ts +- **Learnings for future iterations:** + - RPC response only has { errno, intResult, data } — no string field; for multi-value returns, pack into data buffer and use intResult as split offset + - The responseData → SIG_IDX_DATA_LEN path overwrites manual Atomics.store calls — always use responseData = combined for correct data length signaling + - sendTo/recvFrom already exist on SocketTable (packages/core/src/kernel/socket-table.ts) — only WASI host import and RPC plumbing needed +--- + +## 2026-03-24 - US-031 +- Added sendto() and recvfrom() C implementations to 0008-sockets.patch +- Added AF_UNIX support in address serialization via sockaddr_to_string() / string_to_sockaddr() helper functions +- sockaddr_to_string: AF_INET/AF_INET6 → "host:port", AF_UNIX → path string +- string_to_sockaddr: "host:port" → sockaddr_in/sockaddr_in6, no colon → sockaddr_un +- sendto() calls __host_net_sendto with serialized addr; falls back to send() when dest_addr is NULL +- recvfrom() calls __host_net_recvfrom, parses returned addr via string_to_sockaddr; falls back to recv() when src_addr is NULL +- Refactored connect(), bind(), accept() to use the shared helper functions (removed duplicated address serialization code) +- Added sockaddr_un definition with __has_include guard (WASI libc doesn't provide sys/un.h) +- Updated WASM import declarations to include net_sendto and net_recvfrom (matching lib.rs signatures) +- Updated patch hunk line count from 518 to 628 +- Files changed: native/wasmvm/patches/wasi-libc/0008-sockets.patch +- **Learnings for future iterations:** + - WASI libc doesn't include sys/un.h or define AF_UNIX — must define sockaddr_un inline with __has_include guard + - Address convention: inet addresses as "host:port", unix as bare path (no colon) — driver uses lastIndexOf(':') to distinguish + - The driver's netConnect handler doesn't support unix paths yet (returns EINVAL) — only netBind handles both; this is a known gap for future stories + - __builtin_offsetof works in clang for computing sun_path offset in sockaddr_un + - Patch line counts in @@ headers must be updated manually when adding lines to a /dev/null → new file diff +--- + +## 2026-03-24 - US-032 +- Added tcp_server.c C test program: socket() → bind(port) → listen() → accept() → recv() → send("pong") → close() +- Added tcp_server to PATCHED_PROGRAMS in native/wasmvm/c/Makefile +- Added packages/wasmvm/test/net-server.test.ts: integration test that spawns tcp_server WASM, connects via kernel socketTable loopback, sends "ping", receives "pong", verifies stdout output +- Files changed: native/wasmvm/c/programs/tcp_server.c (new), native/wasmvm/c/Makefile (PATCHED_PROGRAMS), packages/wasmvm/test/net-server.test.ts (new) +- **Learnings for future iterations:** + - For WASM server tests, start kernel.exec() without awaiting, poll findListener() for readiness, then connect via socketTable loopback + - Client sockets in test use a fake PID (e.g., 999) — socketTable.create doesn't validate pid against process table + - Loopback connect() is synchronous inside the async function — no host adapter needed for kernel-to-kernel routing + - recv() may return null when WASM worker hasn't processed yet — poll with setTimeout to yield to event loop between retries + - tcp_server prints "listening on port N" after listen() and fflush(stdout) — useful for verifying server readiness in test output +--- + +## 2026-03-24 - US-033 +- Added udp_echo.c C test program: socket(SOCK_DGRAM) → bind(port) → recvfrom() → sendto() (echo) → close() +- Added udp_echo to PATCHED_PROGRAMS in native/wasmvm/c/Makefile +- Added packages/wasmvm/test/net-udp.test.ts: integration test that spawns udp_echo WASM, sends datagram via kernel socketTable, verifies echo response and message boundary preservation +- Made findBoundUdp() public on SocketTable (was private) — mirrors findListener() for TCP, needed by test to poll for UDP binding readiness +- Files changed: native/wasmvm/c/programs/udp_echo.c (new), native/wasmvm/c/Makefile (PATCHED_PROGRAMS), packages/wasmvm/test/net-udp.test.ts (new), packages/core/src/kernel/socket-table.ts (findBoundUdp visibility) +- **Learnings for future iterations:** + - findBoundUdp was private on SocketTable — needed to make it public for test polling (mirrors findListener for TCP) + - UDP server tests poll waitForUdpBinding() instead of waitForListener() — separate binding map from TCP listeners + - UDP client sockets need bind() to ephemeral port (port 0) before sendTo — otherwise the kernel has no source address for the reply + - The 0008-sockets.patch has a context drift issue (hunk #2 fails without --fuzz=3) — pre-existing issue, not caused by this story + - C programs compile natively with `cc -O0 -g -I include/ -o udp_echo programs/udp_echo.c` for quick verification +--- + +## 2026-03-24 - US-034 +- Implemented WasmVM Unix domain socket C test program and integration test +- Created native/wasmvm/c/programs/unix_socket.c: AF_UNIX server (socket → bind → listen → accept → recv → send "pong") +- Added unix_socket to PATCHED_PROGRAMS in Makefile +- Fixed packages/wasmvm/src/driver.ts netConnect handler to support Unix domain socket paths (no colon = Unix path, matching netBind pattern) +- Created packages/wasmvm/test/net-unix.test.ts: spawns unix_socket WASM, connects from kernel, verifies data exchange +- Files changed: native/wasmvm/c/programs/unix_socket.c (new), native/wasmvm/c/Makefile, packages/wasmvm/src/driver.ts, packages/wasmvm/test/net-unix.test.ts (new) +- **Learnings for future iterations:** + - netConnect in driver.ts was missing Unix domain socket path support — netBind had it but netConnect returned EINVAL for pathless addresses + - Unix socket C programs need fallback sockaddr_un definition since sys/un.h may not be available in WASI — the 0008-sockets.patch provides its own but __has_include guard is needed + - waitForUnixListener uses findListener({ path }) instead of findListener({ host, port }) — same method, different address type + - SimpleVFS needs /tmp directory created in beforeEach for unix socket files to be created by the kernel +--- + +## 2026-03-24 - US-035 +- Implemented WasmVM cooperative signal handler support: WASI extension, kernel integration, C sysroot patch, test program, integration test +- Added proc_sigaction to host_process module in native/wasmvm/crates/wasi-ext/src/lib.rs (signal, action) -> errno +- Extended SAB protocol with SIG_IDX_PENDING_SIGNAL slot in packages/wasmvm/src/syscall-rpc.ts for cooperative delivery +- Added sigaction RPC dispatch in packages/wasmvm/src/driver.ts — registers handler in kernel process table, piggybacking pending signals in RPC responses +- Added _wasmPendingSignals Map for per-PID signal queuing in driver +- Added proc_sigaction host import handler in packages/wasmvm/src/kernel-worker.ts +- Added cooperative signal delivery: after each rpcCall, check SIG_IDX_PENDING_SIGNAL and invoke wasmTrampoline +- Added wasmTrampoline wiring after WASM instantiation (reads __wasi_signal_trampoline export) +- Created 0011-sigaction.patch: signal() implementation + __wasi_signal_trampoline export in C sysroot +- Created native/wasmvm/c/programs/signal_handler.c: registers SIGINT handler, busy-loops with usleep, prints caught signal +- Added signal_handler to PATCHED_PROGRAMS in Makefile +- Created packages/wasmvm/test/signal-handler.test.ts: spawns signal_handler WASM, delivers SIGINT via ManagedProcess.kill(), verifies handler fires +- Files changed: native/wasmvm/crates/wasi-ext/src/lib.rs, packages/wasmvm/src/syscall-rpc.ts, packages/wasmvm/src/driver.ts, packages/wasmvm/src/kernel-worker.ts, native/wasmvm/patches/wasi-libc/0011-sigaction.patch (new), native/wasmvm/c/programs/signal_handler.c (new), native/wasmvm/c/Makefile, packages/wasmvm/test/signal-handler.test.ts (new) +- **Learnings for future iterations:** + - Kernel public Kernel interface has no kill(pid, signal) — use ManagedProcess.kill() from spawn() for tests, or kernel.processTable.kill() internally + - SignalDisposition type is exported from @secure-exec/core kernel index but NOT from the main package entry point — use inline type or import from kernel path + - Cooperative signal delivery architecture: handler registered in kernel is a JS callback that queues to _wasmPendingSignals; driver piggybacking delivers one signal per RPC response in SIG_IDX_PENDING_SIGNAL; worker reads it and calls wasmTrampoline + - C sysroot signal handling: signal() stores handler in static table + calls proc_sigaction WASM import; __wasi_signal_trampoline dispatches to stored handler + - Signals only delivered at syscall boundaries (fundamental WASM limitation) — long compute loops without syscalls won't see signals + - Pre-existing test failures in fd-table.test.ts, wasi-polyfill.test.ts, net-socket.test.ts, resource-exhaustion.test.ts — not related to this work +--- + +## 2026-03-24 - US-036 +- Implemented cross-runtime network integration test in packages/secure-exec/tests/kernel/cross-runtime-network.test.ts +- Three tests: (1) WasmVM tcp_server ↔ Node.js net.connect data exchange, (2) Node.js http.createServer ↔ WasmVM http_get HTTP exchange, (3) loopback verification via direct kernel socket table access +- Uses createKernel with both WasmVM (C_BUILD_DIR + COMMANDS_DIR) and Node.js runtimes mounted +- Skip-guarded for missing WASM binaries (tcp_server, http_get) +- Files changed: packages/secure-exec/tests/kernel/cross-runtime-network.test.ts (new) +- **Learnings for future iterations:** + - createIntegrationKernel helper only includes COMMANDS_DIR (Rust binaries); for C WASM programs, create kernel manually with commandDirs: [C_BUILD_DIR, COMMANDS_DIR] + - http_get.c is a ready-made HTTP client C program that does GET and prints body — useful for cross-runtime HTTP tests + - waitForListener() pattern: poll kernel.socketTable.findListener() in a loop for server readiness + - For long-running server processes, use kernel.spawn() with kill() cleanup; for one-shot servers (like tcp_server), use kernel.exec() which completes after one connection +--- + +## 2026-03-24 - US-037 +- Re-ran full Node.js conformance suite (3532 tests) after kernel consolidation +- Genuine pass rate improved from 11.3% (399/3532) to 19.9% (704/3532) — 305 new genuine passes +- 357 tests that were expected-fail now genuinely pass — removed their expectations +- 49 previously-passing tests now fail due to implementation gaps — added specific failure reasons +- 38 tests passing under glob-match patterns got pass overrides +- FIX-01 (HTTP server tests): 183 of 492 tests now pass (37% resolved) +- Files changed: expectations.json (restored + updated), runner.test.ts (restored), common/ shims (restored), conformance-report.json, nodejs-compat-roadmap.md, package.json (minimatch dep) +- **Learnings for future iterations:** + - The conformance runner was deleted in commit 2783baf3 — needs to be restored from git history before running + - Tests marked `expected: "fail"` that hang forever still time out and fail vitest — use `expected: "skip"` for tests that hang + - Glob patterns in expectations.json need explicit pass overrides for individual tests that now genuinely pass + - `minimatch` npm package is needed for the conformance runner (glob pattern matching) + - Full conformance suite takes ~3-5 minutes to run (3532 tests at 30s timeout each) + - Newly failing tests (regressions from expected-pass) need investigation and proper categorization +--- + +## 2026-03-24 - US-038 +- Reclassified dgram, net, tls, https, http2 conformance test expectations from `unsupported-module` to `implementation-gap` +- Re-ran all 735 tests across 5 network modules: 38 genuinely pass, 697 fail (same as before reclassification) +- Failure breakdown: 494 assertion failures (API gaps), 169 missing fixture files (TLS certs), 16 timeouts, 13 cluster-dependent, 5 other +- Updated expectations.json: glob patterns reclassified, individual pass overrides preserved +- Updated conformance-report.json with correct module-level counts +- Updated docs-internal/nodejs-compat-roadmap.md: unsupported-module 1226→735, implementation-gap 762→1366 +- Files changed: expectations.json, conformance-report.json, nodejs-compat-roadmap.md, prd.json +- **Learnings for future iterations:** + - When running conformance tests with `-t "node/"`, expected-fail tests that actually fail show as vitest PASSES — don't confuse this with the test genuinely passing + - To find genuinely passing tests, you must check the vitest JSON output for `status: "passed"` vs failure messages containing "expected to fail but passed" + - Most TLS/HTTPS conformance failures are from missing fixture files (certs, keys) not loaded into the VFS, not from actual API gaps + - dgram and net failures are mostly API assertion failures — the kernel socket table provides the transport but the bridge surface area has gaps + - http2 has the most failures (252) — mostly assertion failures in protocol handling +--- + +## 2026-03-24 - US-039 +- Completed adversarial proofing audit of kernel consolidation implementation +- Verified WasmVM driver.ts is fully migrated — no legacy _sockets or _nextSocketId +- Verified kernel path exists for http.createServer (socketTable.create → bind → listen) +- Verified kernel path exists for net.connect (socketTable.create → socketTable.connect) +- Verified host-network-adapter.ts has no SSRF validation (clean delegation) +- Verified kernel checkNetworkPermission() covers connect, listen, send, sendTo, externalListen +- Documented 4 remaining gaps as future work (legacy adapter fallback paths) +- Created docs-internal/kernel-consolidation-audit.md with full findings +- Files changed: docs-internal/kernel-consolidation-audit.md (new), prd.json, progress.txt +- **Learnings for future iterations:** + - The legacy adapter path (createDefaultNetworkAdapter in driver.ts) still has servers/ownedServerPorts/upgradeSockets Maps because createNodeRuntimeDriverFactory creates drivers without kernel routing + - Bridge-side activeNetSockets Map in bridge/network.ts is event routing only (like childProcessInstances) — it maps socket IDs to bridge NetSocket instances for dispatching host events + - SSRF validation is intentionally duplicated: bridge-handlers.ts has kernel-aware version (socketTable.findListener), driver.ts has adapter version (ownedServerPorts) — the adapter copy is defense-in-depth for the fallback path + - Removing the legacy adapter networking requires migrating NodeRuntime to use KernelNodeRuntime as its backing implementation — this is a separate workstream +--- + +## 2026-03-24 - Completion +- All user stories US-001 through US-039 now have passes: true +- Committed completion marker: c5523e80 +--- + +## 2026-03-24 17:13 PDT - US-040 +- Removed the adapter-managed HTTP server surface from `NetworkAdapter` and its permission wrapper/stub so Node runtime networking stays client-only at the adapter layer while server/listener state remains kernel-managed +- Deleted the legacy loopback HTTP server implementation from `packages/nodejs/src/default-network-adapter.ts`; kept only fetch/DNS/httpRequest plus upgrade-socket callbacks for client-side upgrade flows +- Updated runtime-driver tests to stop calling `adapter.httpServerListen/httpServerClose` directly and instead cover kernel-backed server behavior with sandbox `http.createServer()`, loopback checker usage, and `initialExemptPorts` where host-side requests need to reach a sandbox listener +- Synced docs/contracts to describe the narrower `NetworkAdapter` surface and the fact that standalone `NodeRuntime` still provisions an internal `SocketTable` for kernel-backed socket routing +- Quality checks run: + - `pnpm tsc --noEmit -p packages/core/tsconfig.json` ✅ + - `pnpm tsc --noEmit -p packages/nodejs/tsconfig.json` ✅ + - `pnpm tsc --noEmit -p packages/secure-exec/tsconfig.json` ✅ + - `pnpm vitest run packages/secure-exec/tests/test-suite/node.test.ts` ✅ + - `pnpm vitest run packages/secure-exec/tests/runtime-driver/` ❌ blocked by pre-existing unrelated failures; first concrete failure was `packages/secure-exec/tests/runtime-driver/node/hono-fetch-external.test.ts` with `Cannot read properties of null (reading 'compileScript')` +- Files changed: packages/core/src/types.ts, packages/core/src/shared/permissions.ts, packages/nodejs/src/default-network-adapter.ts, packages/secure-exec/tests/permissions.test.ts, packages/secure-exec/tests/runtime-driver/node/index.test.ts, packages/secure-exec/tests/runtime-driver/node/ssrf-protection.test.ts, packages/secure-exec/tests/runtime-driver/node/resource-budgets.test.ts, packages/secure-exec/tests/runtime-driver/node/bridge-hardening.test.ts, docs/api-reference.mdx, docs/features/networking.mdx, docs/system-drivers/node.mdx, docs-internal/arch/overview.md, .agent/contracts/node-runtime.md, progress.txt +- **Learnings for future iterations:** + - Standalone `NodeRuntime` no longer needs adapter-managed HTTP server helpers; `NodeExecutionDriver` already provisions a kernel `SocketTable` with a Node host adapter for listen/connect routing + - Keep `upgradeSocketWrite/End/Destroy` and `setUpgradeSocketCallbacks` on `NetworkAdapter` — they are still required for client-side HTTP upgrade flows even after removing adapter-managed server listeners + - Host-side tests that need to reach sandbox listeners are more reliable with fixed ports plus `initialExemptPorts` than with reintroducing owned-port bookkeeping into the adapter + - The required `packages/secure-exec/tests/runtime-driver/` command is currently red for unrelated branch issues, so US-040 should not be marked passing or committed until that suite is green +--- + +## 2026-03-24 17:22 PDT - US-040 +- Continued the US-040 cleanup already in progress and removed the now-unused `buildUpgradeSocketBridgeHandlers()` helper from `packages/nodejs/src/bridge-handlers.ts` +- Updated the bridge comment to reflect kernel-only TCP routing and added a bridge-side loopback checker that derives host-side loopback allowances from the active kernel-backed HTTP server set +- Re-ran focused verification after the bridge cleanup: + - `pnpm --filter @secure-exec/nodejs exec tsc --noEmit` ✅ + - `pnpm --filter secure-exec exec tsc --noEmit` ✅ + - `pnpm vitest run packages/nodejs/test/legacy-networking-policy.test.ts packages/secure-exec/tests/test-suite/node.test.ts packages/secure-exec/tests/runtime-driver/node/ssrf-protection.test.ts` ✅ + - `pnpm vitest run packages/secure-exec/tests/runtime-driver/node/index.test.ts -t "serves requests through bridged http.createServer and host network fetch|coerces 0.0.0.0 listen to loopback for strict sandboxing|can terminate a running sandbox HTTP server from host side|http.Agent with maxSockets=1 serializes concurrent requests"` ❌ still blocked by the broader Node runtime worktree; the sandbox HTTP server path never reaches `listen()` there, so SSRF remains blocked as a downstream symptom +- Files changed: packages/nodejs/src/bridge-handlers.ts, scripts/ralph/progress.txt +- **Learnings for future iterations:** + - The source-level policy test in `packages/nodejs/test/legacy-networking-policy.test.ts` is a good guardrail for this story; keep it when refactoring bridge/driver networking internals + - A passing SSRF adapter test does not prove host-side `runtime.network.fetch()` can reach sandbox listeners; that path also depends on the broader Node runtime successfully constructing the bridged HTTP server + - When the host-side sandbox HTTP server tests fail with SSRF, verify that the sandbox server actually reached `listen()` before assuming the loopback checker is the primary bug +--- + +## 2026-03-24 19:16 PDT - US-040 +- Finished the kernel-only HTTP bridge path by wiring `_networkHttpServerRespondRaw` and `_networkHttpServerWaitRaw` through the shared bridge contracts, Node bridge globals, and native V8 bridge registries +- Fixed the native V8 response receiver so sync bridge calls only consume matching `call_id` responses and defer unrelated `BridgeResponse` frames back to the event loop; this unblocked bridged `http.createServer()` shutdown/wait flows that were previously timing out +- Propagated `SocketTable.shutdown()` to real host sockets so accepted external TCP connections observe EOF correctly, and filled the shared custom-global inventory gaps that the bridge policy test surfaced +- Files changed: .agent/contracts/node-bridge.md, native/v8-runtime/src/host_call.rs, native/v8-runtime/src/session.rs, packages/core/src/kernel/socket-table.ts, packages/core/src/shared/bridge-contract.ts, packages/core/src/shared/global-exposure.ts, packages/core/test/kernel/external-listen.test.ts, packages/nodejs/src/bridge-contract.ts, packages/nodejs/src/bridge-handlers.ts, packages/nodejs/src/bridge/network.ts, packages/nodejs/src/execution-driver.ts, packages/nodejs/test/kernel-http-bridge.test.ts, packages/nodejs/test/legacy-networking-policy.test.ts, packages/secure-exec/tests/bridge-registry-policy.test.ts, packages/v8/src/runtime.ts, packages/v8/test/runtime-binary-resolution-policy.test.ts +- Quality checks run: + - `cargo build --release` in `native/v8-runtime` ✅ + - `pnpm tsc -p packages/v8/tsconfig.json` ✅ + - `pnpm turbo run build --filter=@secure-exec/nodejs` ✅ + - `pnpm vitest run packages/secure-exec/tests/runtime-driver/node/index.test.ts -t "serves requests through bridged http.createServer and host network fetch|coerces 0.0.0.0 listen to loopback for strict sandboxing|can terminate a running sandbox HTTP server from host side|http.Agent with maxSockets=1 serializes concurrent requests"` ✅ + - `pnpm vitest run packages/core/test/kernel/external-listen.test.ts packages/nodejs/test/kernel-http-bridge.test.ts packages/nodejs/test/legacy-networking-policy.test.ts packages/v8/test/runtime-binary-resolution-policy.test.ts` ✅ + - `pnpm vitest run packages/secure-exec/tests/bridge-registry-policy.test.ts` ✅ +- **Learnings for future iterations:** + - Bridged HTTP server hangs can come from native response routing, not just JS bridge state; check whether sync bridge calls are consuming the wrong `BridgeResponse` + - `packages/v8/src/runtime.ts` prefers the local cargo-built runtime binary in `native/v8-runtime/target/{release,debug}` before packaged binaries, so rebuild that binary when changing native bridge/session code + - The custom-global inventory policy test is valuable for catching drift between bridge contracts and the actual runtime/global surface; update the inventory instead of weakening the test when the bridge surface legitimately grows +--- + +## 2026-03-24 20:07 PDT - US-041 +- What was implemented +- Fixed stale WasmVM C build inputs so the patched wasi-libc sysroot and C programs build locally again +- Corrected socket/syscall patch drift in the native wasm sysroot patches and fixed malformed patch application for `host_spawn_wait.c` +- Updated WasmVM socket handling so host-net sockets use worker-local FDs instead of raw kernel socket IDs, and normalized wasi-libc socket constants before routing into `SocketTable` +- Added cooperative signal polling during WASI `poll_oneoff` sleep so `signal_handler` observes pending SIGINT while sleeping +- Verified `native/wasmvm/c` programs compile and the `net-server`, `net-udp`, `net-unix`, and `signal-handler` WasmVM tests execute and pass +- Files changed +- `native/wasmvm/c/Makefile` +- `native/wasmvm/patches/wasi-libc/0002-spawn-wait.patch` +- `native/wasmvm/patches/wasi-libc/0008-sockets.patch` +- `native/wasmvm/patches/wasi-libc/0011-sigaction.patch` +- `native/wasmvm/scripts/patch-wasi-libc.sh` +- `packages/wasmvm/src/driver.ts` +- `packages/wasmvm/src/kernel-worker.ts` +- `packages/wasmvm/src/wasi-polyfill.ts` +- `packages/wasmvm/src/wasi-types.ts` +- **Learnings for future iterations:** +- Patterns discovered +- `host_net` imports from wasi-libc use bottom-half/WASI socket constants (`AF_INET=1`, `AF_UNIX=3`, `SOCK_DGRAM=5`, `SOCK_STREAM=6`), so the WasmVM bridge must normalize them before touching the shared kernel socket table +- Worker-local socket FDs need the same local-to-kernel mapping discipline as files/pipes; raw kernel socket IDs are not safe to expose to WASM code +- Gotchas encountered +- `poll_oneoff` sleep is entirely local to the worker unless you explicitly tick back through RPC, so pending cooperative signals will starve during `usleep()` loops +- The old `0002-spawn-wait.patch` add-file header was malformed (`+++ libc-bottom-half/...`), which causes patch application to place the file outside the intended vendor path +- Useful context +- The CI failure on this branch was not just the reported crossterm symptom; the first hard failures were in the patched wasi-libc sysroot/socket/signal patch application path and stale zlib/minizip fetch URLs +--- + +## 2026-03-24 20:39 PDT - US-042 +- What was implemented +- Wired `KernelImpl` to own and expose `timerTable`, clear process timers on exit, and dispose timer state with the kernel +- Replaced bridge-local timer and active-handle tracking with kernel-backed dispatch handlers so Node.js bridge budgets are enforced by `TimerTable` and `ProcessTable` +- Added `_timerDispatch` stream delivery so host timers invoke bridge callbacks without leaving standalone `exec()` stuck on pending async bridge promises +- Added focused core and nodejs tests covering kernel timer exposure, process-exit cleanup, and kernel-backed timer/handle budget enforcement +- Files changed +- `packages/core/src/kernel/kernel.ts` +- `packages/core/src/kernel/types.ts` +- `packages/core/src/index.ts` +- `packages/core/test/kernel/kernel-integration.test.ts` +- `packages/core/src/shared/bridge-contract.ts` +- `packages/core/src/shared/global-exposure.ts` +- `packages/core/isolate-runtime/src/common/runtime-globals.d.ts` +- `packages/nodejs/src/bridge/process.ts` +- `packages/nodejs/src/bridge/active-handles.ts` +- `packages/nodejs/src/bridge/dispatch.ts` +- `packages/nodejs/src/bridge-handlers.ts` +- `packages/nodejs/src/execution-driver.ts` +- `packages/nodejs/src/isolate-bootstrap.ts` +- `packages/nodejs/src/kernel-runtime.ts` +- `packages/nodejs/src/bridge-contract.ts` +- `packages/nodejs/test/kernel-resource-bridge.test.ts` +- `native/v8-runtime/src/stream.rs` +- `.agent/contracts/kernel.md` +- `.agent/contracts/node-runtime.md` +- **Learnings for future iterations:** +- Patterns discovered +- Kernel-backed bridge operations fit best behind `_loadPolyfill` `__bd:` dispatch handlers; only add a runtime global when the host needs to push an event into the isolate, like `_timerDispatch` +- Standalone `NodeRuntime.exec()` and kernel-managed `node` processes need different timer-liveness semantics; standalone mode should clean up host timers without treating them as resources that keep `exec()` open +- Gotchas encountered +- Driving timer callbacks through pending async bridge promises causes delayed timers to keep standalone executions alive until timeout; use stream-event delivery for timer callbacks instead +- Kernel budget errors need bridge-side mapping back to the existing `ERR_RESOURCE_BUDGET_EXCEEDED` shapes so current tests and user-facing errors stay stable +- Useful context +- The focused `kernel-resource-bridge` test exercises the external-kernel path directly by injecting a shared `ProcessTable` and `TimerTable` into `NodeExecutionDriver` +--- + +## 2026-03-24 20:50 PDT - US-043 +- What was implemented +- Routed WasmVM `net_setsockopt` through the kernel socket table instead of returning `ENOSYS` +- Added `netGetsockopt` and `net_getsockopt` plumbing so socket options round-trip across the worker RPC boundary as raw bytes +- Tightened WasmVM socket address parsing so AF_INET sockets reject path-style addresses with `EINVAL` instead of being misrouted as AF_UNIX +- Files changed +- `CLAUDE.md` +- `packages/wasmvm/src/driver.ts` +- `packages/wasmvm/src/kernel-worker.ts` +- `packages/wasmvm/test/net-socket.test.ts` +- `scripts/ralph/prd.json` +- `scripts/ralph/progress.txt` +- **Learnings for future iterations:** +- Patterns discovered +- WasmVM `host_net` passes socket option values as little-endian byte slices, not JS numbers; convert at the driver boundary before calling `kernel.socketTable` +- `kernel-worker.ts` should stay as a thin marshal layer for `host_net` imports; keep kernel semantics in `packages/wasmvm/src/driver.ts` +- Gotchas encountered +- For WasmVM socket RPCs, only AF_UNIX sockets should treat colon-free addresses as paths; AF_INET/AF_INET6 should reject them with `EINVAL` +- Useful context +- The focused validation for this path is `pnpm vitest run packages/wasmvm/test/net-socket.test.ts` plus `pnpm tsc --noEmit` from `packages/wasmvm` +--- + +## 2026-03-24 20:59 PDT - US-044 +- What was implemented +- Added signal-delivery tracking to `ProcessTable` and a signal-aware blocking mode on `SocketTable.accept()` / `SocketTable.recv()` so blocking waits now return `EINTR` or transparently restart when the delivered handler carries `SA_RESTART` +- Wired `KernelImpl` to provide `getSignalState` to the shared socket table and added focused kernel tests for `recv` EINTR, `recv` restart, and `accept` restart behavior +- Updated the kernel contract to document socket wait interruption semantics +- Files changed +- `.agent/contracts/kernel.md` +- `packages/core/src/kernel/kernel.ts` +- `packages/core/src/kernel/process-table.ts` +- `packages/core/src/kernel/socket-table.ts` +- `packages/core/src/kernel/types.ts` +- `packages/core/src/kernel/wait.ts` +- `packages/core/test/kernel/signal-handlers.test.ts` +- `scripts/ralph/prd.json` +- `scripts/ralph/progress.txt` +- **Learnings for future iterations:** +- Patterns discovered +- Signal-aware socket waits need both an edge-trigger (`signalWaiters`) and a monotonic sequence (`deliverySeq`) to avoid lost wake-ups when a signal lands between the pre-check and waiter registration +- Keep `SocketTable` backward-compatible by layering blocking signal semantics behind overloads/options instead of changing the existing immediate `accept()` / `recv()` behavior used across the bridge and tests +- Gotchas encountered +- `SA_RESTART` only matters for delivered handlers; ignored signals and default-ignored `SIGCHLD` should not spuriously wake blocking socket waits +- Wait queues need explicit waiter removal for `Promise.race()`-style waits or settled signal/socket handles accumulate in the queue +- Useful context +- Focused validation for this path is `pnpm tsc --noEmit -p packages/core/tsconfig.json`, `pnpm tsc --noEmit -p packages/nodejs/tsconfig.json`, and `pnpm vitest run packages/core/test/kernel/signal-handlers.test.ts packages/core/test/kernel/socket-table.test.ts packages/core/test/kernel/socket-flags.test.ts packages/core/test/kernel/socket-shutdown.test.ts packages/core/test/kernel/loopback.test.ts` +--- + +## 2026-03-24 21:10 PDT - US-046 +- What was implemented +- Added bounded listener backlogs to `SocketTable.listen()` and refused excess loopback connections with `ECONNREFUSED` instead of letting pending connections grow without limit +- Added kernel-managed ephemeral port assignment for `bind({ port: 0 })` in the 49152-65535 range, while preserving the original port-0 intent so external host-backed listeners still delegate ephemeral selection to the host adapter +- Updated the kernel contract and root agent guidance to capture the backlog and ephemeral-port expectations +- Quality checks run: +- `pnpm tsc --noEmit -p packages/core/tsconfig.json` ✅ +- `pnpm vitest run packages/core/test/kernel/socket-table.test.ts packages/core/test/kernel/external-listen.test.ts` ✅ +- `pnpm vitest run packages/core/test/kernel/loopback.test.ts` ✅ +- Files changed +- `.agent/contracts/kernel.md` +- `CLAUDE.md` +- `packages/core/src/kernel/socket-table.ts` +- `packages/core/test/kernel/socket-table.test.ts` +- `scripts/ralph/prd.json` +- `scripts/ralph/progress.txt` +- **Learnings for future iterations:** +- Patterns discovered +- `listen(backlog)` needs a stored per-socket backlog limit because both loopback `connect()` and the external accept pump enqueue through the same listener backlog +- Preserving `port: 0` intent separately from the kernel-assigned temporary port avoids breaking external listeners that still need host-side ephemeral assignment +- Gotchas encountered +- `AGENTS.md` is a symlink to `CLAUDE.md` at repo root, so updating root agent guidance shows up as a `CLAUDE.md` diff +- Useful context +- Focused regression coverage for this story is `packages/core/test/kernel/socket-table.test.ts`, `packages/core/test/kernel/external-listen.test.ts`, and `packages/core/test/kernel/loopback.test.ts` +--- + +## 2026-03-24 21:47 PDT - US-048 +- What was implemented +- Validated the existing `US-048` inode/VFS integration work in the dirty tree instead of adding more code this turn +- Confirmed `pnpm tsc --noEmit` and `pnpm vitest run test/kernel/inode-table.test.ts` pass in `packages/core` +- Confirmed the full `packages/core` suite is still blocked by the unrelated PTY stress failure in `test/kernel/resource-exhaustion.test.ts` (`single large write (1MB+) — immediate EAGAIN, no partial buffering`, assertion at line 270) +- Checked recent branch CI history with `gh run list`; recent PR runs on `ralph/kernel-consolidation` were already failing before this story was ready to commit +- Files changed +- `scripts/ralph/progress.txt` +- **Learnings for future iterations:** +- Patterns discovered +- When a full-package gate is already red, record both the focused story checks and the first failing broad-suite test so the next iteration can separate story regressions from branch-wide blockers quickly +- Gotchas encountered +- `US-048` appears implementation-complete locally, but it should not be committed while `packages/core` is still red on the unrelated PTY resource-exhaustion test +- Useful context +- Current green checks: `pnpm tsc --noEmit` and `pnpm vitest run test/kernel/inode-table.test.ts` from `packages/core`; current blocking check: `pnpm vitest run` +--- + +## 2026-03-24 21:55 PDT - US-048 +- What was implemented +- Completed the `US-048` inode/VFS integration by wiring `kernel.inodeTable` into `KernelImpl` and `InMemoryFileSystem`, tracking stable inode IDs through file creation, stat, hard links, unlink, and last-FD cleanup +- Updated kernel FD lifecycle paths to keep inode-backed access alive after unlink via `FileDescription.inode`, including read/write, pread/pwrite, seek/stat, dup2 replacement, inherited FD overrides, and whole-process teardown +- Added inode integration coverage for real `ino`/`nlink`, deferred unlink readability, last-close cleanup, and `pwrite` on unlinked open files +- Unblocked package quality gates with a type-only isolate-runtime globals declaration fix and a PTY raw-mode bulk-write fix so oversized writes with `icrnl` enabled fail atomically with `EAGAIN` +- Quality checks run +- `pnpm --dir packages/core run check-types` ✅ +- `pnpm --dir packages/core test` ✅ +- Files changed +- `.agent/contracts/kernel.md` +- `AGENTS.md` +- `packages/core/isolate-runtime/src/common/runtime-globals.d.ts` +- `packages/core/src/kernel/kernel.ts` +- `packages/core/src/kernel/pty.ts` +- `packages/core/src/kernel/types.ts` +- `packages/core/src/shared/in-memory-fs.ts` +- `packages/core/test/kernel/inode-table.test.ts` +- `scripts/ralph/prd.json` +- `scripts/ralph/progress.txt` +- **Learnings for future iterations:** +- Patterns discovered +- `KernelImpl` needs access to the raw `InMemoryFileSystem` alongside the wrapped VFS so open FDs can keep reading, writing, and stat'ing by inode after pathname removal +- File-description cleanup is broader than `fdClose()`; `dup2()` replacement, stdio overrides during spawn, and process-table teardown all need inode refcount release when a shared description reaches `refCount === 0` +- `InMemoryFileSystem.reindexInodes()` must preserve shared inode identity across hard links when rebinding an existing filesystem to the kernel-owned inode table +- Gotchas encountered +- The package `check-types` gate also covers `isolate-runtime`, so missing runtime-global declarations can block kernel stories even when `packages/core/src` itself typechecks +- PTY raw mode still respects `icrnl`; bulk-write fast paths must keep translation and buffer-limit enforcement atomic to avoid partial buffering on `EAGAIN` +- Useful context +- Full `packages/core` now passes again, including the previously failing `test/kernel/resource-exhaustion.test.ts` +--- + +## 2026-03-24 22:02 PDT - US-049 +- What was implemented +- Added synthetic `.` and `..` entries to `InMemoryFileSystem` directory listings, with optional inode metadata on `VirtualDirEntry` so self/parent entries can carry the correct directory identity +- Added focused inode/VFS tests for `/tmp` listings, self/parent inode numbers, and root `..` behavior +- Filtered those POSIX-only entries back out in the Node bridge `fsReadDir` handler so sandbox `fs.readdir()` keeps Node-compatible output +- Added a Node bridge regression test covering the filter +- Updated the kernel contract for the in-memory VFS directory-listing rule +- Files changed +- `.agent/contracts/kernel.md` +- `packages/core/src/kernel/vfs.ts` +- `packages/core/src/shared/in-memory-fs.ts` +- `packages/core/test/kernel/inode-table.test.ts` +- `packages/nodejs/src/bridge-handlers.ts` +- `packages/nodejs/test/kernel-resource-bridge.test.ts` +- `scripts/ralph/prd.json` +- `scripts/ralph/progress.txt` +- Quality checks: `pnpm tsc --noEmit -p packages/core/tsconfig.json` passed; `pnpm vitest run packages/core/test/kernel/inode-table.test.ts` passed; `pnpm tsc --noEmit -p packages/nodejs/tsconfig.json` passed; `pnpm vitest run packages/nodejs/test/kernel-resource-bridge.test.ts` passed; extra integration check `pnpm vitest run packages/secure-exec/tests/kernel/vfs-consistency.test.ts` failed in pre-existing cross-runtime VFS coverage (`expected '' to contain 'hello'` in `kernel write visible to Node`) +- **Learnings for future iterations:** +- Patterns discovered +- `VirtualDirEntry` can grow optional metadata like `ino` without disturbing existing bridge consumers, as long as Node-facing code still only depends on `name` and `isDirectory` +- POSIX-style directory enumeration and Node `fs.readdir()` have different expectations for `.` / `..`; normalize that difference at the Node bridge boundary, not in the shared VFS +- Gotchas encountered +- Adding `.` / `..` at the VFS layer would leak into sandbox Node `fs.readdir()` unless `buildFsBridgeHandlers()` filters them before serializing directory entries +- Useful context +- Story-local green checks are the focused `packages/core` and `packages/nodejs` typecheck/test commands above; `packages/secure-exec/tests/kernel/vfs-consistency.test.ts` is still failing outside this change path and needs separate debugging +--- + +## 2026-03-24 22:28 PDT - US-052 +- What was implemented +- Added `writeWaiters`-backed blocking pipe writes in `PipeManager`, with bounded partial-progress writes, `O_NONBLOCK` handling, and wakeups on buffer drain and endpoint close +- Added focused pipe tests for full-buffer blocking, non-blocking `EAGAIN`, partial-write continuation, and blocked-writer `EPIPE` on read-end close +- Updated the kernel contract for blocking pipe write semantics and added the missing kernel `O_NONBLOCK` flag constant used by pipe descriptions +- Files changed +- `AGENTS.md` +- `.agent/contracts/kernel.md` +- `packages/core/src/kernel/pipe-manager.ts` +- `packages/core/src/kernel/types.ts` +- `packages/core/test/kernel/pipe-manager.test.ts` +- `packages/core/test/kernel/resource-exhaustion.test.ts` +- `scripts/ralph/prd.json` +- `scripts/ralph/progress.txt` +- **Learnings for future iterations:** +- Patterns discovered +- Bounded blocking writes should preserve partial progress: fill the remaining buffer capacity first, then wait only for the unwritten tail +- Pipe producer waits need wakeups from both successful reads and close/error paths, or blocked writers can hang forever after the consumer disappears +- Gotchas encountered +- `KernelInterface.fdWrite()` already allows `number | Promise`, so pipe writes can become async without widening the kernel interface +- Useful context +- Focused green checks for this story were `pnpm vitest run packages/core/test/kernel/pipe-manager.test.ts packages/core/test/kernel/resource-exhaustion.test.ts` and `pnpm tsc --noEmit` in `packages/core` +--- + +## 2026-03-24 22:42 PDT - US-053 +- What was implemented +- Added pipe poll wait queues in `PipeManager` plus a kernel-only `fdPollWait` helper so `poll()` can sleep on pipe state changes instead of spinning or timing out spuriously +- Refactored WasmVM `netPoll` to re-check all FDs in a loop, using finite timeout budgets for bounded polls and repeated `RPC_WAIT_TIMEOUT_MS` chunks for `timeout=-1` +- Updated the WasmVM worker RPC path so `netPoll` with `timeout < 0` keeps waiting across the worker's 30s guard timeout instead of returning `EIO` +- Added a pipe-backed WasmVM regression test that blocks on `poll(-1)`, writes to the pipe asynchronously, and verifies `POLLIN` wakes the poller +- Files changed +- `packages/core/src/kernel/kernel.ts` +- `packages/core/src/kernel/pipe-manager.ts` +- `packages/wasmvm/src/driver.ts` +- `packages/wasmvm/src/kernel-worker.ts` +- `packages/wasmvm/test/net-socket.test.ts` +- `scripts/ralph/prd.json` +- `scripts/ralph/progress.txt` +- Quality checks: `pnpm turbo run build --filter=@secure-exec/core --filter=@secure-exec/wasmvm` passed; `pnpm tsc --noEmit -p packages/core/tsconfig.json` passed; `pnpm tsc --noEmit -p packages/wasmvm/tsconfig.json` passed; `pnpm vitest run packages/wasmvm/test/net-socket.test.ts` passed +- **Learnings for future iterations:** +- Patterns discovered +- Cross-package WasmVM tests that import `@secure-exec/core` need the package rebuilt first or they will run stale `dist` code and miss new kernel behavior +- Pipe-backed `poll()` support works best as a generic state-change queue: wake it on writes, drains, and closes, then let the caller re-run `fdPoll()` to compute exact readiness bits +- Gotchas encountered +- Fixing `poll(-1)` only in the main-thread driver is insufficient because the worker RPC layer has its own 30s `Atomics.wait()` guard; indefinite polls need both sides to cooperate +- Useful context +- The new regression coverage lives in `packages/wasmvm/test/net-socket.test.ts` and exercises the private `_handleSyscall('netPoll')` path with a mock kernel pipe, which is enough to validate the wait/wake integration without running a full WASM program +--- + +## 2026-03-24 23:01 PDT - US-054 +- What was implemented +- Added a read-only proc pseudo-filesystem in `packages/core/src/kernel/proc-layer.ts` and mounted it during kernel init so `/proc//{fd,cwd,exe,environ}` is generated from live `ProcessTable` and `FDTableManager` state +- Added shared `/proc/self` resolution helpers and wired them into the Node kernel runtime VFS and WasmVM VFS RPC path so sandboxed processes see their own `/proc/self/*` +- Added kernel integration coverage for `/proc/self/fd` listings, `/proc/self/fd/0` readlink, `/proc/self/cwd` reads, and `/proc//environ`, then updated the kernel contract for procfs behavior +- Files changed +- `.agent/contracts/kernel.md` +- `packages/core/src/index.ts` +- `packages/core/src/kernel/index.ts` +- `packages/core/src/kernel/kernel.ts` +- `packages/core/src/kernel/proc-layer.ts` +- `packages/core/test/kernel/kernel-integration.test.ts` +- `packages/nodejs/src/kernel-runtime.ts` +- `packages/wasmvm/src/driver.ts` +- **Learnings for future iterations:** +- Patterns discovered +- The shared kernel VFS cannot infer a “current process”, so pseudo-filesystems with self-references need a split design: dynamic `/proc/` entries in core and thin runtime-side `/proc/self` rewriting where PID context exists +- Gotchas encountered +- Cross-package `@secure-exec/core` imports in `@secure-exec/nodejs` and `@secure-exec/wasmvm` typechecks will read stale exports until `pnpm turbo run build --filter=@secure-exec/core` refreshes the core package output +- Useful context +- Focused green checks for this story were `pnpm turbo run build --filter=@secure-exec/core --filter=@secure-exec/nodejs --filter=@secure-exec/wasmvm`, `pnpm tsc --noEmit -p packages/core/tsconfig.json`, `pnpm tsc --noEmit -p packages/nodejs/tsconfig.json`, `pnpm tsc --noEmit -p packages/wasmvm/tsconfig.json`, and `pnpm vitest run packages/core/test/kernel/kernel-integration.test.ts -t "/proc pseudo-filesystem"` +--- + +## 2026-03-24 23:05 PDT - US-055 +- Implemented `SA_RESETHAND` support in the kernel signal types and exports, and reset one-shot handlers to default disposition after their first delivery +- Updated `ProcessTable` signal dispatch so `SA_RESETHAND | SA_RESTART` both work, with the reset happening before pending signals are re-delivered +- Added kernel signal tests covering one-shot handler reset, second-delivery default action, and `SA_RESETHAND | SA_RESTART` restart behavior +- Files changed: `.agent/contracts/kernel.md`, `packages/core/src/kernel/index.ts`, `packages/core/src/kernel/process-table.ts`, `packages/core/src/kernel/types.ts`, `packages/core/test/kernel/signal-handlers.test.ts`, `scripts/ralph/prd.json`, `scripts/ralph/progress.txt` +- **Learnings for future iterations:** + - One-shot signal reset ordering matters: update the handler disposition before `deliverPendingSignals()` so a same-signal pending delivery does not invoke the old callback twice + - `ProcessTable.dispatchSignal()` records delivery flags before running the user handler, so combined flags like `SA_RESETHAND | SA_RESTART` can affect both the interrupted syscall and the post-handler disposition reset + - Kernel signal behavior is contract-backed in `.agent/contracts/kernel.md`; signal semantic changes should update that contract alongside the code +--- + +## 2026-03-24 23:26 PDT - US-056 +- What was implemented +- Finished the remaining Node.js ESM parity gap by propagating async entrypoint promise rejections out of the native V8 runtime, fixing dynamic import missing-module/syntax/evaluation failures to produce non-zero exec results, and making dynamic import resolution use `"import"` conditions without breaking `require()` condition routing +- Regenerated the isolate-runtime bundle, updated the Node runtime contract and compatibility/friction docs to record the corrected ESM behavior, and marked the story complete in the PRD +- Files changed +- `.agent/contracts/node-runtime.md` +- `docs-internal/friction.md` +- `docs/nodejs-compatibility.mdx` +- `native/v8-runtime/src/execution.rs` +- `native/v8-runtime/src/isolate.rs` +- `native/v8-runtime/src/snapshot.rs` +- `packages/core/isolate-runtime/src/inject/setup-dynamic-import.ts` +- `packages/core/src/generated/isolate-runtime.ts` +- `packages/nodejs/src/bridge-handlers.ts` +- `scripts/ralph/prd.json` +- `scripts/ralph/progress.txt` +- **Learnings for future iterations:** + - Patterns discovered + - Native V8 runtime package tests use the release binary when it exists, so native runtime changes need a release rebuild or the focused Vitest slice will keep exercising stale host code + - Isolate-runtime source changes only take effect in package tests after regenerating `packages/core/src/generated/isolate-runtime.ts` + - Gotchas encountered + - Arrow-function bridge handlers do not provide a safe `arguments` object for extra dispatch parameters; accept optional bridge args explicitly when resolution mode needs to cross the boundary + - Useful context + - Focused validation for this story passed with `cargo test execution::tests::v8_consolidated_tests -- --nocapture`, `pnpm turbo run build --filter=@secure-exec/core --filter=@secure-exec/nodejs --filter=secure-exec`, `pnpm run check-types` in `packages/core`, `packages/nodejs`, and `packages/secure-exec`, plus `pnpm exec vitest run packages/secure-exec/tests/runtime-driver/node/index.test.ts -t "dynamic import|built-in ESM imports|package exports|type module"` +--- diff --git a/scripts/ralph/prd.json b/scripts/ralph/prd.json index f027053f..4111221a 100644 --- a/scripts/ralph/prd.json +++ b/scripts/ralph/prd.json @@ -1,1007 +1,946 @@ { "project": "SecureExec", - "branchName": "ralph/kernel-consolidation", - "description": "Kernel Consolidation - Move networking, resource management, and runtime-specific subsystems into the shared kernel so Node.js and WasmVM share the same socket table, port registry, and network stack.", + "branchName": "ralph/nodejs-conformance-fixes", + "description": "Node.js Conformance Test Fixes \u2014 systematically fix bridge/polyfill gaps to maximize pass rate across crypto, http, net, tls, https, dgram, and http2 modules.", "userStories": [ { "id": "US-001", - "title": "Implement WaitHandle and WaitQueue primitives (K-10)", - "description": "As a developer, I need unified blocking I/O primitives so that all kernel subsystems (pipes, sockets, flock, poll) share the same wait/wake mechanism.", + "title": "Fix crypto KeyObject metadata (asymmetricKeyType, asymmetricKeyDetails)", + "description": "As a developer, I need KeyObject instances returned by generateKeyPair to have correct metadata properties.", "acceptanceCriteria": [ - "Add packages/core/src/kernel/wait.ts with WaitHandle and WaitQueue classes", - "WaitHandle.wait(timeoutMs?) returns a Promise that resolves when woken or times out", - "WaitHandle.wake() resolves exactly one waiter", - "WaitQueue.wakeAll() resolves all enqueued waiters", - "WaitQueue.wakeOne() resolves exactly one waiter (FIFO order)", - "Add packages/core/test/kernel/wait-queue.test.ts with tests: wake resolves wait, timeout fires, wakeOne wakes one, wakeAll wakes all", + "KeyObject.asymmetricKeyType returns correct type string (rsa, ec, ed25519, ed448, rsa-pss, dsa, dh, x25519, x448)", + "KeyObject.asymmetricKeyDetails returns correct details object (modulusLength, publicExponent for RSA; namedCurve for EC)", + "KeyObject.toCryptoKey() converts to WebCrypto CryptoKey", + "KeyObject.export() with JWK format returns parsed object, not JSON string", + "Run conformance: pnpm vitest run packages/secure-exec/tests/node-conformance/runner.test.ts -t 'node/crypto' 2>&1 | grep -c 'now passes'", + "Remove expectations.json entries for tests that now pass", "Tests pass", "Typecheck passes" ], "priority": 1, "passes": true, - "notes": "Foundation for all blocking I/O. See spec section 2.4. Keep it simple — just Promise-based wait/wake, no Atomics yet." + "notes": "~11 tests fail on missing asymmetricKeyType/asymmetricKeyDetails. Bridge handler for generateKeyPair needs to populate these from the host crypto result. Check bridge-handlers.ts for crypto dispatch." }, { "id": "US-002", - "title": "Implement InodeTable with refcounting and deferred unlink (K-11)", - "description": "As a developer, I need an inode layer so the VFS supports hard links, deferred deletion, and correct stat() metadata.", + "title": "Fix crypto key generation (generateKeyPair for ed25519/ed448, generateKey symmetric)", + "description": "As a developer, I need key generation to work for all key types including EdDSA curves and symmetric keys.", "acceptanceCriteria": [ - "Add packages/core/src/kernel/inode-table.ts with Inode and InodeTable classes", - "InodeTable.allocate(mode, uid, gid) returns Inode with unique ino number", - "incrementLinks/decrementLinks track hard link count (nlink)", - "incrementOpenRefs/decrementOpenRefs track open FD count", - "shouldDelete(ino) returns true when nlink=0 AND openRefCount=0", - "Deferred deletion: unlink with open FDs keeps data until last FD closes", - "Add packages/core/test/kernel/inode-table.test.ts with tests: allocate unique ino, hard link increments nlink, unlink-with-open-FD persists, close-last-FD deletes", + "crypto.generateKeyPair('ed25519', ...) invokes callback with valid key pair", + "crypto.generateKeyPair('ed448', ...) invokes callback with valid key pair", + "crypto.generateKeyPair with encrypted PEM/DER output returns valid encrypted key", + "crypto.generateKey('aes', { length: 256 }, ...) generates symmetric key", + "crypto.generatePrime() returns valid prime as Buffer", + "Run conformance: pnpm vitest run packages/secure-exec/tests/node-conformance/runner.test.ts -t 'node/crypto' \u2014 check newly passing tests", + "Remove expectations.json entries for tests that now pass", "Tests pass", "Typecheck passes" ], "priority": 2, "passes": true, - "notes": "See spec section 2.5. Not wired to VFS yet — standalone table only." + "notes": "~13 tests fail on key generation. generateKeyPair ed25519/ed448 callback not invoked. JWK output as JSON string not parsed object. DSA parameter validation too strict." }, { "id": "US-003", - "title": "Implement HostNetworkAdapter interface (Part 5)", - "description": "As a developer, I need a host adapter interface so the kernel can delegate external I/O to the host without knowing the host implementation.", + "title": "Fix crypto DH/ECDH key agreement", + "description": "As a developer, I need Diffie-Hellman and ECDH key agreement to produce correct shared secrets.", "acceptanceCriteria": [ - "Add HostNetworkAdapter, HostSocket, HostListener, HostUdpSocket interfaces to packages/core/src/types.ts or packages/core/src/kernel/host-adapter.ts", - "HostNetworkAdapter has: tcpConnect, tcpListen, udpBind, udpSend, dnsLookup methods", - "HostSocket has: write, read (null=EOF), close, setOption, shutdown methods", - "HostListener has: accept, close, port (readonly) members", - "HostUdpSocket has: recv, close methods", + "crypto.createDiffieHellman() returns working DH object", + "DiffieHellman.generateKeys() returns Buffer (not undefined)", + "DiffieHellman.computeSecret() returns correct shared secret Buffer", + "crypto.createECDH('secp256k1') and other curves work", + "ECDH.generateKeys() and computeSecret() produce correct results", + "crypto.diffieHellman({ privateKey, publicKey }) stateless function works", + "Buffer encoding parameter ('hex', 'base64') works for all DH methods", + "Run conformance for crypto module \u2014 check newly passing tests", + "Remove expectations.json entries for tests that now pass", + "Tests pass", "Typecheck passes" ], "priority": 3, "passes": true, - "notes": "See spec Part 5. Interfaces only — no implementations yet. Node.js driver will implement these later." + "notes": "9 tests fail. DH.generateKeys() returns undefined, computeSecret() returns undefined, buffer encoding not supported, stateless diffieHellman() not implemented." }, { "id": "US-004", - "title": "Implement KernelSocket and SocketTable core (K-1)", - "description": "As a developer, I need a virtual socket table in the kernel so sockets can be created, tracked, and closed with proper state transitions.", + "title": "Implement crypto.subtle WebCrypto bridge (basic operations)", + "description": "As a developer, I need the WebCrypto subtle API to be bridged to the host for core operations.", "acceptanceCriteria": [ - "Add packages/core/src/kernel/socket-table.ts with KernelSocket struct and SocketTable class", - "KernelSocket has: id, domain (AF_INET/AF_INET6/AF_UNIX), type (SOCK_STREAM/SOCK_DGRAM), state, nonBlocking, localAddr, remoteAddr, options Map, pid, readBuffer, readWaiters (WaitQueue), backlog, acceptWaiters (WaitQueue)", - "SocketTable.create(domain, type, protocol, pid) returns socket ID, tracks in sockets Map", - "SocketTable.close(socketId) removes socket and frees resources", - "SocketTable.poll(socketId) returns { readable, writable, hangup }", - "Per-process isolation: process A cannot close process B's socket", - "EMFILE error when creating too many sockets (configurable limit)", - "Add packages/core/test/kernel/socket-table.test.ts with tests: create socket, state transitions, close frees resources, EMFILE limit, per-process isolation", + "crypto.subtle is available and is a SubtleCrypto instance", + "crypto.subtle.digest('SHA-256', data) returns ArrayBuffer", + "crypto.subtle.importKey() supports raw, pkcs8, spki, jwk formats", + "crypto.subtle.exportKey() supports raw, pkcs8, spki, jwk formats", + "crypto.subtle.encrypt() and decrypt() work for AES-GCM, AES-CBC, RSA-OAEP", + "crypto.getRandomValues() works for TypedArrays", + "crypto.randomUUID() returns valid UUID string", + "Run conformance for crypto module \u2014 check newly passing webcrypto tests", + "Remove expectations.json entries for tests that now pass", "Tests pass", "Typecheck passes" ], "priority": 4, "passes": true, - "notes": "See spec section 1.1. Does not include bind/listen/connect yet — just create/close/poll lifecycle." + "notes": "~15 webcrypto tests. Bridge crypto.subtle operations through host dispatch. Check if globalThis.crypto is already exposed or needs wiring." }, { "id": "US-005", - "title": "Add bind, listen, accept to SocketTable (K-1, K-3)", - "description": "As a developer, I need server socket operations so the kernel can manage port listeners and accept connections.", + "title": "Implement crypto.subtle sign/verify and key derivation", + "description": "As a developer, I need WebCrypto sign/verify and key derivation operations.", "acceptanceCriteria": [ - "SocketTable.bind(socketId, addr) sets localAddr, registers in listeners Map, transitions to 'bound'", - "SocketTable.listen(socketId, backlog) transitions to 'listening'", - "SocketTable.accept(socketId) returns pending connection or null (EAGAIN)", - "Bind to already-used port returns EADDRINUSE (unless SO_REUSEADDR is set)", - "Close listener frees the port for reuse", - "Wildcard address matching: listener on '0.0.0.0:8080' matches connect to '127.0.0.1:8080'", - "Add tests to socket-table.test.ts: bind/listen/accept lifecycle, EADDRINUSE, port reuse after close, wildcard matching", + "crypto.subtle.sign() works for RSASSA-PKCS1-v1_5, RSA-PSS, ECDSA, HMAC, Ed25519", + "crypto.subtle.verify() works for all corresponding algorithms", + "crypto.subtle.generateKey() generates key pairs and symmetric keys", + "crypto.subtle.deriveKey() and deriveBits() work for HKDF, PBKDF2, ECDH", + "crypto.subtle.wrapKey() and unwrapKey() work for AES-KW", + "Run conformance for crypto module \u2014 check newly passing webcrypto tests", + "Remove expectations.json entries for tests that now pass", "Tests pass", "Typecheck passes" ], "priority": 5, "passes": true, - "notes": "See spec sections 1.1 and 1.3. Builds on US-004." + "notes": "~15 webcrypto tests for sign/verify/derive/wrap. These bridge through to host WebCrypto API." }, { "id": "US-006", - "title": "Implement loopback routing for TCP (K-2)", - "description": "As a developer, I need in-kernel loopback routing so that connect() to a kernel-owned port creates paired sockets without real TCP.", + "title": "Fix Cipher/Decipher streaming and crypto error codes", + "description": "As a developer, I need Cipher/Decipher to work as Transform streams and crypto functions to throw proper ERR_* codes.", "acceptanceCriteria": [ - "SocketTable.connect(socketId, addr) checks if addr matches a kernel listener", - "If loopback: creates socketpair — client socket returned, server socket queued in listener backlog", - "Data written to client side is buffered in server's readBuffer (and vice versa) like pipes", - "accept() returns the server-side socket from the backlog", - "send(socketId, data, flags) writes to peer's readBuffer and wakes readWaiters", - "recv(socketId, maxBytes, flags) reads from own readBuffer, returns null if empty and non-blocking", - "Close client → server gets EOF (recv returns null). Close server → client gets EOF", - "Add packages/core/test/kernel/loopback.test.ts: connect to listener, exchange data bidirectionally, close propagates EOF, loopback never calls host adapter", + "crypto.createCipheriv() returns object that extends Transform stream (supports .pipe(), .on('data'))", + "crypto.createDecipheriv() returns object that extends Transform stream", + "crypto.createHash() returns object that extends Transform stream (supports .pipe())", + "CCM cipher mode with authTagLength parameter works", + "crypto.pbkdf2() throws ERR_INVALID_ARG_TYPE for invalid arguments (not plain TypeError)", + "crypto.publicEncrypt() returns Buffer (not undefined)", + "crypto.privateDecrypt() returns Buffer (not undefined)", + "Run conformance for crypto module \u2014 check newly passing tests", + "Remove expectations.json entries for tests that now pass", "Tests pass", "Typecheck passes" ], "priority": 6, "passes": true, - "notes": "See spec section 1.2. If addr does not match a kernel listener, connect() should throw/error for now (external routing added later)." + "notes": "4 Cipher/Decipher streaming tests + ~18 error code tests. Hash/Cipher objects need to extend Transform. Error validation needs ERR_* codes instead of plain TypeError." }, { "id": "US-007", - "title": "Add shutdown() and half-close support (K-1)", - "description": "As a developer, I need TCP half-close so that shutdown(SHUT_WR) sends EOF to the peer without closing the socket.", + "title": "Update crypto expectations and regenerate conformance report", + "description": "As a developer, I need the crypto conformance expectations cleaned up and the report regenerated.", "acceptanceCriteria": [ - "SocketTable.shutdown(socketId, 'read' | 'write' | 'both') updates socket state", - "shutdown('write') transitions to 'write-closed' — peer recv() gets EOF, but local recv() still works", - "shutdown('read') transitions to 'read-closed' — local recv() returns EOF immediately", - "shutdown('both') transitions to 'closed'", - "send() on write-closed socket returns EPIPE", - "Add packages/core/test/kernel/socket-shutdown.test.ts: half-close write, half-close read, full shutdown, EPIPE on write-closed", + "Run full crypto conformance suite: pnpm vitest run packages/secure-exec/tests/node-conformance/runner.test.ts -t 'node/crypto'", + "All genuinely passing tests have their expectations removed from expectations.json", + "Remaining failures have specific, accurate reasons (not vague categories)", + "Vacuous self-skip tests (common.hasCrypto=false) are marked vacuous-skip with reason", + "Tests requiring --expose-internals are marked requires-v8-flags", + "Regenerate report: pnpm tsx scripts/generate-node-conformance-report.ts", "Tests pass", "Typecheck passes" ], "priority": 7, "passes": true, - "notes": "See spec section 1.1 (read-closed/write-closed states) and shutdown semantics." + "notes": "Cleanup story. ~13 tests will remain as vacuous-skip, ~8 as requires-v8-flags, ~4 as unsupported-module. All fixable tests should be passing by now." }, { "id": "US-008", - "title": "Add socketpair() support (K-1, K-5)", - "description": "As a developer, I need socketpair() so that two connected sockets can be created atomically for IPC.", + "title": "Fix bridge bootstrap so NodeRuntime works outside vitest", + "description": "As a developer, I need NodeRuntime.run() and NodeRuntime.exec() to work from standalone Node.js scripts, not only inside vitest.", "acceptanceCriteria": [ - "SocketTable.socketpair(domain, type, protocol, pid) returns [socketId1, socketId2]", - "Both sockets are pre-connected — data written to one appears in the other's readBuffer", - "Close one side delivers EOF to the other", - "Works for AF_UNIX + SOCK_STREAM", - "Add tests: create pair, exchange data, close one side delivers EOF", + "Identify why the bridge IIFE (isolate-runtime.ts) fails to inject CJS globals (require, module, exports, process, console, Buffer) when NodeRuntime is instantiated from a standalone script via dist/ output", + "Fix the bootstrap so that `new NodeRuntime({ systemDriver: createNodeDriver(), runtimeDriverFactory: createNodeRuntimeDriverFactory() })` followed by `runtime.exec('console.log(1)')` works from a standalone `node --input-type=module -e '...'` script", + "Verify: `runtime.exec('console.log(\"hello\")')` completes with code 0 and the onStdio hook receives 'hello'", + "Verify: `runtime.exec('const fs = require(\"node:fs\"); console.log(typeof fs.readFileSync)')` completes with code 0 (require works)", + "Add a smoke test in packages/secure-exec/tests/ that imports from dist/ (not source) and runs a basic exec \u2014 this prevents future regressions", + "Verify kernel path: kernel.spawn('node', ['-e', 'console.log(1)'], { onStdout }) captures '1' \u2014 same bridge bootstrap, just through kernel dispatch", "Tests pass", "Typecheck passes" ], "priority": 8, "passes": true, - "notes": "See spec section 1.1 (socketpair) and 1.5 (Unix domain sockets). Reuses loopback data path from US-006." + "notes": "RELEASE BLOCKER. The bridge IIFE that injects CJS globals crashes during bootstrap when run outside vitest. Even `1+1` fails because the bootstrap itself references require. The vitest transform pipeline does something (likely TypeScript path resolution or module format handling) that makes it work. Check packages/core/src/generated/isolate-runtime.ts and how it's loaded by the runtime driver." }, { "id": "US-009", - "title": "Add socket options support (K-6)", - "description": "As a developer, I need setsockopt/getsockopt so kernel sockets can be configured with SO_REUSEADDR, TCP_NODELAY, etc.", + "title": "Fix module.exports capture in NodeRuntime.run()", + "description": "As a developer, I need runtime.run() to return the module's exports so I can get typed return values from sandbox code.", "acceptanceCriteria": [ - "SocketTable.setsockopt(socketId, level, optname, optval) stores option in socket's options Map", - "SocketTable.getsockopt(socketId, level, optname) retrieves option value", - "SO_REUSEADDR is enforced by bind() (already in US-005 — verify integration)", - "SO_RCVBUF / SO_SNDBUF set kernel buffer size limits", - "Add to socket-table.test.ts: set SO_REUSEADDR allows port reuse, set SO_RCVBUF enforces buffer limit", + "runtime.run('module.exports = { message: \"hello\" }') returns result with exports.message === 'hello'", + "runtime.run('module.exports = 42') returns result with exports === 42", + "runtime.run('export const answer = 42', '/entry.mjs') returns result with exports.answer === 42 (ESM mode)", + "runtime.run('module.exports = { a: 1, b: [2,3] }') preserves nested structures in exports", + "Verify from standalone script (not vitest): node --input-type=module -e 'import { NodeRuntime, createNodeDriver, createNodeRuntimeDriverFactory } from \"./packages/secure-exec/dist/index.js\"; ...'", "Tests pass", "Typecheck passes" ], "priority": 9, "passes": true, - "notes": "See spec section 1.6. For loopback sockets most options are kernel-enforced. For external sockets, options are forwarded to host adapter (later)." + "notes": "runtime.run() completes with exit code 0 but result.exports is always undefined. The export extraction in runtimeDriver.run() isn't capturing module.exports. May be related to US-008 bridge bootstrap issue \u2014 if require is undefined, module.exports assignment can't work either." }, { "id": "US-010", - "title": "Add socket flags: MSG_PEEK, MSG_DONTWAIT, MSG_NOSIGNAL (K-1)", - "description": "As a developer, I need socket send/recv flags so code can peek at data or do non-blocking one-off operations.", + "title": "Fix HTTP agent keepalive and connection management", + "description": "As a developer, I need HTTP agent connection pooling and keepalive to match Node.js behavior.", "acceptanceCriteria": [ - "recv() with MSG_PEEK reads data without consuming it from readBuffer", - "recv() with MSG_DONTWAIT returns EAGAIN if no data (even on blocking socket)", - "send() with MSG_NOSIGNAL returns EPIPE instead of raising SIGPIPE on broken connection", - "Add packages/core/test/kernel/socket-flags.test.ts: MSG_PEEK leaves data in buffer, MSG_DONTWAIT returns EAGAIN, MSG_NOSIGNAL suppresses SIGPIPE", + "http.Agent with keepAlive:true reuses connections for subsequent requests to same host:port", + "Agent.maxSockets limits concurrent connections per origin", + "Agent.maxTotalSockets limits total concurrent connections across all origins", + "Agent.maxFreeSockets limits idle connections in pool", + "Agent keepalive timeout closes idle connections after msecs", + "Agent.getName() returns correct key for connection pooling", + "Run conformance: pnpm vitest run packages/secure-exec/tests/node-conformance/runner.test.ts -t 'node/http' \u2014 check newly passing agent tests", + "Remove expectations.json entries for tests that now pass", "Tests pass", "Typecheck passes" ], "priority": 10, - "passes": true, - "notes": "See spec section 1.1 flags comments." + "passes": false, + "notes": "~8 agent tests fail: test-http-agent-keepalive.js, test-http-agent-maxsockets.js, test-http-agent-maxtotalsockets.js, test-http-agent-destroyed-socket.js, etc." }, { "id": "US-011", - "title": "Implement network permissions in kernel (K-7)", - "description": "As a developer, I need kernel-level network permission checks so all socket operations go through deny-by-default policy.", + "title": "Fix HTTP server edge cases (CONNECT, upgrade, 1xx responses)", + "description": "As a developer, I need HTTP server to handle CONNECT method, WebSocket upgrades, and informational responses.", "acceptanceCriteria": [ - "Add Kernel.checkNetworkPermission(op, addr) method", - "connect() to external addresses checks permission — EACCES if denied", - "listen() checks permission — EACCES if denied", - "send() to external addresses checks permission — EACCES if denied", - "Loopback connections (to kernel-owned ports) are always allowed regardless of policy", - "Add packages/core/test/kernel/network-permissions.test.ts: deny-by-default blocks external, allow-list permits specific hosts, loopback always allowed", + "server.on('connect') fires for HTTP CONNECT requests with correct (req, socket, head) args", + "server.on('upgrade') fires for WebSocket upgrade requests", + "response.writeHead(100) sends HTTP 100 Continue informational response", + "response.writeHead(103) sends HTTP 103 Early Hints", + "response.writeProcessing() sends 102 Processing", + "Run conformance for http module \u2014 check newly passing tests", + "Remove expectations.json entries for tests that now pass", "Tests pass", "Typecheck passes" ], "priority": 11, - "passes": true, - "notes": "See spec section 1.7. Replaces scattered SSRF validation in driver.ts." + "passes": false, + "notes": "test-http-after-connect.js, test-http-information-headers.js, test-http-upgrade-*.js. CONNECT and upgrade need socket tunneling support." }, { "id": "US-012", - "title": "Add external connection routing via host adapter", - "description": "As a developer, I need connect() to external addresses to route through the host adapter so the kernel can reach the real network.", + "title": "Fix HTTP client timeout, abort, and error handling", + "description": "As a developer, I need HTTP client requests to handle timeouts, aborts, and errors correctly.", "acceptanceCriteria": [ - "SocketTable.connect() for non-loopback addresses calls hostAdapter.tcpConnect(host, port)", - "Data relay: send() on kernel socket writes to HostSocket, HostSocket.read() feeds kernel readBuffer", - "close() on kernel socket calls HostSocket.close()", - "Permission check via kernel.checkNetworkPermission() before host adapter call", - "Add a mock HostNetworkAdapter for testing", - "Add tests: connect to external via mock adapter, data flows through, close propagates", + "request.setTimeout(ms) fires 'timeout' event after ms milliseconds", + "request.abort() triggers 'abort' event and closes underlying socket", + "AbortController signal aborts in-flight requests", + "Socket errors propagate as 'error' events on the request", + "request.destroy() immediately terminates the request", + "Run conformance for http module \u2014 check newly passing tests", + "Remove expectations.json entries for tests that now pass", "Tests pass", "Typecheck passes" ], "priority": 12, - "passes": true, - "notes": "Wires the host adapter interface (US-003) to the socket table. Uses mock adapter in tests." + "passes": false, + "notes": "test-http-client-timeout-option.js, test-http-abort-client.js, test-http-client-abort-*.js. Timeout and abort lifecycle events." }, { "id": "US-013", - "title": "Add external server socket routing via host adapter", - "description": "As a developer, I need listen() to optionally create real TCP listeners via the host adapter for external-facing servers.", + "title": "Fix HTTP header validation and method handling", + "description": "As a developer, I need HTTP header validation and method listing to match Node.js behavior.", "acceptanceCriteria": [ - "When listen() is called with an external-facing flag, kernel calls hostAdapter.tcpListen(host, port)", - "HostListener.accept() feeds new kernel sockets into the listener's backlog", - "HostListener.port returns the actual bound port (for port 0 ephemeral ports)", - "close() on listener calls HostListener.close()", - "Add tests with mock adapter: external listen, accept incoming, exchange data, close", + "http.METHODS array contains all standard HTTP methods", + "Invalid header characters throw ERR_INVALID_HTTP_TOKEN", + "Invalid path characters throw ERR_UNESCAPED_CHARACTERS or equivalent", + "Header names are validated per RFC 7230", + "Duplicate headers are handled correctly (set-cookie arrays, comma-join others)", + "Run conformance for http module \u2014 check newly passing tests", + "Remove expectations.json entries for tests that now pass", "Tests pass", "Typecheck passes" ], "priority": 13, - "passes": true, - "notes": "Needed for http.createServer() to accept real TCP connections from outside the sandbox." + "passes": false, + "notes": "test-http-methods.js, test-http-client-invalid-path.js, test-http-invalidheaderfield2.js, test-http-invalid-path-chars.js." }, { "id": "US-014", - "title": "Implement UDP sockets in kernel (K-4)", - "description": "As a developer, I need SOCK_DGRAM support so the kernel handles UDP send/recv with message boundary preservation.", + "title": "Fix HTTP pipeline, chunked encoding, and transfer edge cases", + "description": "As a developer, I need HTTP pipelining and transfer encoding to work correctly.", "acceptanceCriteria": [ - "SocketTable.create() with SOCK_DGRAM type creates a datagram socket", - "sendTo(socketId, data, flags, destAddr) sends to specific address", - "recvFrom(socketId, maxBytes, flags) returns { data, srcAddr }", - "Loopback UDP: sendTo a kernel-bound UDP port delivers to that socket's readBuffer", - "Message boundaries preserved: two 100-byte sends produce two 100-byte recvs", - "Send to unbound port is silently dropped (UDP semantics)", - "External UDP routes through hostAdapter.udpBind/udpSend", - "Add packages/core/test/kernel/udp-socket.test.ts: loopback dgram, message boundaries, silent drop, external routing via mock", + "HTTP pipelining: multiple requests on same connection are handled sequentially", + "Chunked transfer encoding: server sends chunked responses correctly", + "Transfer-Encoding and Content-Length interaction matches Node.js behavior", + "response.write() with cork/uncork batches data correctly", + "Trailer headers sent after chunked body", + "Run conformance for http module \u2014 check newly passing tests", + "Remove expectations.json entries for tests that now pass", "Tests pass", "Typecheck passes" ], "priority": 14, - "passes": true, - "notes": "See spec section 1.4. Max datagram 65535 bytes, max queue depth 128." + "passes": false, + "notes": "test-http-pipeline-*.js, test-http-chunk-problem.js, test-http-transfer-encoding-*.js, test-http-response-cork.js." }, { "id": "US-015", - "title": "Implement Unix domain sockets in kernel (K-5)", - "description": "As a developer, I need AF_UNIX sockets so processes can communicate via VFS paths.", + "title": "Update HTTP expectations and regenerate conformance report", + "description": "As a developer, I need HTTP conformance expectations cleaned up after fixes.", "acceptanceCriteria": [ - "bind(socketId, { path: '/tmp/my.sock' }) creates a socket file in the VFS", - "connect(socketId, { path: '/tmp/my.sock' }) connects to the bound socket via kernel", - "Always in-kernel routing (no host adapter)", - "Support both SOCK_STREAM and SOCK_DGRAM modes", - "stat() on socket path returns socket file type", - "Bind to existing path returns EADDRINUSE", - "Remove socket file → new connections fail with ECONNREFUSED", - "Add packages/core/test/kernel/unix-socket.test.ts: bind/connect/exchange data, socket file in VFS, EADDRINUSE, ECONNREFUSED after unlink", + "Run full http conformance: pnpm vitest run packages/secure-exec/tests/node-conformance/runner.test.ts -t 'node/http'", + "All genuinely passing tests have expectations removed", + "Remaining failures have specific reasons", + "Tests requiring --expose-internals or process.execPath are properly categorized", + "Regenerate report: pnpm tsx scripts/generate-node-conformance-report.ts", "Tests pass", "Typecheck passes" ], "priority": 15, - "passes": true, - "notes": "See spec section 1.5. Requires VFS integration for socket file entries." + "passes": false, + "notes": "Cleanup story for HTTP module. ~11 tests need --expose-internals, ~4 need execPath \u2014 these stay as expected failures." }, { "id": "US-016", - "title": "Expose SocketTable on KernelImpl", - "description": "As a developer, I need the socket table accessible from KernelImpl so runtimes can call kernel.socketTable.*.", + "title": "Fix net.Socket methods (setKeepAlive, setNoDelay, ref/unref, address)", + "description": "As a developer, I need net.Socket to expose all standard methods.", "acceptanceCriteria": [ - "KernelImpl constructor creates a SocketTable instance", - "kernel.socketTable is publicly accessible", - "kernel.dispose() cleans up all sockets", - "Socket creation respects kernel process table (pid must exist)", - "Process exit cleans up all sockets owned by that process", - "Add integration test in existing kernel tests: create kernel, create socket, dispose kernel, verify cleanup", + "socket.setKeepAlive(enable, initialDelay) configures TCP keepalive", + "socket.setNoDelay(noDelay) disables/enables Nagle's algorithm", + "socket.ref() and socket.unref() control event loop reference counting", + "socket.address() returns { port, family, address } for connected socket", + "socket.localAddress and socket.localPort return correct values", + "socket.remoteAddress, socket.remotePort, socket.remoteFamily return correct values", + "Run conformance: pnpm vitest run packages/secure-exec/tests/node-conformance/runner.test.ts -t 'node/net' \u2014 check newly passing tests", + "Remove expectations.json entries for tests that now pass", "Tests pass", "Typecheck passes" ], "priority": 16, - "passes": true, - "notes": "Wires socket table into the existing kernel. After this, runtimes can start using kernel sockets." + "passes": false, + "notes": "net.Socket API surface gaps. setKeepAlive and setNoDelay should pass through to kernel socket options via setsockopt. ref/unref control event loop." }, { "id": "US-017", - "title": "Implement kernel TimerTable (N-5, N-8)", - "description": "As a developer, I need a kernel timer table so timer ownership is tracked per-process with budget enforcement.", + "title": "Fix net.Socket events (close, end, error, timeout, drain, connect)", + "description": "As a developer, I need net.Socket to emit all standard events in the correct order.", "acceptanceCriteria": [ - "Add packages/core/src/kernel/timer-table.ts with TimerTable class", - "createTimer(pid, delayMs, repeat, callback) returns timer ID and tracks ownership", - "clearTimer(timerId) cancels and removes timer", - "enforceLimit(pid, maxTimers) throws when budget exceeded", - "clearAllForProcess(pid) removes all timers for a process on exit", - "Timer in process A cannot be cleared by process B", - "Add packages/core/test/kernel/timer-table.test.ts: create/clear, budget enforcement, process cleanup, cross-process isolation", + "'connect' event fires when connection is established", + "'data' event fires with Buffer for each received chunk", + "'end' event fires when remote end sends FIN (half-close)", + "'close' event fires after socket is fully closed (after 'end')", + "'error' event fires for connection errors with proper Error object", + "'timeout' event fires after socket.setTimeout(ms) idle timeout", + "'drain' event fires when write buffer is flushed", + "Event ordering matches Node.js: connect \u2192 data \u2192 end \u2192 close", + "Run conformance for net module \u2014 check newly passing tests", + "Remove expectations.json entries for tests that now pass", "Tests pass", "Typecheck passes" ], "priority": 17, - "passes": true, - "notes": "See spec section 2.1. Host adapter provides actual setTimeout/setInterval scheduling." + "passes": false, + "notes": "Most net test failures are missing events. The kernel socket table handles data transport but the bridge NetSocket class needs proper event emission." }, { "id": "US-018", - "title": "Implement kernel handle table (N-7, N-9)", - "description": "As a developer, I need kernel-level active handle tracking so resource budgets are enforced per-process.", + "title": "Fix net.Server API (getConnections, maxConnections, allowHalfOpen, listen options)", + "description": "As a developer, I need net.Server to support all standard configuration and query methods.", "acceptanceCriteria": [ - "Extend ProcessEntry in kernel process table with activeHandles Map and handleLimit", - "registerHandle(pid, id, description) tracks a handle", - "unregisterHandle(pid, id) removes it", - "Registering beyond handleLimit throws error", - "Process exit cleans up all handles", - "Add tests to existing process table tests: register/unregister, limit enforcement, cleanup on exit", + "server.getConnections(callback) returns current connection count", + "server.maxConnections limits accepted connections", + "net.createServer({ allowHalfOpen: true }) keeps socket readable after remote FIN", + "server.listen({ port, host, backlog, exclusive }) supports all options", + "server.listen(handle) accepts existing socket handle", + "server.close() stops accepting new connections, existing connections finish", + "server.address() returns { port, family, address } after listening", + "Run conformance for net module \u2014 check newly passing tests", + "Remove expectations.json entries for tests that now pass", "Tests pass", "Typecheck passes" ], "priority": 18, - "passes": true, - "notes": "See spec section 2.2. Simple Map-based tracking on existing ProcessEntry." + "passes": false, + "notes": "net.Server configuration gaps. getConnections needs kernel process table query. allowHalfOpen needs shutdown(SHUT_WR) support." }, { "id": "US-019", - "title": "Implement kernel DNS cache (N-10)", - "description": "As a developer, I need a shared DNS cache so both runtimes avoid redundant lookups.", + "title": "Fix net.connect options and edge cases", + "description": "As a developer, I need net.connect to handle all connection options and edge cases.", "acceptanceCriteria": [ - "Add packages/core/src/kernel/dns-cache.ts with DnsCache class", - "lookup(hostname, rrtype) returns cached result or null", - "store(hostname, rrtype, result, ttl) caches with expiry", - "Expired entries return null on lookup", - "flush() clears all entries", - "Add packages/core/test/kernel/dns-cache.test.ts: cache hit, cache miss, TTL expiry, flush", + "net.connect({ port, host }) with explicit host works", + "net.connect({ path }) connects to Unix domain socket", + "net.createConnection() is alias for net.connect()", + "net.isIP(), net.isIPv4(), net.isIPv6() validation functions work", + "socket.destroy() during connection emits close without error", + "Multiple writes before connect queues data (cork behavior)", + "Run conformance for net module \u2014 check newly passing tests", + "Remove expectations.json entries for tests that now pass", "Tests pass", "Typecheck passes" ], "priority": 19, - "passes": true, - "notes": "See spec section 2.3. Runtimes call kernel DNS before host adapter." + "passes": false, + "notes": "net.connect option handling and edge cases. Unix path connects through kernel AF_UNIX sockets." }, { "id": "US-020", - "title": "Implement signal handler registry with sigaction semantics (K-8)", - "description": "As a developer, I need full POSIX signal handling so processes can register handlers with sa_mask and SA_RESTART.", + "title": "Update net expectations and regenerate conformance report", + "description": "As a developer, I need net conformance expectations cleaned up after fixes.", "acceptanceCriteria": [ - "Add SignalHandler and ProcessSignalState types in kernel", - "sigaction(pid, signal, handler, mask, flags) registers handler", - "Signal delivery: 'ignore' discards, 'default' applies kernel action, function invokes handler", - "SA_RESTART: interrupted blocking syscall restarts after handler returns", - "sigprocmask(pid, how, set): SIG_BLOCK/SIG_UNBLOCK/SIG_SETMASK modify blocked signals", - "Signals delivered while blocked are queued in pendingSignals", - "Standard signals (1-31) coalesce: max 1 pending per signal number", - "Add packages/core/test/kernel/signal-handlers.test.ts: register handler, SA_RESTART, sigprocmask block/unblock, coalescing", + "Run full net conformance: pnpm vitest run packages/secure-exec/tests/node-conformance/runner.test.ts -t 'node/net'", + "All genuinely passing tests have expectations removed", + "Update the test-net-*.js glob pattern reason to be specific about remaining gaps", + "Regenerate report: pnpm tsx scripts/generate-node-conformance-report.ts", "Tests pass", "Typecheck passes" ], "priority": 20, - "passes": true, - "notes": "See spec section 2.6. Builds on existing kernel signal delivery." + "passes": false, + "notes": "Cleanup story for net module." }, { "id": "US-021", - "title": "Implement Node.js HostNetworkAdapter", - "description": "As a developer, I need a concrete HostNetworkAdapter implementation using node:net and node:dgram so the kernel can delegate external I/O.", + "title": "Load TLS fixture files into conformance test VFS", + "description": "As a developer, I need TLS test fixtures (certificates, keys) loaded into the VFS so TLS/HTTPS tests can find them.", "acceptanceCriteria": [ - "Add HostNetworkAdapter implementation in the Node.js driver package (packages/nodejs/ or packages/secure-exec/)", - "tcpConnect(host, port) creates real TCP connection via node:net and returns HostSocket", - "tcpListen(host, port) creates real TCP server and returns HostListener", - "udpBind(host, port) creates real UDP socket via node:dgram and returns HostUdpSocket", - "dnsLookup(hostname, rrtype) uses node:dns", - "HostSocket.write/read/close/setOption/shutdown delegate to real net.Socket", - "HostListener.accept/close/port delegate to real net.Server", + "The conformance runner's loadFixtureFiles() function loads all files from fixtures/ recursively into VFS at /test/fixtures/", + "TLS certificate files (*.pem, *.crt, *.key, *.pfx) are loaded as binary (Uint8Array)", + "Verify fixtures are accessible: a test reading /test/fixtures/keys/agent1-cert.pem gets valid PEM content", + "Verify common/fixtures.js path helper resolves correctly inside VFS", + "Run conformance for tls module: pnpm vitest run packages/secure-exec/tests/node-conformance/runner.test.ts -t 'node/tls' \u2014 check if fixture-dependent tests now pass", + "Remove expectations.json entries for tests that now pass", + "Tests pass", "Typecheck passes" ], "priority": 21, - "passes": true, - "notes": "Concrete implementation of interfaces from US-003. Testing will be via integration tests with real sockets." + "passes": false, + "notes": "~169 TLS/HTTPS test failures are from missing fixture files. The runner already has loadFixtureFiles() but the fixtures directory may not have the upstream Node.js test fixtures vendored. Check if fixtures need to be copied from Node.js source." }, { "id": "US-022", - "title": "Migrate Node.js FD table to kernel (N-1)", - "description": "As a developer, I need the Node.js bridge to use the kernel FD table so file descriptors are shared across runtimes.", + "title": "Fix TLS API gaps (createSecureContext, context options, certificate validation)", + "description": "As a developer, I need core TLS APIs to work for conformance tests.", "acceptanceCriteria": [ - "Remove fdTable Map and nextFd counter from bridge/fs.ts", - "All fdTable.get(fd)/fdTable.set(fd) calls replaced with kernel.fdTable.open()/read()/close() etc.", - "Kernel ProcessFDTable is used for FD allocation", - "Existing fs tests still pass", + "tls.createSecureContext({ key, cert, ca }) returns valid context", + "tls.connect({ host, port, secureContext }) establishes TLS connection", + "tls.createServer({ key, cert }) creates TLS server", + "Server and client exchange data over TLS", + "Certificate validation: rejectUnauthorized option works", + "tls.getCiphers() returns array of supported cipher names", + "Run conformance for tls module \u2014 check newly passing tests", + "Remove expectations.json entries for tests that now pass", + "Tests pass", "Typecheck passes" ], "priority": 22, - "passes": true, - "notes": "See spec section 3.1. Wire bridge to existing kernel ProcessFDTable." + "passes": false, + "notes": "Core TLS API surface. TLS handshake happens in host adapter. Bridge needs to expose createSecureContext, connect, createServer." }, { "id": "US-023", - "title": "Migrate Node.js net.connect to kernel sockets (N-4)", - "description": "As a developer, I need net.connect() to route through kernel.socketTable.connect() so connections share the kernel socket lifecycle.", + "title": "Fix TLS SNI, ALPN, and session resumption", + "description": "As a developer, I need advanced TLS features for conformance tests.", "acceptanceCriteria": [ - "Remove activeNetSockets Map from bridge/network.ts", - "Remove netSockets Map from bridge-handlers.ts (if it exists)", - "net.connect() calls kernel.socketTable.create() then kernel.socketTable.connect()", - "Data flows through kernel socket send/recv", - "Socket close calls kernel.socketTable.close()", - "Existing net tests still pass", + "SNI: server.addContext(hostname, context) for virtual hosting", + "SNI: servername option in tls.connect() sends SNI extension", + "ALPN: ALPNProtocols option in server and client for protocol negotiation", + "Session resumption: tlsSocket.getSession() returns session buffer", + "Session resumption: session option in tls.connect() resumes previous session", + "tlsSocket.getPeerCertificate() returns certificate details object", + "Run conformance for tls module \u2014 check newly passing tests", + "Remove expectations.json entries for tests that now pass", + "Tests pass", "Typecheck passes" ], "priority": 23, - "passes": true, - "notes": "See spec section 3.3. Depends on socket table being wired to kernel (US-016) and host adapter (US-021)." + "passes": false, + "notes": "Advanced TLS features. SNI and ALPN go through host adapter TLS options. Session resumption needs session state caching." }, { "id": "US-024", - "title": "Migrate Node.js http.createServer to kernel sockets (N-2, N-3)", - "description": "As a developer, I need http.createServer() to use kernel.socketTable.listen() so loopback HTTP works without real TCP.", + "title": "Update tls/https expectations and regenerate conformance report", + "description": "As a developer, I need tls and https conformance expectations cleaned up after fixes.", "acceptanceCriteria": [ - "http.createServer().listen(port) calls kernel.socketTable.create() → bind() → listen()", - "For loopback: incoming connections from kernel connect() are kernel sockets", - "For external: kernel calls hostAdapter.tcpListen() for real TCP", - "Remove servers Map, ownedServerPorts Set from driver.ts", - "Remove serverRequestListeners Map from bridge/network.ts", - "HTTP protocol parsing stays in the bridge layer (not kernel)", - "Existing HTTP tests still pass", + "Run full tls conformance: pnpm vitest run packages/secure-exec/tests/node-conformance/runner.test.ts -t 'node/tls'", + "Run full https conformance: pnpm vitest run packages/secure-exec/tests/node-conformance/runner.test.ts -t 'node/https'", + "All genuinely passing tests have expectations removed", + "Vacuous self-skip tests (common.hasCrypto=false) properly categorized", + "Remaining failures have specific reasons", + "Regenerate report: pnpm tsx scripts/generate-node-conformance-report.ts", + "Tests pass", "Typecheck passes" ], "priority": 24, - "passes": true, - "notes": "See spec section 3.2. Highest ROI — unlocks 492 Node.js conformance tests (FIX-01)." + "passes": false, + "notes": "Cleanup story for tls and https modules. ~16 tls tests self-skip via common.hasCrypto \u2014 these are vacuous-skip." }, { "id": "US-025", - "title": "Migrate Node.js SSRF validation to kernel (N-11)", - "description": "As a developer, I need SSRF validation in the kernel so it applies to all runtimes uniformly.", + "title": "Implement dgram.Socket API bridge (createSocket, bind, send, close)", + "description": "As a developer, I need the dgram module bridged so UDP socket operations work in the sandbox.", "acceptanceCriteria": [ - "Remove SSRF validation logic from driver.ts NetworkAdapter", - "Remove ownedServerPorts whitelist from driver.ts", - "kernel.checkNetworkPermission() handles all SSRF checks", - "Loopback to kernel-owned ports is always allowed", - "External connections checked against kernel permission policy", - "Existing SSRF/permission tests still pass", + "dgram.createSocket('udp4') returns Socket instance", + "dgram.createSocket('udp6') returns Socket instance", + "socket.bind(port, address, callback) binds to local address via kernel UDP", + "socket.send(msg, port, address, callback) sends datagram", + "socket.close(callback) closes socket", + "socket.address() returns { address, family, port } after bind", + "'message' event fires with (msg, rinfo) on received datagram", + "'listening' event fires after successful bind", + "Run conformance: pnpm vitest run packages/secure-exec/tests/node-conformance/runner.test.ts -t 'node/dgram' \u2014 check newly passing tests", + "Remove expectations.json entries for tests that now pass", + "Tests pass", "Typecheck passes" ], "priority": 25, - "passes": true, - "notes": "See spec section 3.5. Depends on kernel network permissions (US-011)." + "passes": false, + "notes": "Kernel UDP (SocketTable sendTo/recvFrom) already works. This story bridges the dgram Node.js API to kernel UDP sockets. ~30 tests depend on basic send/recv." }, { "id": "US-026", - "title": "Migrate Node.js child process registry to kernel (N-6)", - "description": "As a developer, I need child process tracking in the kernel process table so all runtimes share process state.", + "title": "Implement dgram multicast, broadcast, and socket options", + "description": "As a developer, I need dgram multicast and broadcast for conformance tests.", "acceptanceCriteria": [ - "Remove activeChildren Map from bridge/child-process.ts", - "Bridge calls kernel.processTable.register() on spawn", - "Bridge queries kernel.processTable.get() for child state/events", - "waitpid/kill route through kernel process table", - "Existing child process tests still pass", + "socket.setBroadcast(flag) enables/disables broadcast", + "socket.setMulticastTTL(ttl) sets multicast time-to-live", + "socket.setMulticastLoopback(flag) enables/disables loopback", + "socket.addMembership(multicastAddress, multicastInterface) joins multicast group", + "socket.dropMembership(multicastAddress, multicastInterface) leaves multicast group", + "socket.setTTL(ttl) sets unicast TTL", + "socket.setRecvBufferSize(size) and socket.setSendBufferSize(size) work", + "Run conformance for dgram module \u2014 check newly passing tests", + "Remove expectations.json entries for tests that now pass", + "Tests pass", "Typecheck passes" ], "priority": 26, - "passes": true, - "notes": "See spec section 3.4." + "passes": false, + "notes": "Multicast operations go through host adapter. setBroadcast/setMulticastTTL are socket options passed through kernel setsockopt." }, { "id": "US-027", - "title": "Route WasmVM socket create/connect through kernel", - "description": "As a developer, I need existing WasmVM TCP to route through the kernel socket table instead of the driver's private _sockets Map.", + "title": "Update dgram expectations and regenerate conformance report", + "description": "As a developer, I need dgram conformance expectations cleaned up after fixes.", "acceptanceCriteria": [ - "WasmVM driver.ts: remove _sockets Map and _nextSocketId counter", - "netSocket handler calls kernel.socketTable.create() instead of allocating local ID", - "netConnect handler calls kernel.socketTable.connect()", - "netSend handler calls kernel.socketTable.send()", - "netRecv handler calls kernel.socketTable.recv()", - "netClose handler calls kernel.socketTable.close()", - "kernel-worker.ts: localToKernelFd maps local WASM FDs to kernel socket FDs", - "Existing WasmVM network tests still pass", + "Run full dgram conformance: pnpm vitest run packages/secure-exec/tests/node-conformance/runner.test.ts -t 'node/dgram'", + "All genuinely passing tests have expectations removed", + "Update test-dgram-*.js glob pattern with specific remaining gap reasons", + "Regenerate report: pnpm tsx scripts/generate-node-conformance-report.ts", + "Tests pass", "Typecheck passes" ], "priority": 27, - "passes": true, - "notes": "See spec section 4.2. Migrates existing working TCP to kernel routing." + "passes": false, + "notes": "Cleanup story for dgram module." }, { "id": "US-028", - "title": "Add bind/listen/accept WASI extensions for WasmVM server sockets", - "description": "As a developer, I need WASI extensions for server sockets so WasmVM programs can accept TCP connections.", + "title": "Implement http2 session and stream basics (connect, request, respond)", + "description": "As a developer, I need basic HTTP/2 client and server functionality.", "acceptanceCriteria": [ - "Add net_bind, net_listen, net_accept to host_net module in native/wasmvm/crates/wasi-ext/src/lib.rs", - "Add safe Rust wrappers following existing pattern (pub fn bind, listen, accept)", - "kernel-worker.ts: add net_bind, net_listen, net_accept import handlers that call kernel.socketTable", - "driver.ts: add kernelSocketBind, kernelSocketListen, kernelSocketAccept RPC handlers", + "http2.createServer(options) creates HTTP/2 server", + "http2.createSecureServer(options) creates HTTP/2 server over TLS", + "http2.connect(authority) creates client session", + "session.request(headers) creates client stream", + "Server 'stream' event fires with (stream, headers) for incoming requests", + "stream.respond(headers) sends response headers", + "stream.end(data) sends response body and closes stream", + "Run conformance: pnpm vitest run packages/secure-exec/tests/node-conformance/runner.test.ts -t 'node/http2' \u2014 check newly passing tests", + "Remove expectations.json entries for tests that now pass", + "Tests pass", "Typecheck passes" ], "priority": 28, - "passes": true, - "notes": "See spec sections 4.3 and 4.5. Rust WASI extensions + JS kernel worker handlers." + "passes": false, + "notes": "HTTP/2 is a major subsystem. This story covers basic request/response lifecycle. h2 framing goes through host adapter's http2 module." }, { "id": "US-029", - "title": "Add C sysroot patches for bind/listen/accept", - "description": "As a developer, I need C libc implementations of bind(), listen(), accept() that call the WASI host imports.", + "title": "Implement http2 server push, settings, and GOAWAY", + "description": "As a developer, I need HTTP/2 server push, settings negotiation, and connection management.", "acceptanceCriteria": [ - "Extend 0008-sockets.patch or create new patch with bind(), listen(), accept() in host_socket.c", - "bind() serializes sockaddr and calls __host_net_bind", - "listen() calls __host_net_listen", - "accept() calls __host_net_accept, maps returned FD, deserializes remote address", - "Patch applies cleanly on wasi-libc", + "stream.pushStream(headers, callback) creates push promise stream", + "session.settings(settings) updates HTTP/2 settings", + "session.localSettings and session.remoteSettings return current settings", + "session.goaway(code) sends GOAWAY frame and closes session", + "session.close() gracefully closes the session", + "session.destroy() immediately destroys the session", + "'goaway' event fires when peer sends GOAWAY", + "Run conformance for http2 module \u2014 check newly passing tests", + "Remove expectations.json entries for tests that now pass", + "Tests pass", "Typecheck passes" ], "priority": 29, - "passes": true, - "notes": "See spec section 4.4 (server socket C code). Builds on existing 0008-sockets.patch pattern." + "passes": false, + "notes": "HTTP/2 advanced features. Server push, settings frames, and GOAWAY connection management." }, { "id": "US-030", - "title": "Add sendto/recvfrom WASI extensions for WasmVM UDP", - "description": "As a developer, I need WASI extensions for UDP so WasmVM programs can send/receive datagrams.", + "title": "Implement http2 flow control, errors, and compatibility mode", + "description": "As a developer, I need HTTP/2 flow control, error handling, and HTTP/1 compatibility.", "acceptanceCriteria": [ - "Add net_sendto, net_recvfrom to host_net module in lib.rs", - "Add safe Rust wrappers", - "kernel-worker.ts: add net_sendto, net_recvfrom import handlers routing through kernel.socketTable", - "driver.ts: add kernelSocketSendTo, kernelSocketRecvFrom RPC handlers", + "Stream flow control: window size updates, backpressure via 'drain' event", + "RST_STREAM: stream.close(code) resets individual streams", + "Error codes: NGHTTP2_* constants available", + "http2.createServer({ allowHTTP1: true }) handles HTTP/1 connections on same port", + "Compatibility API: req/res objects in 'request' event match http.IncomingMessage/ServerResponse", + "Run conformance for http2 module \u2014 check newly passing tests", + "Remove expectations.json entries for tests that now pass", + "Tests pass", "Typecheck passes" ], "priority": 30, - "passes": true, - "notes": "See spec sections 4.3 and 4.5. UDP extensions for WasmVM." + "passes": false, + "notes": "HTTP/2 flow control and compatibility. allowHTTP1 mode lets server handle both h1 and h2. 4 tests already pass for compatibility mode." }, { "id": "US-031", - "title": "Add C sysroot patches for sendto/recvfrom and AF_UNIX", - "description": "As a developer, I need C libc sendto(), recvfrom() implementations and AF_UNIX support in sockaddr serialization.", + "title": "Update http2 expectations and regenerate final conformance report", + "description": "As a developer, I need http2 expectations cleaned up and a final comprehensive report generated.", "acceptanceCriteria": [ - "Add sendto() to host_socket.c patch — serializes dest addr, calls __host_net_sendto", - "Add recvfrom() to host_socket.c patch — calls __host_net_recvfrom, deserializes src addr", - "Add AF_UNIX support in sockaddr_to_string() / string_to_sockaddr() — handles struct sockaddr_un", - "Patch applies cleanly", + "Run full http2 conformance: pnpm vitest run packages/secure-exec/tests/node-conformance/runner.test.ts -t 'node/http2'", + "All genuinely passing tests have expectations removed", + "Update test-http2-*.js glob pattern with specific remaining gap reasons", + "Run FULL conformance suite (all modules): pnpm vitest run packages/secure-exec/tests/node-conformance/runner.test.ts", + "Regenerate report: pnpm tsx scripts/generate-node-conformance-report.ts", + "Commit updated conformance-report.json and nodejs-conformance-report.mdx", + "Tests pass", "Typecheck passes" ], "priority": 31, - "passes": true, - "notes": "See spec section 4.4 (UDP and AF_UNIX C code)." + "passes": false, + "notes": "Final cleanup story. Run entire conformance suite, update all expectations, regenerate both JSON and docs reports." }, { "id": "US-032", - "title": "Add WasmVM server socket C test program and test", - "description": "As a developer, I need a C test program that exercises bind→listen→accept→recv→send→close through the WasmVM.", + "title": "Fix cross-runtime kernel network integration regressions", + "description": "As a developer, I need the Node.js \u2194 WasmVM kernel networking path to work end-to-end so the cross-runtime proof story is actually true.", "acceptanceCriteria": [ - "Add native/wasmvm/c/programs/tcp_server.c that: socket() → bind(port) → listen() → accept() → recv() → send('pong') → close()", - "Add tcp_server to PATCHED_PROGRAMS in Makefile", - "Add packages/wasmvm/test/net-server.test.ts that: spawns tcp_server as WASM, connects from kernel as client, verifies data exchange", + "Fix packages/secure-exec/tests/kernel/cross-runtime-network.test.ts so all scenarios pass locally and in CI", + "WasmVM tcp_server \u2194 Node.js net.connect exchanges data through kernel loopback and the Node side observes the reply", + "Node.js http.createServer \u2194 WasmVM http_get registers a listener in kernel.socketTable and serves the response through the kernel path", + "Do not weaken or delete the failing assertions in cross-runtime-network.test.ts to make the story pass", + "Run: pnpm exec vitest run packages/secure-exec/tests/kernel/cross-runtime-network.test.ts", "Tests pass", "Typecheck passes" ], "priority": 32, - "passes": true, - "notes": "See spec section 4.9. Integration test for WasmVM server sockets." + "passes": false, + "notes": "US-036 from the previous PRD was marked done, but the signature cross-runtime proof test is currently red." }, { "id": "US-033", - "title": "Add WasmVM UDP C test program and test", - "description": "As a developer, I need a C test program that exercises UDP send/recv through the WasmVM.", + "title": "Enforce deny-by-default kernel network permissions", + "description": "As a developer, I need kernel socket operations to fail closed when no network policy is configured.", "acceptanceCriteria": [ - "Add native/wasmvm/c/programs/udp_echo.c that: socket(SOCK_DGRAM) → bind() → recvfrom() → sendto() (echo server)", - "Add udp_echo to PATCHED_PROGRAMS in Makefile", - "Add packages/wasmvm/test/net-udp.test.ts that: spawns udp_echo as WASM, sends datagram, verifies echo response, verifies message boundaries", + "KernelImpl wires a network permission callback into SocketTable when the kernel is created", + "SocketTable.connect/listen/send/sendTo/externalListen deny by default when no allow rule is configured", + "Loopback routing to kernel-owned listeners remains allowed without requiring a host-network allow rule", + "packages/core/test/kernel/network-permissions.test.ts no longer treats 'no policy = no enforcement' as correct behavior", + "Add or update tests covering deny-by-default behavior at the kernel level, not only on standalone SocketTable instances", "Tests pass", "Typecheck passes" ], "priority": 33, - "passes": true, - "notes": "See spec section 4.9." + "passes": false, + "notes": "Current code documents deny-by-default in SocketTable but only enforces when networkCheck exists, and KernelImpl does not provide one." }, { "id": "US-034", - "title": "Add WasmVM Unix domain socket C test program and test", - "description": "As a developer, I need a C test program that exercises AF_UNIX sockets through the WasmVM.", + "title": "Eliminate remaining Node bridge networking bypasses", + "description": "As a developer, I need Node.js networking to use the kernel path consistently instead of shortcutting around it.", "acceptanceCriteria": [ - "Add native/wasmvm/c/programs/unix_socket.c that: socket(AF_UNIX) → bind('/tmp/test.sock') → listen() → accept() → recv/send", - "Add unix_socket to PATCHED_PROGRAMS in Makefile", - "Add packages/wasmvm/test/net-unix.test.ts that: spawns unix_socket WASM, connects from kernel, verifies data exchange", + "Loopback http.request/https.request no longer dispatch directly to an in-process server listener without opening a kernel socket", + "Bridge networkFetchRaw and networkHttpRequestRaw do not bypass kernel routing/policy with a direct adapter call for the kernel-backed path", + "SSRF and network policy checks for the kernel-backed path have a single source of truth in kernel-mediated permission enforcement", + "Any retained fallback path is explicitly documented as legacy and excluded from kernel-consolidation completion claims", + "Add or update tests proving loopback HTTP traffic exercises kernel socket routing rather than direct same-process dispatch", "Tests pass", "Typecheck passes" ], "priority": 34, - "passes": true, - "notes": "See spec section 4.9." + "passes": false, + "notes": "Current loopback HTTP client requests short-circuit in bridge/network.ts, and external fetch/httpRequest still go straight through the adapter." }, { "id": "US-035", - "title": "Add WasmVM signal handler WASI extension and C test", - "description": "As a developer, I need sigaction() support in WasmVM so WASM programs can register signal handlers.", + "title": "Respect omitted network capability in standalone Node runtime", + "description": "As a developer, I need omitted network adapters to keep networking unavailable instead of silently provisioning host access.", "acceptanceCriteria": [ - "Add net_sigaction WASI extension to lib.rs (registers handler function pointer + mask + flags)", - "kernel-worker.ts: store handler pointer in kernel process table on sigaction call", - "Signal delivery at syscall boundary: check pendingSignals bitmask, invoke WASM trampoline", - "Add __wasi_signal_trampoline export in C sysroot patch", - "Add native/wasmvm/c/programs/signal_handler.c: sigaction(SIGINT, handler) → busy loop → verify handler called", - "Add packages/wasmvm/test/signal-handler.test.ts: spawn signal_handler, deliver SIGINT via kernel, verify handler fires", + "If SystemDriver.network is omitted, standalone NodeRuntime does not automatically provision host network access for net/http/socket operations", + "Any internal SocketTable provisioning for standalone mode respects capability omission and fails unavailable/denied by contract", + "Add regression tests covering omitted network adapter behavior for net.connect and http/fetch client operations", + "Update relevant contracts/docs if the intended capability model changes", "Tests pass", "Typecheck passes" ], "priority": 35, - "passes": true, - "notes": "See spec sections 4.8 and 4.9. Cooperative delivery at syscall boundaries." + "passes": false, + "notes": "NodeExecutionDriver currently provisions createNodeHostNetworkAdapter() when no socket table is injected." }, { "id": "US-036", - "title": "Add cross-runtime network integration test", - "description": "As a developer, I need to verify that WasmVM and Node.js can communicate via kernel sockets.", + "title": "Replace regex-based Node loader source transforms", + "description": "As a developer, I need Node module loading to stop relying on regex rewrites that violate repo policy and can drift from real semantics.", "acceptanceCriteria": [ - "Add packages/secure-exec/tests/kernel/cross-runtime-network.test.ts (or packages/core/test/kernel/)", - "Test: WasmVM tcp_server on port 9090, Node.js net.connect(9090) — verify data exchange", - "Test: Node.js http.createServer on port 8080, WasmVM curl-like client connects — verify response", - "Verify loopback: neither connection touches the host network stack", + "Remove regex-based ESM detection/dynamic import rewriting from the active Node loader path", + "Do not use convertEsmToCjs/isESM/transformDynamicImport-style regex transforms in the bridge loading path for JavaScript/TypeScript source", + "Use parser-backed or engine-backed module classification and loading consistent with the repo's code transformation policy", + "Add regression coverage for tricky import/export cases that regex transforms previously mishandled", "Tests pass", "Typecheck passes" ], "priority": 36, - "passes": true, - "notes": "See spec Part 6 cross-runtime integration test. The signature test that kernel consolidation works." + "passes": false, + "notes": "US-056 improved targeted parity, but the implementation still depends on regex-based source transforms in active loader paths." }, { "id": "US-037", - "title": "Run Node.js conformance suite and update expectations for HTTP server tests", - "description": "As a developer, I need to re-run the 492 FIX-01 HTTP server tests and reclassify ones that now pass.", + "title": "Implement full WasmVM sigaction semantics", + "description": "As a developer, I need the WasmVM signal bridge to carry real sigaction semantics instead of a reduced signal()/default/ignore shim.", "acceptanceCriteria": [ - "Run packages/secure-exec/tests/node-conformance/runner.test.ts for FIX-01 tests", - "Remove expectations.json entries for tests that now genuinely pass", - "Update remaining entries with specific failure reasons (not vague 'fails in sandbox')", - "Update docs-internal/nodejs-compat-roadmap.md pass counts", + "The Wasm host import for signal registration carries enough information to represent handler disposition, mask, and flags", + "Kernel worker and driver preserve sigaction mask/flags instead of hardcoding empty mask and zero flags", + "WasmVM user code can exercise SA_RESTART and SA_RESETHAND semantics through the libc-facing signal API", + "The wasi-libc patch exposes the required sigaction behavior rather than only a signal() wrapper", + "Add or update WasmVM tests covering sigaction flags and masked delivery behavior", "Tests pass", "Typecheck passes" ], "priority": 37, - "passes": true, - "notes": "See spec section 7.3. This is the conformance payoff from the kernel consolidation." + "passes": false, + "notes": "Current WasmVM signal support is cooperative and minimal; it does not faithfully carry sigaction masks/flags through the stack." }, { "id": "US-038", - "title": "Run Node.js conformance suite and update expectations for dgram/net/tls tests", - "description": "As a developer, I need to re-run dgram, net, tls, https, http2 tests and reclassify from unsupported-module to specific reasons.", + "title": "Validate socket ownership against the kernel process table", + "description": "As a developer, I need socket ownership to be tied to real kernel processes so process cleanup and isolation rules are meaningful.", "acceptanceCriteria": [ - "Re-run all 76 dgram tests — remove expectations for tests that now pass", - "Re-run https/tls/net glob tests — reclassify from unsupported-module to specific failure reasons", - "Update docs-internal/nodejs-compat-roadmap.md with new pass counts", + "SocketTable.create validates that the owner pid exists when used in a kernel-mediated environment", + "Kernel-exposed socket creation paths do not allow allocating sockets for nonexistent processes", + "Tests no longer rely on creating sockets for arbitrary fake PIDs and treating that as correct behavior", + "Add regression coverage for invalid PID ownership errors and normal process-exit socket cleanup", "Tests pass", "Typecheck passes" ], "priority": 38, - "passes": true, - "notes": "See spec section 7.3. Reclassify stale glob categorizations." + "passes": false, + "notes": "Current integration tests intentionally create sockets for nonexistent PID 99999, which undermines the ownership semantics claimed in the prior PRD." }, { "id": "US-039", - "title": "Proofing: adversarial review of kernel implementation completeness", - "description": "As a developer, I need a full audit verifying no networking code bypasses the kernel in either runtime.", + "title": "Correct vacuous pass accounting in Node conformance expectations", + "description": "As a developer, I need self-skipping conformance tests to stop inflating the genuine pass count.", "acceptanceCriteria": [ - "Verify: packages/nodejs driver.ts has no servers Map, ownedServerPorts Set, netSockets Map, upgradeSockets Map", - "Verify: packages/nodejs bridge/network.ts has no serverRequestListeners Map, activeNetSockets Map", - "Verify: packages/wasmvm driver.ts has no _sockets Map, _nextSocketId counter", - "Verify: all http.createServer() routes through kernel.socketTable.listen()", - "Verify: all net.connect() routes through kernel.socketTable.connect()", - "Verify: SSRF validation is only in kernel, not in host adapter", - "Document any remaining gaps as new stories if found", + "Node conformance expectations that pass only via self-skip are categorized as vacuous-skip instead of genuine pass", + "The runner summary does not count vacuous self-skips as genuine passing tests", + "Audit existing expectations.json entries for reasons like 'passes via common.hasCrypto skip path' and classify them correctly", + "Regenerate any derived conformance counts/docs affected by the classification change", + "Tests pass", "Typecheck passes" ], "priority": 39, - "passes": true, - "notes": "See spec section 7.1. This is the final proofing pass." + "passes": false, + "notes": "Current runner logic only special-cases category=vacuous-skip, but several pass entries already admit they only pass via self-skip." }, { "id": "US-040", - "title": "Remove legacy networking Maps from Node.js driver and bridge", - "description": "As a developer, I need to complete the legacy code removal that US-023/024/025 deferred so all networking routes exclusively through the kernel.", + "title": "Add hard CI guards for story-critical Wasm C artifacts", + "description": "As a developer, I need CI to fail when required Wasm C binaries for story verification are missing instead of silently skipping the suites.", "acceptanceCriteria": [ - "Remove `servers` Map (line ~294) from packages/nodejs/src/driver.ts and all references to it (httpServerListen, httpServerClose handlers)", - "Remove `ownedServerPorts` Set (line ~296) from driver.ts and all references (fetch, httpRequest SSRF checks)", - "Remove `upgradeSockets` Map (line ~298) from driver.ts and all references (upgrade handlers)", - "Remove `activeNetSockets` Map (line ~2042) from packages/nodejs/src/bridge/network.ts and all references (dispatch routing, connect)", - "All HTTP server operations route through kernel.socketTable — verify with grep: no direct net.Server or http.Server creation in driver.ts outside of HostNetworkAdapter", - "All net.connect operations route through kernel.socketTable — verify with grep: no direct net.Socket creation in bridge/network.ts outside of HostNetworkAdapter", - "SSRF validation uses only kernel.checkNetworkPermission, not ownedServerPorts", - "Existing tests pass: run `pnpm vitest run packages/secure-exec/tests/test-suite/node.test.ts` and `pnpm vitest run packages/secure-exec/tests/runtime-driver/`", + "CI has explicit hard-fail coverage for the C-built Wasm binaries required by net-server, net-udp, net-unix, signal-handler, and cross-runtime-network tests", + "Story-critical Wasm C integration suites do not rely solely on describe.skipIf(skipReason()) for missing artifacts in CI", + "Document the required build step or guard location for future contributors", "Tests pass", "Typecheck passes" ], "priority": 40, - "passes": true, - "notes": "Addresses review finding H-1. US-024 added kernel socket path alongside legacy adapter path but never removed the legacy path. US-039 audit rationalized this as 'fallback' — it must be removed now. Read docs-internal/reviews/kernel-consolidation-prd-review.md for context." + "passes": false, + "notes": "Standalone command binaries already have a CI guard, but the C-built story artifacts still disappear behind skip guards when missing." }, { "id": "US-041", - "title": "Fix CI crossterm build and verify WASM test programs compile and run", - "description": "As a developer, I need CI to pass on this branch so WASM binaries are built and skip-guarded tests actually execute.", + "title": "Strengthen kernel networking tests against overfitting", + "description": "As a developer, I need network verification that catches shared bugs instead of validating only mocked or same-code paths.", "acceptanceCriteria": [ - "Identify and fix the crossterm crate compilation failure for wasm32-wasip1 (likely needs feature gate or dependency exclusion in native/wasmvm/crates/)", - "Run `cd native/wasmvm && make wasm` locally — all WASM command binaries build successfully in target/wasm32-wasip1/release/commands/", - "Run `cd native/wasmvm/c && make` — all PATCHED_PROGRAMS (including tcp_server, udp_echo, unix_socket, signal_handler) compile to c/build/", - "Run `pnpm vitest run packages/wasmvm/test/net-server.test.ts` — tests execute (not skipped) and pass", - "Run `pnpm vitest run packages/wasmvm/test/net-udp.test.ts` — tests execute (not skipped) and pass", - "Run `pnpm vitest run packages/wasmvm/test/net-unix.test.ts` — tests execute (not skipped) and pass", - "Run `pnpm vitest run packages/wasmvm/test/signal-handler.test.ts` — tests execute (not skipped) and pass", - "If any C sysroot patch (0008-sockets.patch, 0011-sigaction.patch) fails to apply, fix the patch hunks", + "Add real-server control tests for network features where the client side is validated independently of the server side", + "Add wire-level or protocol-level verification for loopback HTTP/TLS behavior where both ends currently go through project code", + "Add project-matrix or equivalent black-box coverage for kernel-backed HTTP server/client behavior using a real package where practical", + "Do not rely solely on MockHostSocket/MockHostListener unit tests for external routing stories", + "Update docs-internal/nodejs-compat-roadmap.md or related verification docs if the required mitigations change", "Tests pass", "Typecheck passes" ], "priority": 41, - "passes": true, - "notes": "Addresses review findings H-2, H-3, S-1. The C programs and patches were committed by US-029/031/032-035 but never compiled or tested because WASM binaries were never built. This story requires the Rust toolchain (rustup will install from rust-toolchain.toml) and wasm-opt/binaryen." + "passes": false, + "notes": "Current lower-level tests are heavily mock-based, and a same-repo loopback HTTP bridge test passing did not prevent the end-to-end cross-runtime failure." }, { "id": "US-042", - "title": "Wire kernel TimerTable and handle tracking to Node.js bridge", - "description": "As a developer, I need the Node.js bridge to use kernel timer and handle tracking so resource budgets are kernel-enforced.", + "title": "Retarget kernel-consolidation verification suites away from legacy adapter paths", + "description": "As a developer, I need the verification suites for kernel-consolidation work to exercise the kernel-backed runtime path instead of the legacy default-network adapter path.", "acceptanceCriteria": [ - "KernelImpl constructor creates a TimerTable instance and exposes it as kernel.timerTable", - "In packages/nodejs/src/bridge/process.ts: replace bridge-local `_timerId` counter (line ~975) and `_timers`/`_intervals` Maps (lines ~976-977) with calls to kernel.timerTable.createTimer() and kernel.timerTable.clearTimer()", - "In packages/nodejs/src/bridge/active-handles.ts: replace bridge-local `_activeHandles` Map (line ~18) with calls to kernel processTable.registerHandle()/unregisterHandle()", - "Timer budget enforcement works: setting a timer limit on the kernel causes excess setTimeout calls to throw", - "Handle budget enforcement works: setting a handle limit causes excess handle registrations to throw", - "Process exit cleans up all timers and handles for that process via kernel", - "Existing timer tests pass: run `pnpm vitest run packages/secure-exec/tests/test-suite/node.test.ts`", + "Identify tests that were used as evidence for kernel-consolidation stories but still instantiate createDefaultNetworkAdapter/useDefaultNetwork legacy paths", + "Add or migrate coverage so kernel-consolidation completion claims are backed by createNodeRuntime/kernel-mounted execution where appropriate", + "Preserve legacy adapter-path tests as compatibility coverage, but do not use them as proof that kernel migration work is complete", + "Document the distinction between legacy adapter-path coverage and kernel-backed coverage in the relevant roadmap/review docs", "Tests pass", "Typecheck passes" ], "priority": 42, - "passes": true, - "notes": "Addresses review finding H-12. US-017 created TimerTable and US-018 added handle tracking to ProcessTable, but neither was wired to the Node.js bridge. The bridge still uses bridge-local Maps. This story connects the kernel infrastructure to the runtime." + "passes": false, + "notes": "Several existing Node suites still validate the standalone default-network path, which inflated confidence in the kernel migration stories." }, { "id": "US-043", - "title": "Route WasmVM setsockopt through kernel instead of ENOSYS", - "description": "As a developer, I need WasmVM setsockopt to route through the kernel SocketTable so socket options actually work for WASM programs.", + "title": "Implement VFS correctness primitives for POSIX file semantics", + "description": "As a developer, I need correct inode and directory semantics in the in-memory VFS so the kernel matches POSIX behavior for file creation, truncation, links, and readdir.", "acceptanceCriteria": [ - "In packages/wasmvm/src/kernel-worker.ts: replace the ENOSYS stub at line ~984-987 in net_setsockopt with a call that routes through RPC to the kernel", - "In packages/wasmvm/src/driver.ts: add a kernelSocketSetopt RPC handler that calls kernel.socketTable.setsockopt(socketId, level, optname, optval)", - "Add getsockopt support similarly: kernel-worker net_getsockopt routes through RPC to kernel.socketTable.getsockopt()", - "Add test to packages/wasmvm/test/net-socket.test.ts: WASM program calls setsockopt(SO_REUSEADDR) and it succeeds (no ENOSYS)", + "Implement O_EXCL in kernel fdOpen so O_CREAT|O_EXCL returns EEXIST when the target already exists", + "Implement O_TRUNC in kernel fdOpen so opening an existing file with O_TRUNC truncates it to zero bytes", + "Add a monotonic inode allocator so each new file and directory gets a unique ino value instead of 0", + "Track nlink correctly for regular files, directories, and hard links", + "readdir/listDirEntries synthesizes '.' and '..' entries with correct parent/self behavior", + "Add kernel integration tests that exercise these behaviors through the real kernel and VFS, not mocks", + "Run targeted tests for packages/core kernel/VFS coverage", "Tests pass", "Typecheck passes" ], "priority": 43, - "passes": true, - "notes": "Addresses review finding M-10. kernel-worker.ts line 984 currently hardcodes `return ENOSYS` for net_setsockopt. The kernel SocketTable already has a working setsockopt() implementation at socket-table.ts line ~464." + "passes": false, + "notes": "Reference: docs-internal/posix-gaps-audit.md. Primary files: packages/core/src/kernel/kernel.ts and packages/core/src/shared/in-memory-fs.ts." }, { "id": "US-044", - "title": "Implement SA_RESTART syscall restart logic", - "description": "As a developer, I need blocking syscalls to restart after a signal handler returns when SA_RESTART is set, matching POSIX behavior.", + "title": "Add unified blocking I/O wait primitives to the JS kernel", + "description": "As a developer, I need blocking kernel operations to suspend and wake correctly instead of returning EAGAIN for blocking callers.", "acceptanceCriteria": [ - "In packages/core/src/kernel/socket-table.ts: recv() and accept() check for pending signals during blocking waits", - "When a signal interrupts a blocking recv/accept and the handler has SA_RESTART: the syscall transparently restarts (re-enters the wait loop)", - "When a signal interrupts a blocking recv/accept and the handler does NOT have SA_RESTART: the syscall returns EINTR error", - "Add tests to packages/core/test/kernel/signal-handlers.test.ts: (1) SA_RESTART recv restarts after signal, (2) no SA_RESTART recv returns EINTR, (3) SA_RESTART accept restarts after signal", + "Add a reusable wait/wake mechanism with timeout support for kernel blocking operations", + "Blocking pipe write in pipe-manager.ts suspends the writer when the buffer is full and wakes when readers drain data", + "Blocking flock in file-lock.ts waits for lock release when LOCK_NB is not set and preserves non-blocking EAGAIN semantics for LOCK_NB", + "poll timeout -1 blocks indefinitely until an event occurs instead of hardcapping to 30 seconds", + "The implementation is exercised through real kernel integration tests rather than mocks", + "Add tests for blocking pipe writes, blocking flock, and indefinite poll wake-up behavior", "Tests pass", "Typecheck passes" ], "priority": 44, - "passes": true, - "notes": "Addresses review finding H-4. US-020 defined SA_RESTART constant (0x10000000) and stores it on signal handlers, but no blocking syscall checks it. EINTR error code was added to KernelErrorCode 'for future SA_RESTART integration' — this story does that integration." + "passes": false, + "notes": "Reference: docs-internal/posix-gaps-audit.md. Primary files: packages/core/src/kernel/pipe-manager.ts, packages/core/src/kernel/file-lock.ts, and packages/core/src/drivers/." }, { "id": "US-045", - "title": "Implement O_NONBLOCK enforcement in socket operations", - "description": "As a developer, I need socket operations to respect the nonBlocking flag so non-blocking I/O works correctly.", + "title": "Implement deferred unlink with inode-backed open-file lifetime", + "description": "As a developer, I need unlinked files with open descriptors to remain accessible until the last FD closes.", "acceptanceCriteria": [ - "In socket-table.ts: recv() on a socket with nonBlocking=true returns EAGAIN immediately when readBuffer is empty (instead of waiting)", - "In socket-table.ts: accept() on a socket with nonBlocking=true returns EAGAIN immediately when backlog is empty", - "In socket-table.ts: connect() on a socket with nonBlocking=true to an external address returns EINPROGRESS", - "Add setsockopt or fcntl-style method to toggle nonBlocking flag on an existing socket", - "Add tests to packages/core/test/kernel/socket-flags.test.ts: (1) nonBlocking recv returns EAGAIN, (2) nonBlocking accept returns EAGAIN, (3) toggle nonBlocking via setsockopt/fcntl", + "removeFile removes the directory entry immediately but preserves file data while open FD references remain", + "Kernel FD open/close paths maintain inode-backed open reference counts", + "The final close of an unlinked file releases the underlying inode data", + "Path lookups fail immediately after unlink while existing file descriptors continue to work", + "Add integration tests covering unlink-with-open-FD, access after unlink via FD, and deletion after final close", "Tests pass", "Typecheck passes" ], "priority": 45, - "passes": true, - "notes": "Addresses review finding M-7. The nonBlocking field exists on KernelSocket (line ~116) and is initialized to false (line ~189) but is never read by recv/accept/connect. Spec section 4.7 describes the expected O_NONBLOCK behavior." + "passes": false, + "notes": "This depends on inode allocation/refcount work from the VFS correctness story." }, { "id": "US-046", - "title": "Implement backlog limit and loopback port 0 ephemeral assignment", - "description": "As a developer, I need listen() to enforce backlog limits and bind() to support port 0 for loopback sockets.", + "title": "Implement signal handler registry and signal masking in the JS kernel", + "description": "As a developer, I need sigaction and sigprocmask semantics so processes can catch, defer, and restart around signals.", "acceptanceCriteria": [ - "In socket-table.ts listen(): use the backlogSize parameter (currently prefixed with _ and unused at line ~297) to cap the backlog array length", - "When backlog is full, new loopback connections get ECONNREFUSED", - "In socket-table.ts bind(): when port is 0, assign an ephemeral port from range 49152-65535 that is not already in the listeners map", - "After ephemeral port assignment, socket.localAddr.port reflects the assigned port (not 0)", - "Add tests to packages/core/test/kernel/socket-table.test.ts: (1) listen with backlog=2, connect 3 times, 3rd gets ECONNREFUSED, (2) bind port 0 assigns ephemeral port, (3) two bind port 0 get different ports", + "Add a per-process signal handler table supporting registered handlers, SIG_IGN, and SIG_DFL", + "Implement sigaction(signal, handler, flags) with support for SA_RESTART and SA_RESETHAND", + "Implement per-process signal masks via sigprocmask(how, set)", + "Blocked signals are queued in a pending signal queue and delivered when unmasked", + "When a handler is registered, signal delivery invokes the handler instead of the default action", + "Add real kernel integration tests for handler registration, masking/unmasking, pending delivery, SA_RESTART, and SA_RESETHAND", "Tests pass", "Typecheck passes" ], "priority": 46, - "passes": true, - "notes": "Addresses review findings M-9 (backlog overflow) and M-8 (port 0). Both are small changes in socket-table.ts combined into one story." + "passes": false, + "notes": "Build on existing kernel signal delivery in packages/core/src/kernel/kernel.ts and process-table logic." }, { "id": "US-047", - "title": "Add getLocalAddr/getRemoteAddr methods and WasmVM getsockname/getpeername", - "description": "As a developer, I need formal SocketTable accessor methods and WasmVM WASI extensions so C programs can call getsockname()/getpeername().", + "title": "Build TCP server socket lifecycle in the JS kernel", + "description": "As a developer, I need bind/listen/accept support so the kernel can host passive TCP listeners with proper backlog behavior.", "acceptanceCriteria": [ - "Add SocketTable.getLocalAddr(socketId): SockAddr method that returns socket.localAddr (throws EBADF if socket doesn't exist)", - "Add SocketTable.getRemoteAddr(socketId): SockAddr method that returns socket.remoteAddr (throws ENOTCONN if not connected)", - "Add net_getsockname and net_getpeername to host_net module in native/wasmvm/crates/wasi-ext/src/lib.rs", - "Add safe Rust wrappers following existing pattern", - "kernel-worker.ts: add net_getsockname and net_getpeername import handlers that call kernel.socketTable.getLocalAddr/getRemoteAddr via RPC", - "driver.ts: add kernelSocketGetLocalAddr and kernelSocketGetRemoteAddr RPC handlers", - "Add C implementations in sysroot patch: getsockname() calls __host_net_getsockname, getpeername() calls __host_net_getpeername", - "Add test: kernel socket after connect has correct localAddr and remoteAddr", + "Implement bind(fd, addr, port) for server sockets", + "Implement listen(fd, backlog) and a pending connection queue with enforced backlog limit", + "Implement accept(fd) returning a new connected socket FD", + "Listening sockets report readable in poll when connections are pending", + "Driver-side integration uses Node.js net.createServer() for host-backed TCP listeners where required", + "Add kernel integration tests exercising bind/listen/accept/poll through the real kernel path", "Tests pass", "Typecheck passes" ], "priority": 47, - "passes": true, - "notes": "Addresses review finding H-9. Data is already accessible via socketTable.get(id).localAddr but formal methods and WasmVM WASI extensions are missing. Follows existing WASI extension pattern: Rust extern → kernel-worker handler → driver RPC." + "passes": false, + "notes": "Reference: docs-internal/posix-gaps-audit.md. This is the main server-socket foundation for POSIX networking." }, { "id": "US-048", - "title": "Wire InodeTable into VFS for deferred unlink and real nlink/ino", - "description": "As a developer, I need the InodeTable integrated into the VFS so stat() returns real inode numbers, hard links work, and unlinked-but-open files persist until last FD closes.", + "title": "Implement AF_UNIX stream and datagram sockets in-kernel", + "description": "As a developer, I need Unix domain sockets for local IPC patterns like docker.sock, ssh-agent, and socket activation.", "acceptanceCriteria": [ - "KernelImpl constructor creates an InodeTable instance and exposes it as kernel.inodeTable", - "In packages/core/src/shared/in-memory-fs.ts: each file/directory gets an inode via inodeTable.allocate() on creation", - "stat() returns the inode's ino number instead of a hash or 0", - "stat() returns the inode's nlink count instead of hardcoded 1", - "In in-memory-fs.ts removeFile(): when file has open FDs (openRefCount > 0), remove directory entry but keep data — file disappears from listings but stays readable via open FDs", - "When last FD to an unlinked file closes (decrementOpenRefs → shouldDelete=true), data is deleted", - "fdOpen() calls inodeTable.incrementOpenRefs(ino), fdClose() calls inodeTable.decrementOpenRefs(ino)", - "Add tests to packages/core/test/kernel/inode-table.test.ts: (1) stat returns real ino, (2) unlink with open FD keeps data, (3) close last FD deletes data, (4) nlink increments on hard link", + "Implement AF_UNIX namespace registration keyed by VFS path", + "Support bind/connect/listen/accept for AF_UNIX SOCK_STREAM sockets", + "Support AF_UNIX SOCK_DGRAM semantics with path-based addressing", + "Implement socketpair() for anonymous connected Unix socket pairs", + "Implement shutdown() half-close behavior for AF_UNIX streams", + "Binding a Unix socket path creates the appropriate VFS entry and cleanup behavior", + "Add kernel integration tests covering stream, datagram, socketpair, connect-by-path, and shutdown behavior through the real kernel", "Tests pass", "Typecheck passes" ], "priority": 48, - "passes": true, - "notes": "InodeTable was created by US-002 with full allocate/incrementLinks/decrementLinks/shouldDelete logic but was never wired into the kernel or VFS. in-memory-fs.ts removeFile() at line ~201 immediately deletes with no refcounting. stat() returns hardcoded nlink:1 at line ~152." + "passes": false, + "notes": "AF_UNIX is entirely in-kernel and should not depend on host networking." }, { "id": "US-049", - "title": "Add '.' and '..' entries to readdir", - "description": "As a developer, I need readdir to include '.' and '..' entries to match POSIX behavior.", + "title": "Add UDP transport support to the JS kernel", + "description": "As a developer, I need connectionless datagram transport with source-address reporting.", "acceptanceCriteria": [ - "In packages/core/src/shared/in-memory-fs.ts listDirEntries(): prepend '.' (self) and '..' (parent) to the entry list before returning real entries", - "'.' entry has the directory's own inode number (if InodeTable is wired) and type DT_DIR", - "'..' entry has the parent directory's inode number and type DT_DIR; for root '/' the parent is itself", - "Existing readdir tests still pass (they may need updating if they assert exact entry counts)", - "Add test: readdir('/tmp') includes '.', '..', and any files in /tmp", - "Add test: readdir('/') has '..' pointing to itself", + "Implement sendto(fd, buf, addr, port) and recvfrom(fd, buf) with source address reporting", + "UDP sockets do not use the TCP connection state machine", + "Driver-side integration uses Node.js dgram for host-backed UDP where required", + "Add real kernel integration tests for UDP sendto/recvfrom behavior through the actual kernel path", "Tests pass", "Typecheck passes" ], "priority": 49, - "passes": true, - "notes": "in-memory-fs.ts listDirEntries() at lines ~43-74 builds entries from the files/dirs Maps but never adds '.' or '..'. Many POSIX programs and test suites expect these." + "passes": false, + "notes": "Reference: docs-internal/posix-gaps-audit.md. Keep the kernel transport semantics separate from later wasi-ext/C wiring." }, { "id": "US-050", - "title": "Implement O_EXCL and O_TRUNC in kernel fdOpen", - "description": "As a developer, I need O_EXCL and O_TRUNC flags honored by fdOpen so file creation and truncation match POSIX semantics.", + "title": "Implement socket options and per-call send/recv flags", + "description": "As a developer, I need basic setsockopt/getsockopt and per-call flags across socket types.", "acceptanceCriteria": [ - "In packages/core/src/kernel/kernel.ts or fd-table.ts: when O_CREAT | O_EXCL is set and the file already exists, return EEXIST error", - "When O_TRUNC is set and the file exists, truncate file contents to zero bytes on open", - "O_EXCL without O_CREAT is ignored (POSIX behavior)", - "Add tests: (1) O_CREAT|O_EXCL on new file succeeds, (2) O_CREAT|O_EXCL on existing file returns EEXIST, (3) O_TRUNC truncates existing file, (4) O_TRUNC on new file with O_CREAT creates empty file", + "Add a per-socket options map in the kernel with setsockopt/getsockopt support", + "Support at minimum SO_REUSEADDR, SO_KEEPALIVE, TCP_NODELAY, SO_RCVBUF, and SO_SNDBUF", + "TCP_NODELAY passes through to host TCP sockets where applicable", + "SO_REUSEADDR affects bind behavior for server sockets", + "Implement MSG_PEEK and MSG_DONTWAIT for send/recv style operations", + "Add real kernel integration tests for socket options and per-call flags across supported socket types", "Tests pass", "Typecheck passes" ], "priority": 50, - "passes": true, - "notes": "O_EXCL (0o200) and O_TRUNC (0o1000) are defined as constants in types.ts but fdOpen never checks them. The open() method in fd-table.ts line ~91 only handles O_CLOEXEC." + "passes": false, + "notes": "This story enhances the socket types built by the TCP, AF_UNIX, and UDP stories." }, { "id": "US-051", - "title": "Implement blocking flock with WaitQueue", - "description": "As a developer, I need flock() to block when a conflicting lock is held instead of returning EAGAIN, using the kernel's WaitQueue.", + "title": "Populate /proc entries for process and FD introspection", + "description": "As a developer, I need core /proc entries so programs can inspect process metadata and open descriptors.", "acceptanceCriteria": [ - "In packages/core/src/kernel/file-lock.ts: add a WaitQueue (from kernel/wait.ts) to each lock entry", - "When flock() detects a conflict and nonBlocking is false, enqueue a WaitHandle and await it instead of returning EAGAIN", - "When a lock is released (unlock), wake one waiter from the WaitQueue so the next flock() caller acquires the lock", - "Blocking flock with a timeout: use WaitHandle timeout to implement POSIX-like behavior", - "Non-blocking flock (LOCK_NB) still returns EAGAIN immediately on conflict", - "Add tests: (1) process A holds exclusive lock, process B flock() blocks until A unlocks, (2) LOCK_NB returns EAGAIN, (3) multiple waiters are served FIFO", + "Populate /proc/self/exe with the process binary path via symlink or readable entry semantics", + "Populate /proc/self/cwd as a symlink-like entry to the current working directory", + "Populate /proc/self/environ with process environment variables", + "Populate /proc/self/fd as a dynamic directory listing current open FD numbers", + "The implementation is provided by the real kernel/device layer rather than test-only scaffolding", + "Add integration tests covering the above /proc entries through the actual kernel", "Tests pass", "Typecheck passes" ], "priority": 51, - "passes": true, - "notes": "file-lock.ts line ~60 currently throws EAGAIN on conflict even when nonBlocking is false, with comment 'Blocking not implemented'. WaitQueue from US-001 is the intended mechanism." - }, - { - "id": "US-052", - "title": "Implement blocking pipe write with WaitQueue", - "description": "As a developer, I need pipe write() to block when the buffer is full instead of returning EAGAIN, using the kernel's WaitQueue.", - "acceptanceCriteria": [ - "In packages/core/src/kernel/pipe-manager.ts: add writeWaiters WaitQueue to pipe state", - "When write() detects buffer full (currentSize + data.length > MAX_PIPE_BUFFER_BYTES) and pipe is blocking, enqueue a WaitHandle and await it instead of returning EAGAIN", - "When read() consumes data from the buffer, wake one write waiter so the blocked write can proceed", - "Non-blocking pipes (O_NONBLOCK) still return EAGAIN immediately when buffer is full", - "Partial writes: if only N bytes fit, write N bytes, wake reader, then block for the remainder", - "Add tests: (1) write to full pipe blocks until reader drains, (2) non-blocking pipe write returns EAGAIN, (3) partial write then block", - "Tests pass", - "Typecheck passes" - ], - "priority": 52, - "passes": true, - "notes": "pipe-manager.ts lines ~106-108 return EAGAIN when buffer is full regardless of blocking mode. WaitQueue from US-001 is the intended mechanism. Read waiters already exist (readWaiters) but write waiters do not." - }, - { - "id": "US-053", - "title": "Implement true poll timeout -1 infinite blocking", - "description": "As a developer, I need poll() with timeout -1 to block indefinitely until an FD becomes ready, not cap at 30 seconds.", - "acceptanceCriteria": [ - "In packages/wasmvm/src/driver.ts netPoll handler: when timeout < 0, loop with WaitQueue waits instead of capping at 30s", - "Each iteration checks all polled FDs for readiness; if none ready, re-enter wait", - "When any polled FD becomes ready (data arrives, connection accepted, pipe written), the wait is woken", - "poll() with timeout 0 still returns immediately (non-blocking poll)", - "poll() with timeout > 0 still uses the specified timeout in milliseconds", - "Add test to packages/wasmvm/test/: poll with timeout -1 on a pipe, write to pipe from another process, verify poll returns", - "Tests pass", - "Typecheck passes" - ], - "priority": 53, - "passes": true, - "notes": "driver.ts line ~1136 sets waitMs=30000 when timeout<0. This means long-running WASM programs using poll(-1) will spuriously wake every 30s. The fix should use WaitQueue wake notifications from socket/pipe data arrival." - }, - { - "id": "US-054", - "title": "Populate /proc filesystem with basic entries", - "description": "As a developer, I need /proc populated with standard entries so programs that read /proc/self/* work correctly.", - "acceptanceCriteria": [ - "In packages/core/src/kernel/kernel.ts: populate /proc during kernel init with a proc device layer", - "/proc/self is a symlink-like entry that resolves to /proc/", - "/proc/self/fd/ lists open file descriptors for the current process (from kernel ProcessFDTable)", - "/proc/self/exe is a symlink or readable entry returning the process binary path", - "/proc/self/cwd contains the current working directory path", - "/proc/self/environ contains environment variables (or empty if sandboxed)", - "Reading /proc/self/fd/ returns info about that FD", - "Add tests: (1) readdir /proc/self/fd returns open FD numbers, (2) readlink /proc/self/fd/0 returns stdin path, (3) readFile /proc/self/cwd returns cwd", - "Tests pass", - "Typecheck passes" - ], - "priority": 54, - "passes": true, - "notes": "kernel.ts line ~148 creates /proc as an empty directory. No proc entries are populated. Programs that check /proc/self/fd or /proc/self/cwd fail. This needs a virtual device layer that generates content dynamically from kernel state." - }, - { - "id": "US-055", - "title": "Implement SA_RESETHAND (one-shot signal handler)", - "description": "As a developer, I need SA_RESETHAND support so signal handlers can be automatically reset to SIG_DFL after first invocation.", - "acceptanceCriteria": [ - "Add SA_RESETHAND constant (0x80000000) to packages/core/src/kernel/types.ts alongside existing SA_RESTART", - "In process-table.ts signal delivery: when handler has SA_RESETHAND flag, reset handler to SIG_DFL after invoking it once", - "sigaction() accepts SA_RESETHAND flag and stores it on the handler", - "SA_RESETHAND + SA_RESTART can be combined (both flags honored)", - "Add tests to packages/core/test/kernel/signal-handlers.test.ts: (1) handler with SA_RESETHAND fires once then reverts to default, (2) second delivery of same signal uses default action, (3) SA_RESETHAND | SA_RESTART works", - "Tests pass", - "Typecheck passes" - ], - "priority": 55, - "passes": true, - "notes": "SA_RESETHAND is a POSIX sigaction flag for one-shot handlers. The spec lists it alongside SA_RESTART. US-020 implemented sigaction but only SA_RESTART flag — SA_RESETHAND was missed." - }, - { - "id": "US-056", - "title": "Finish Node.js ESM parity for exec(), import conditions, and dynamic import failures", - "description": "As a developer, I need SecureExec's Node runtime to execute ESM entrypoints with Node-like semantics so package exports, type=module, built-in ESM imports, and dynamic import all behave correctly inside the sandbox.", - "acceptanceCriteria": [ - "Verify and keep passing the ESM runtime-driver tests for: package exports/import entrypoints, deep ESM import chains, 1000-module graphs, package type module .js entrypoints, Node built-in ESM imports, and dynamic import success paths", - "exec(code, { filePath: '/entry.mjs' }) runs the entry as ESM instead of compiling it as CommonJS", - "ESM resolution uses import conditions for V8 module loading, while require() inside the same execution still uses require conditions", - "Built-in ESM imports like node:fs and node:path expose both default and named exports", - "Dynamic import success paths pass in sandbox for relative .mjs modules, including namespace caching on repeated imports", - "Dynamic import error paths pass in sandbox for missing module, syntax error, and evaluation error cases with non-zero exit codes and preserved error messages", - "Run pnpm exec vitest run tests/runtime-driver/node/index.test.ts with the ESM/dynamic-import-focused filter and record the first concrete failing case if any remain", - "Typecheck passes" - ], - "priority": 56, - "passes": true, - "notes": "Verified in this branch on 2026-03-24: the focused runtime-driver slice now passes for ESM entry execution, package exports, type=module .js entrypoints, built-in ESM imports, successful dynamic imports, and dynamic-import missing-module/syntax/evaluation error paths. The remaining gap was closed by propagating async entrypoint rejections through the native V8 exec path and resolving dynamic imports with import conditions." - }, - { - "id": "US-057", - "title": "Fix top-level await semantics for Node.js ESM execution", - "description": "As a developer, I need top-level await in sandboxed ESM to block execution until completion so modules with long async startup behave like Node.js.", - "acceptanceCriteria": [ - "Add focused runtime-driver coverage for top-level await in entry modules and transitive imported modules", - "An ESM entrypoint with top-level await does not return early before the awaited work completes", - "Dynamic import of a module that contains top-level await waits for that module's completion before resolving", - "Long-running awaited work respects cpuTimeLimitMs and surfaces timeout errors correctly", - "Document the final behavior in docs-internal/friction.md and remove or update the existing top-level-await friction note when fixed", - "Run the targeted top-level-await tests through the SecureExec sandbox, not host Node.js", - "Typecheck passes" - ], - "priority": 57, - "passes": true, - "notes": "This is the follow-up for the long-standing 'ESM + top-level await can return early' runtime gap. The user request said 'top-level weights'; treated here as top-level await." + "passes": false, + "notes": "Lower urgency than the core POSIX data path work, but required for many userland tools." } ] } diff --git a/scripts/ralph/progress.txt b/scripts/ralph/progress.txt index f9451399..2017a0ea 100644 --- a/scripts/ralph/progress.txt +++ b/scripts/ralph/progress.txt @@ -1,1268 +1,126 @@ ## Codebase Patterns -- Native V8 `StreamEvent` payloads are not always V8-serialized; `native/v8-runtime/src/stream.rs` must fall back to UTF-8 JSON/string decoding or timer/stream dispatch callbacks can stall silently -- Keep `MODULE_RESOLVE_STATE` alive until async ESM execution fully finalizes; native top-level await plus dynamic `import()` still needs the bridge context and module cache after `execute_module()` first returns -- `packages/v8/src/runtime.ts` prefers `native/v8-runtime/target/release/secure-exec-v8` over debug builds, so rebuild the release binary before validating native V8 runtime changes through package tests -- After editing `packages/core/isolate-runtime/src/inject/*`, regenerate `packages/core/src/generated/isolate-runtime.ts` via `node packages/nodejs/scripts/build-isolate-runtime.mjs` before running Node runtime tests -- Bridge handler callbacks that need optional dispatch arguments should accept them explicitly; do not inspect extra bridge-call args through `arguments` inside arrow functions -- In `ProcessTable` signal delivery, apply one-shot disposition resets before `deliverPendingSignals()` so a same-signal delivery queued during the handler observes `SIG_DFL` instead of reusing the old callback -- Keep procfs state canonical in `packages/core/src/kernel/proc-layer.ts` as `/proc/` entries, and resolve `/proc/self` only in per-process runtime/VFS adapters where the current PID is known -- Cross-package tests that import workspace packages like `@secure-exec/core` execute the built `dist` output; rebuild the changed package with `pnpm turbo run build --filter=` before Vitest runs or you'll exercise stale JS -- `FileLockManager.flock()` is async; keep blocking advisory locks bounded with a timed `WaitQueue` retry loop and wake the next waiter from every last-reference unlock path -- For bounded blocking producers like `PipeManager.write()`, commit any bytes that fit before enqueueing a `WaitQueue`, and wake blocked writers from both drain paths and close paths so waits cannot hang -- `KernelInterface.fdOpen()` is synchronous, so open-time file semantics must go through sync-capable VFS hooks threaded through device/permission wrappers instead of async read/write fallbacks -- When `InMemoryFileSystem` exposes POSIX-only `.` / `..` directory entries, keep Node semantics by filtering them in `packages/nodejs/src/bridge-handlers.ts` before they reach `fs.readdir()` -- Kernel-owned `InMemoryFileSystem` instances must be rebound to `kernel.inodeTable` via `setInodeTable(...)` before device/permission wrapping; deferred-unlink FD I/O should use raw inode helpers (`readFileByInode`, `writeFileByInode`, `statByInode`) instead of pathname lookups -- `PtyManager` raw-mode bulk input still applies `icrnl`; translate the whole chunk before `deliverInput()` so oversized writes fail atomically with `EAGAIN` instead of partially buffering data -- Deferred unlink in `InMemoryFileSystem` must keep only live path → inode entries; open FDs survive unlink via `FileDescription.inode` and inode-backed reads, not by leaving removed pathnames accessible -- Any open-FD file I/O path in `KernelImpl` must stay description-based (`readDescriptionFile` / `writeDescriptionFile` / `preadDescription`) rather than path-based VFS calls, or deferred-unlink behavior regresses for `pread`/`pwrite`-style operations -- `SocketTable.connect()` must accept sockets already in `bound` state so WasmVM/libc callers can bind first, then use `getsockname()`/`getpeername()` with stable local addresses -- When `SocketTable.bind()` assigns a kernel ephemeral port for `port: 0`, keep a `requestedEphemeralPort` marker on the socket so external `listen(..., { external: true })` can still delegate `port: 0` to the host adapter before rewriting `localAddr` to the real host-assigned port -- Signal-aware blocking socket waits should use `ProcessSignalState.signalWaiters` plus `deliverySeq/lastDeliveredFlags`; wire `SocketTable` with `getSignalState` from the shared `ProcessTable` instead of open-coding runtime-specific signal polling -- Non-blocking external socket connect should reject with `EINPROGRESS` immediately but leave the kernel socket in a transient `connecting` state and finish `hostAdapter.tcpConnect()` in the background -- WasmVM `host_net` socket/domain constants coming from wasi-libc bottom-half do not match `packages/core` socket constants; normalize them at the WasmVM driver boundary before calling `kernel.socketTable` -- WasmVM `host_net` socket option payloads cross the worker RPC boundary as little-endian byte buffers; decode/encode them in `packages/wasmvm/src/driver.ts` and keep `packages/wasmvm/src/kernel-worker.ts` as a thin memory marshal layer -- In `packages/wasmvm/src/kernel-worker.ts`, socket FDs must be allocated in the worker-local `FDTable` and mapped through `localToKernelFd` — returning raw kernel socket IDs collides with stdio FDs and breaks close/flush behavior -- Cooperative WasmVM signal delivery during `poll_oneoff` sleep needs a periodic hook back through RPC; pure `Atomics.wait()` sleeps do not observe pending kernel signals -- When adding bridge globals that are called directly from the bridge IIFE, update all three inventories together: `packages/*/src/bridge-contract.ts`, `packages/core/src/shared/global-exposure.ts`, and `native/v8-runtime/src/session.rs` (`SYNC_BRIDGE_FNS` / `ASYNC_BRIDGE_FNS`) -- In `native/v8-runtime`, sync bridge calls must only consume `BridgeResponse` frames for their own `call_id`; defer mismatched responses back to the session event loop or sync calls will steal async promise results -- Host-side loopback access for sandbox HTTP servers is gated through `createDefaultNetworkAdapter().__setLoopbackPortChecker(...)`; keep the checker aligned with the active kernel-backed HTTP server set rather than reviving driver-level owned-port maps -- Standalone `NodeExecutionDriver` already provisions an internal `SocketTable` with `createNodeHostNetworkAdapter()`; do not reintroduce `NetworkAdapter.httpServerListen/httpServerClose` for loopback server tests — use sandbox `http.createServer()` plus `initialExemptPorts` or the loopback checker hook when a host-side request must reach the sandbox listener -- Node's default network adapter exposes an internal `__setLoopbackPortChecker` hook; NodeExecutionDriver must wire it before `wrapNetworkAdapter()` so host-side fetch/httpRequest can reach kernel-owned loopback listeners without reviving `ownedServerPorts` -- For new Node bridge operations that need kernel-backed host state but not a new native bridge function, route them through `_loadPolyfill` `__bd:` dispatch handlers; reserve new runtime globals for host-to-isolate event dispatch like `_timerDispatch` -- Kernel implementation lives in packages/core/src/kernel/ — KernelImpl is the main class -- UDP and TCP use separate binding maps in SocketTable (listeners for TCP, udpBindings for UDP) — same port can be used by both protocols -- Kernel tests go in packages/core/test/kernel/ -- WasmVM WASI extensions are declared in native/wasmvm/crates/wasi-ext/src/lib.rs -- C sysroot patches for WasmVM are in native/wasmvm/patches/wasi-libc/ -- WasmVM kernel worker is packages/wasmvm/src/kernel-worker.ts, driver is packages/wasmvm/src/driver.ts -- Node.js bridge is in packages/nodejs/src/bridge/, driver in packages/nodejs/src/driver.ts -- Bridge handlers not in the Rust V8 SYNC_BRIDGE_FNS array are dispatched through _loadPolyfill via BRIDGE_DISPATCH_SHIM in execution-driver.ts -- To add new bridge globals: (1) add key to HOST_BRIDGE_GLOBAL_KEYS in bridge-contract.ts, (2) add handler to dispatch handlers in execution-driver.ts, (3) use _globalName.applySyncPromise(undefined, args) in bridge code -- FD table is managed on the host side via kernel ProcessFDTable (FDTableManager from @secure-exec/core) — bridge/fs.ts delegates FD ops through bridge dispatch -- After modifying bridge/fs.ts, run `pnpm turbo run build --filter=@secure-exec/nodejs` to rebuild the bridge IIFE before running tests -- Node conformance tests are in packages/secure-exec/tests/node-conformance/ -- PATCHED_PROGRAMS in native/wasmvm/c/Makefile must include programs using host_process or host_net imports -- DnsCache is in packages/core/src/kernel/dns-cache.ts, exported from index.ts; uses lazy TTL expiry on lookup -- Use vitest for tests, pnpm for package management, turbo for builds -- The spec for this work is at docs-internal/specs/kernel-consolidation.md -- WaitHandle and WaitQueue are exported from packages/core/src/kernel/wait.ts and re-exported from index.ts -- Run tests from repo root with: pnpm vitest run -- Run typecheck from package dir with: pnpm tsc --noEmit -- InodeTable is in packages/core/src/kernel/inode-table.ts, exported from index.ts -- Host adapter interfaces (HostNetworkAdapter, HostSocket, etc.) are in packages/core/src/kernel/host-adapter.ts, type-exported from index.ts -- SocketTable is in packages/core/src/kernel/socket-table.ts, exported from index.ts along with KernelSocket type and socket constants (AF_INET, SOCK_STREAM, etc.) -- SocketTable has a private `listeners` Map (addr key → socket ID) for port reservation and routing; addrKey() is exported for address key formatting -- findListener() checks exact match first, then wildcard 0.0.0.0 and :: — used by connect() for loopback routing -- findBoundUdp() is public on SocketTable — same lookup pattern as findListener but for UDP bindings; used by tests to poll for UDP server readiness -- EADDRINUSE was added to KernelErrorCode in types.ts for socket address conflicts -- connect() creates a server-side socket paired via peerId and queues it in listener's backlog; send/recv use peerId to route data -- destroySocket() clears peerId on peer and wakes its readWaiters for EOF propagation -- consumeFromBuffer() handles partial chunk reads for recv() with maxBytes limit -- ECONNREFUSED and ENOTCONN were added to KernelErrorCode in types.ts -- Half-close uses peerWriteClosed flag on KernelSocket — shutdown('write') sets it on the peer, recv() checks it for EOF detection -- State composition: shutdown methods check current state (read-closed/write-closed) and transition to closed when both halves are shut -- Socket options use optKey(level, optname) → "level:optname" composite keys in the options Map; use setsockopt/getsockopt methods, not direct Map access -- Socket flags (MSG_PEEK, MSG_DONTWAIT, MSG_NOSIGNAL) are bitmask values matching Linux constants; use bitwise AND to check -- SocketTable accepts optional `networkCheck` in constructor for permission enforcement; loopback connect always bypasses checks -- KernelSocket has `external?: boolean` flag for tracking host-adapter-connected sockets (used by send() permission check) -- SocketTable accepts optional `hostAdapter` (HostNetworkAdapter) in constructor for external connection routing -- connect() is async (returns Promise) — all existing tests must use await; loopback path is synchronous inside the async function -- External sockets have `hostSocket?: HostSocket` on KernelSocket — send() writes to hostSocket, a background read pump feeds readBuffer -- destroySocket() calls hostSocket.close() for external sockets -- Mock host adapter pattern: MockHostSocket with pushData()/pushEof() for controlling read pump in tests -- MockHostListener with pushConnection() for simulating incoming external TCP connections in tests -- bind() is async (Promise) like connect() and listen() — all callers must await; sync throw tests use .rejects.toThrow() -- SocketTable accepts optional `vfs` (VirtualFileSystem) in constructor for Unix domain socket file management -- InMemoryFileSystem.chmod() accepts explicit type bits (e.g. S_IFSOCK | 0o755) — if mode & 0o170000 is non-zero, type bits are used directly -- listen() is async (Promise) — all callers must use await; expect(...).toThrow must become await expect(...).rejects.toThrow -- resource-exhaustion.test.ts and kernel-integration.test.ts stdin streaming tests have pre-existing flaky failures — not related to socket work -- Net socket bridge handlers support kernel routing via optional socketTable + pid deps; fallback to direct net.Socket when not provided -- KernelOptions accepts optional hostNetworkAdapter — wired to SocketTable for external connection routing -- KernelInterface exposes socketTable — available to runtime drivers via init(kernel) callback -- SocketTable.close() requires BOTH socketId AND pid for per-process ownership check -- NodeExecutionDriverOptions accepts optional socketTable + pid for kernel socket routing -- NetworkAdapter interface no longer has netSocket* methods — bridge handlers handle all TCP socket operations -- buildNetworkBridgeHandlers returns { handlers, dispose } (NetworkBridgeResult) — kernel HTTP servers need async cleanup -- http.Server + emit('connection', duplexStream) pattern feeds kernel socket data through Node HTTP parser without real TCP -- KernelSocketDuplex wraps kernel sockets as stream.Duplex — needs socket-like props (remoteAddress, setNoDelay, etc.) for http module -- SSRF loopback exemption uses socketTable.findListener() — kernel-aware, no manual port tracking needed -- assertNotPrivateHost/isPrivateIp/isLoopbackHost are in bridge-handlers.ts for kernel-aware SSRF validation -- processTable exposed on KernelInterface — wired through execution-driver to bridge handlers -- wrapAsDriverProcess() adapts SpawnedProcess to kernel DriverProcess (adds null callback stubs) -- childProcessInstances Map in bridge/child-process.ts is event routing only — kernel tracks process state -- WasmVM socket ops route through kernel.socketTable (create/connect/send/recv/close) — hostAdapter handles real TCP -- WasmVM TLS-upgraded sockets bypass kernel recv via _tlsSockets Map — TLS upgrade detaches kernel read pump -- WaitHandle timeout goes in WaitQueue.enqueue(timeoutMs), not WaitHandle.wait() — wait() takes no args -- Test mock kernel: createMockKernel() with SocketTable + TestHostSocket using real node:net — in packages/wasmvm/test/net-socket.test.ts -- Cooperative signal delivery: driver piggybacking via SIG_IDX_PENDING_SIGNAL in SAB, worker calls __wasi_signal_trampoline -- proc_sigaction RPC: action 0=SIG_DFL, 1=SIG_IGN, 2=user handler (C side holds function pointer) -- C sysroot signal handling: signal() + __wasi_signal_trampoline in 0011-sigaction.patch -- Kernel public API: Kernel interface has no kill(pid,signal) — use ManagedProcess.kill() from spawn(), or kernel.processTable internally - -## 2026-03-24 22:12 PDT - US-050 -- What was implemented -- Added synchronous open-time flag handling in `KernelImpl.fdOpen()` for `O_CREAT`, `O_EXCL`, and `O_TRUNC`, with wrapper passthroughs in the device and permission layers -- Added `prepareOpenSync()` support to the in-memory and Node-backed VFS adapters so `fdOpen()` can create empty files, reject `O_CREAT|O_EXCL` on existing paths, and truncate existing files before the descriptor is allocated -- Added kernel integration coverage for `O_CREAT|O_EXCL`, `O_TRUNC`, `O_TRUNC|O_CREAT`, and the `O_EXCL`-without-`O_CREAT` no-op case; updated the kernel contract and root agent instructions with the sync-open rule -- Files changed -- `.agent/contracts/kernel.md` -- `CLAUDE.md` -- `packages/core/src/kernel/device-layer.ts` -- `packages/core/src/kernel/kernel.ts` -- `packages/core/src/kernel/permissions.ts` -- `packages/core/src/shared/in-memory-fs.ts` -- `packages/core/test/kernel/helpers.ts` -- `packages/core/test/kernel/kernel-integration.test.ts` -- `packages/nodejs/src/driver.ts` -- `packages/nodejs/src/module-access.ts` -- `packages/nodejs/src/os-filesystem.ts` -- `scripts/ralph/prd.json` -- `scripts/ralph/progress.txt` -- **Learnings for future iterations:** -- Patterns discovered -- `fdOpen()` now depends on `prepareOpenSync()` passthroughs; if a filesystem gets wrapped and drops that hook, `O_CREAT`/`O_EXCL`/`O_TRUNC` will silently regress back to lazy-open behavior -- Gotchas encountered -- Once `O_CREAT` starts materializing files at open time, deferred umask handling can no longer key off a read miss in `vfsWrite()`; it has to key off the descriptor’s `creationMode` marker instead -- Useful context -- Validation for this story passed with `pnpm tsc --noEmit -p packages/core/tsconfig.json`, `pnpm tsc --noEmit -p packages/nodejs/tsconfig.json`, `pnpm vitest run packages/core/test/kernel/kernel-integration.test.ts -t "O_CREAT|O_EXCL|O_TRUNC|umask"`, `pnpm vitest run packages/core/test/kernel/inode-table.test.ts`, and `pnpm vitest run packages/core/test/kernel/unix-socket.test.ts` ---- - -## 2026-03-25 00:09 PDT - US-057 -- What was implemented -- Fixed native V8 ESM top-level-await finalization so entry modules stay pending until their evaluation promise settles, including timer-driven async startup and transitive async imports -- Added native dynamic `import()` handling for ESM via V8's host dynamic-import callback, reusing the existing module resolver/cache and mapping async evaluation back to the imported module namespace -- Fixed native stream-event payload decoding to accept raw UTF-8 JSON/string payloads so kernel timer callbacks reach `_timerDispatch`, then added focused sandbox runtime-driver coverage for entrypoint TLA, transitive imported-module TLA, dynamic-import TLA, and timeout behavior -- Files changed -- `.agent/contracts/node-runtime.md` -- `docs-internal/friction.md` -- `native/v8-runtime/src/execution.rs` -- `native/v8-runtime/src/isolate.rs` -- `native/v8-runtime/src/session.rs` -- `native/v8-runtime/src/stream.rs` -- `packages/secure-exec/tests/runtime-driver/node/index.test.ts` -- `scripts/ralph/prd.json` -- `scripts/ralph/progress.txt` -- **Learnings for future iterations:** -- Patterns discovered -- Native V8 async ESM completion is a two-part problem: keep the entry-module promise alive across the session event loop, and keep module-resolution state alive long enough for later native dynamic imports to reuse the same bridge context/cache -- Host-to-isolate timer events are emitted as raw JSON bytes, not V8-serialized values; the native stream dispatcher has to parse both formats or TLA/timer flows will hang waiting for `_timerDispatch` -- Gotchas encountered -- The `v8` crate version in this workspace expects `set_host_import_module_dynamically_callback` handlers with a `HandleScope` signature, not the newer `Context`-first callback shape shown in newer crate docs -- Useful context -- Focused green checks for this story were `pnpm tsc --noEmit -p packages/secure-exec/tsconfig.json`, `cargo test execution::tests::v8_consolidated_tests -- --nocapture`, `cargo build --release` in `native/v8-runtime`, and `pnpm exec vitest run packages/secure-exec/tests/runtime-driver/node/index.test.ts -t "dynamic import|top-level await"` ---- - -## 2026-03-23 - US-001 -- Implemented WaitHandle and WaitQueue primitives in packages/core/src/kernel/wait.ts -- WaitHandle: Promise-based wait/wake with optional timeout, timedOut flag, isSettled guard -- WaitQueue: FIFO queue with enqueue(), wakeOne(), wakeAll(), pending count, clear() -- Files changed: packages/core/src/kernel/wait.ts (new), packages/core/src/kernel/index.ts (exports), packages/core/test/kernel/wait-queue.test.ts (new, 13 tests) -- **Learnings for future iterations:** - - Kernel managers follow a consistent pattern: private state Maps, KernelError throws, public methods - - Existing waiter pattern in PipeManager/ProcessTable uses raw resolver arrays — WaitQueue provides the unified replacement - - git add must be run from repo root, not a subdirectory - - Typecheck for core package: `cd packages/core && pnpm tsc --noEmit` ---- - -## 2026-03-23 - US-002 -- Implemented InodeTable with refcounting and deferred unlink in packages/core/src/kernel/inode-table.ts -- Inode struct: ino, nlink, openRefCount, mode, uid, gid, size, timestamps -- InodeTable: allocate, get, incrementLinks/decrementLinks, incrementOpenRefs/decrementOpenRefs, shouldDelete, delete -- Deferred deletion: nlink=0 with open FDs keeps inode alive until last FD closes -- Files changed: packages/core/src/kernel/inode-table.ts (new), packages/core/src/kernel/index.ts (exports), packages/core/test/kernel/inode-table.test.ts (new, 17 tests) -- **Learnings for future iterations:** - - InodeTable and Inode are exported from index.ts (InodeTable as value, Inode as type) - - Inode starts with nlink=1 on allocate (matching POSIX: creating a file = one directory entry) - - ctime is updated on link/unlink operations per POSIX - - KernelError codes available: ENOENT for missing inode, EINVAL for underflow guards ---- - -## 2026-03-23 - US-003 -- Implemented HostNetworkAdapter, HostSocket, HostListener, HostUdpSocket, DnsResult interfaces in packages/core/src/kernel/host-adapter.ts -- Added type exports to packages/core/src/kernel/index.ts -- Files changed: packages/core/src/kernel/host-adapter.ts (new), packages/core/src/kernel/index.ts (exports) -- **Learnings for future iterations:** - - Host adapter interfaces are type-only exports (no runtime code) — they live in kernel/host-adapter.ts - - DnsResult is a separate interface (address + family: 4|6) used by dnsLookup - - HostSocket.read() returns null for EOF, matching the kernel recv() convention - - HostListener.port is readonly — needed for ephemeral port (port 0) allocation ---- - -## 2026-03-23 - US-004 -- Implemented KernelSocket struct and SocketTable class in packages/core/src/kernel/socket-table.ts -- KernelSocket: id, domain, type, protocol, state, nonBlocking, localAddr, remoteAddr, options, pid, readBuffer, readWaiters, backlog, acceptWaiters, peerId -- SocketTable: create, get, close, poll, closeAllForProcess, disposeAll -- Per-process isolation: close checks pid ownership -- EMFILE limit: configurable maxSockets (default 1024) -- Socket address types: InetAddr, UnixAddr, SockAddr with type guards -- Files changed: packages/core/src/kernel/socket-table.ts (new), packages/core/src/kernel/index.ts (exports), packages/core/test/kernel/socket-table.test.ts (new, 23 tests) -- **Learnings for future iterations:** - - SocketTable follows the same pattern as InodeTable: private Map, nextId counter, requireSocket helper - - Socket state is mutable on the KernelSocket interface — higher-level operations (bind/listen/connect) set it directly - - KernelErrorCode type in types.ts needs EADDRINUSE, ECONNREFUSED, ECONNRESET, ENOTCONN, ENOTSOCK for later stories - - WaitQueue from wait.ts is used for readWaiters and acceptWaiters — close wakes all pending waiters - - backlog stores socket IDs (not KernelSocket objects) for later accept() implementation ---- - -## 2026-03-23 - US-005 -- Implemented bind(), listen(), accept(), findListener() on SocketTable -- Added private `listeners` Map for port reservation and routing -- Added EADDRINUSE to KernelErrorCode -- destroySocket now cleans up listener registrations; disposeAll clears listeners -- Wildcard address matching: findListener checks exact, then 0.0.0.0, then :: for the port -- EADDRINUSE checks wildcard overlap (0.0.0.0:P conflicts with 127.0.0.1:P and vice versa) -- SO_REUSEADDR on the binding socket bypasses EADDRINUSE -- addrKey() exported as module-level helper for "host:port" or unix path keys -- Files changed: packages/core/src/kernel/types.ts (EADDRINUSE), packages/core/src/kernel/socket-table.ts (bind/listen/accept/findListener), packages/core/src/kernel/index.ts (addrKey export), packages/core/test/kernel/socket-table.test.ts (21 new tests, 44 total) -- **Learnings for future iterations:** - - bind() registers in listeners map immediately (not just on listen) — this is for port reservation - - findListener() only matches sockets in 'listening' state, not just 'bound' - - isAddrInUse scans all listeners for wildcard overlap — O(n) but listener count is small - - accept() returns socket IDs from backlog; connect() (US-006) will push to backlog - - Tests can simulate backlog by directly pushing to socket.backlog array ---- - -## 2026-03-23 - US-006 -- Implemented loopback TCP routing: connect(), send(), recv() on SocketTable -- connect() finds listener via findListener(), creates paired server-side socket via peerId, queues in backlog -- send() writes to peer's readBuffer, wakes readWaiters -- recv() consumes from readBuffer with maxBytes limit, returns null for EOF (peer gone) or no data -- destroySocket() propagates EOF by clearing peerId on peer and waking readWaiters -- Added ECONNREFUSED and ENOTCONN to KernelErrorCode -- Files changed: packages/core/src/kernel/types.ts (ECONNREFUSED, ENOTCONN), packages/core/src/kernel/socket-table.ts (connect/send/recv/consumeFromBuffer, updated destroySocket), packages/core/test/kernel/loopback.test.ts (new, 21 tests) -- **Learnings for future iterations:** - - send() copies data (new Uint8Array(data)) to prevent caller mutations affecting kernel buffers - - consumeFromBuffer() handles partial chunk reads — splits a chunk if it exceeds maxBytes and puts remainder back - - EOF detection in recv: peerId === undefined means peer closed; readBuffer empty + peerId undefined → return null - - connect() creates the server-side socket with listener.pid as owner — the process that calls accept() gets that socket - - Tests should run from repo root: `pnpm vitest run `, not from package dir ---- - -## 2026-03-23 - US-007 -- Implemented shutdown() with half-close support on SocketTable -- shutdown('write'): sets peer's peerWriteClosed flag, peer recv() returns EOF, local send() returns EPIPE -- shutdown('read'): discards readBuffer, local recv() returns EOF immediately, local send() still works -- shutdown('both'): combines both, transitions to 'closed' -- Sequential half-close: read-closed + shutdown('write') → closed, write-closed + shutdown('read') → closed -- Updated send() to check write-closed/closed states before ENOTCONN -- Updated recv() to return null immediately for read-closed/closed states and check peerWriteClosed for EOF -- Updated poll() to reflect half-close: write-closed → writable=false, read-closed → writable=true -- Added peerWriteClosed flag to KernelSocket for tracking peer write shutdown without destroying the socket -- Files changed: packages/core/src/kernel/socket-table.ts (shutdown, shutdownWrite, shutdownRead, updated send/recv/poll, peerWriteClosed), packages/core/test/kernel/socket-shutdown.test.ts (new, 17 tests) -- **Learnings for future iterations:** - - Half-close needs a separate flag (peerWriteClosed) because the peer socket still exists — peerId check alone won't detect write shutdown - - shutdown('write') + shutdown('read') must compose: each checks current state and transitions to 'closed' if the other half is already closed - - send() must check write-closed/closed BEFORE checking connected — order matters for correct error code (EPIPE vs ENOTCONN) - - recv() on read-closed returns null without checking buffer — shutdown('read') discards unread data ---- - -## 2026-03-23 - US-008 -- Implemented socketpair() on SocketTable — creates two pre-connected sockets linked via peerId -- Both sockets start in 'connected' state, reusing existing send/recv/close/shutdown data paths -- Files changed: packages/core/src/kernel/socket-table.ts (socketpair method), packages/core/test/kernel/socketpair.test.ts (new, 13 tests) -- **Learnings for future iterations:** - - socketpair() is much simpler than connect() — no listener lookup, just create two sockets and cross-link peerId - - All existing send/recv/close/shutdown logic works unchanged for socketpair — the peerId linking is the only mechanism needed - - EMFILE limit applies to socketpair too — creating 2 sockets at once can exceed the limit after the first succeeds ---- - -## 2026-03-23 - US-009 -- Implemented setsockopt() and getsockopt() methods on SocketTable -- Added socket option constants: SOL_SOCKET, IPPROTO_TCP, SO_REUSEADDR, SO_KEEPALIVE, SO_RCVBUF, SO_SNDBUF, TCP_NODELAY - -- Added optKey() helper for canonical "level:optname" option keys -- Enforced SO_RCVBUF: send() throws EAGAIN when peer's readBuffer exceeds the limit -- Updated isAddrInUse() to use the new optKey format for SO_REUSEADDR check -- Updated existing tests that set SO_REUSEADDR directly on the options Map to use setsockopt() -- Files changed: packages/core/src/kernel/socket-table.ts (setsockopt/getsockopt, optKey, SO_RCVBUF enforcement, constants), packages/core/src/kernel/index.ts (new exports), packages/core/test/kernel/socket-table.test.ts (10 new tests, 54 total) -- **Learnings for future iterations:** - - Socket options use composite "level:optname" keys in the options Map — use optKey() helper, not raw string keys - - SO_RCVBUF enforcement is in send() on the peer socket, not recv() on the local socket — the peer's receive buffer is what gets checked - - When changing internal option key format, search all test files for direct options Map usage and update them - - resource-exhaustion.test.ts has pre-existing flaky failures unrelated to socket work ---- - -## 2026-03-23 - US-010 -- Implemented MSG_PEEK, MSG_DONTWAIT, MSG_NOSIGNAL socket flags -- MSG_PEEK: peekFromBuffer() reads data without consuming — returns a copy so mutations don't affect the buffer -- MSG_DONTWAIT: throws EAGAIN when no data available (but still returns null for EOF) -- MSG_NOSIGNAL: suppresses SIGPIPE — throws EPIPE with MSG_NOSIGNAL marker in message -- Flags are bitmask-combined (MSG_PEEK | MSG_DONTWAIT works) -- Files changed: packages/core/src/kernel/socket-table.ts (MSG constants, peekFromBuffer, recv/send flag handling), packages/core/src/kernel/index.ts (exports), packages/core/test/kernel/socket-flags.test.ts (new, 13 tests) -- **Learnings for future iterations:** - - peekFromBuffer() must return a copy (new Uint8Array) not a subarray reference — otherwise callers can corrupt the kernel buffer - - MSG_DONTWAIT should only throw EAGAIN when no data AND no EOF condition — EOF still returns null - - Linux MSG_* flag values: MSG_PEEK=0x2, MSG_DONTWAIT=0x40, MSG_NOSIGNAL=0x4000 — match Linux constants for compatibility ---- -## 2026-03-24 22:20 PDT - US-051 -- Implemented blocking advisory `flock()` with per-path `WaitQueue`s and bounded timed waits in `FileLockManager` -- Converted kernel `flock` to async `Promise` semantics and updated the core kernel contract for blocking/FIFO lock behavior -- Added coverage for blocking unlock wakeup, `LOCK_NB` conflict handling, FIFO waiter ordering, and adjusted kernel integration to keep the mock process alive while awaiting lock operations -- Files changed: `.agent/contracts/kernel.md`, `packages/core/src/kernel/file-lock.ts`, `packages/core/src/kernel/kernel.ts`, `packages/core/src/kernel/types.ts`, `packages/core/test/kernel/file-lock.test.ts` -- **Learnings for future iterations:** - - Async kernel syscalls can expose existing test timing races; `MockRuntimeDriver` needs `neverExit: true` when a test awaits multiple operations against the same PID - - For indefinite kernel waits, use timed `WaitQueue.enqueue(timeoutMs)` retries instead of a single forever-pending Promise so WasmVM/bridge callers can re-check state safely - - File-lock waiter wakeups must happen on all last-reference release paths (`LOCK_UN`, `fdClose`, `dup2` replacement, process-exit cleanup) because the kernel funnels them through `releaseByDescription()` - - `KernelInterface.flock()` now returns a `Promise`; direct tests and future bridge callers must `await` it even when the lock is uncontended ---- - -## 2026-03-23 - US-011 -- Implemented network permission checks in SocketTable: checkNetworkPermission() public method, wired into connect(), listen(), and send() ---- - -- connect() to loopback (kernel listener) always bypasses permission checks; external addresses check against configured policy -- listen() checks permission when networkCheck is configured -- send() checks permission for sockets marked as external (external flag on KernelSocket) -- Added `external?: boolean` to KernelSocket interface for host-adapter-connected socket tracking -- Files changed: packages/core/src/kernel/socket-table.ts (networkCheck option, checkNetworkPermission, connect/listen/send permission checks, external flag), packages/core/test/kernel/network-permissions.test.ts (new, 17 tests) -- **Learnings for future iterations:** - - SocketTable accepts `networkCheck` in constructor options — when set, listen() and external connect() are permission-checked - - Loopback connect (findListener returns a match) always bypasses permission — this is by design per spec - - When no networkCheck is configured, existing behavior is preserved (no enforcement) — backwards compatible - - Tests that need loopback with restricted policy must allow "listen" op but deny "connect" — denyAll breaks listener setup - - The `external` flag on KernelSocket will be set by US-012 (host adapter routing) — for now it's only used in tests - - resource-exhaustion.test.ts has pre-existing flaky failures — not related to socket/permission work ---- - -## 2026-03-23 - US-012 -- Implemented external connection routing via host adapter in SocketTable -- connect() is now async (Promise) — loopback path remains synchronous, external path awaits hostAdapter.tcpConnect() -- External sockets store hostSocket on KernelSocket; send() writes to hostSocket, background read pump feeds readBuffer -- destroySocket() calls hostSocket.close() for external sockets; closeAllForProcess propagates -- Permission check runs before host adapter call; loopback still bypasses -- Added MockHostSocket and MockHostNetworkAdapter for testing external connections -- Updated all existing test files to use async/await for connect() calls -- Files changed: packages/core/src/kernel/socket-table.ts (hostAdapter option, async connect, hostSocket on KernelSocket, send relay, startReadPump, destroySocket cleanup), packages/core/test/kernel/external-connect.test.ts (new, 14 tests), packages/core/test/kernel/loopback.test.ts (async), packages/core/test/kernel/network-permissions.test.ts (async), packages/core/test/kernel/socket-flags.test.ts (async), packages/core/test/kernel/socket-shutdown.test.ts (async), packages/core/test/kernel/socket-table.test.ts (async) -- **Learnings for future iterations:** - - Making connect() async is a breaking API change — all callers across test files must add await, test callbacks must be async - - In async functions, ALL throws become rejected Promises — try/catch without await won't catch errors; use `await expect(...).rejects.toThrow()` pattern - - The read pump runs as a fire-and-forget async loop — use pushData()/pushEof() on MockHostSocket to control timing -- When testing chunk ordering with the read pump, recv() with exact maxBytes is more reliable than assuming chunks arrive separately -- send() for external sockets fire-and-forgets the hostSocket.write() — errors are caught asynchronously and mark the socket broken ---- - -## 2026-03-24 21:39 PDT - US-048 -- Wired `KernelImpl` to own a shared `InodeTable`, bind it into `InMemoryFileSystem`, and keep open-file access alive after unlink by storing inode identity on `FileDescription` -- Refactored `packages/core/src/shared/in-memory-fs.ts` to use live path-to-inode maps plus inode-backed file storage so `stat()` returns real `ino`/`nlink`, hard links share inode state, and unlink removes pathnames without discarding open file data -- Added integration coverage in `packages/core/test/kernel/inode-table.test.ts` for real inode stats, deferred unlink with open FDs, last-close deletion, and hard-link `nlink` parity -- Updated the kernel contract and repo instructions with the deferred-unlink inode rule -- Files changed: `.agent/contracts/kernel.md`, `CLAUDE.md`, `packages/core/src/kernel/kernel.ts`, `packages/core/src/kernel/types.ts`, `packages/core/src/shared/in-memory-fs.ts`, `packages/core/test/kernel/inode-table.test.ts` -- Quality checks: `pnpm tsc --noEmit` passed in `packages/core`; `pnpm vitest run test/kernel/inode-table.test.ts` passed; full `pnpm vitest run` in `packages/core` failed in pre-existing `test/kernel/resource-exhaustion.test.ts` (`PTY adversarial stress > single large write (1MB+) — immediate EAGAIN, no partial buffering`, assertion at line 270) -- **Learnings for future iterations:** - - Deferred unlink must never keep removed pathnames reachable — regular path lookups should fail immediately, and only inode-backed FD I/O should survive until the last close - - Rebinding an existing `InMemoryFileSystem` into `KernelImpl` needs inode-table migration for pre-populated filesystems, because many tests create and seed the VFS before constructing the kernel - - Any kernel path that can implicitly close an FD (`fdClose`, `dup2`, stdio override cleanup, process-exit table teardown) must release inode open refs when the last shared `FileDescription` reference drops ---- - -## 2026-03-24 21:43 PDT - US-048 -- Patched `KernelImpl.fdPwrite()` to use inode-backed description helpers so positional writes still work after the pathname has been unlinked -- Added a regression test proving `fdPwrite` + `fdPread` continue to work on an unlinked open file while the path stays absent from the VFS -- Files changed: `packages/core/src/kernel/kernel.ts`, `packages/core/test/kernel/inode-table.test.ts`, `scripts/ralph/progress.txt` -- Quality checks: `pnpm tsc --noEmit` passed in `packages/core`; `pnpm vitest run test/kernel/inode-table.test.ts` passed; full `pnpm vitest run` in `packages/core` still fails in pre-existing `test/kernel/resource-exhaustion.test.ts` (`PTY adversarial stress > single large write (1MB+) — immediate EAGAIN, no partial buffering`, assertion at line 270) -- **Learnings for future iterations:** - - Deferred-unlink support is only correct if every FD-based read and write path goes through the `FileDescription.inode` helpers; a single direct `vfs.readFile`/`vfs.writeFile` call reintroduces pathname dependence - - Focused inode tests can pass while the broader package suite remains blocked by the unrelated PTY stress regression, so keep the full-suite command/result in the log for handoff clarity ---- - -## 2026-03-24 21:22 PDT - US-047 -- What was implemented -- Added `SocketTable.getLocalAddr()` / `getRemoteAddr()` and allowed `connect()` from `bound` sockets so bound clients can use address accessors cleanly -- Wired WasmVM address accessors end to end: `wasi-ext` host imports/wrappers, worker `host_net` handlers, driver RPC handlers, and libc `getsockname()` / `getpeername()` patching -- Added kernel/WasmVM tests plus `syscall_coverage` parity coverage entries for the new libc socket address calls -- Files changed -- `packages/core/src/kernel/socket-table.ts` -- `packages/core/test/kernel/socket-table.test.ts` -- `packages/wasmvm/src/driver.ts` -- `packages/wasmvm/src/kernel-worker.ts` -- `packages/wasmvm/test/net-socket.test.ts` -- `packages/wasmvm/test/c-parity.test.ts` -- `native/wasmvm/crates/wasi-ext/src/lib.rs` -- `native/wasmvm/patches/wasi-libc/0008-sockets.patch` -- `native/wasmvm/c/programs/syscall_coverage.c` -- `prd.json` -- **Learnings for future iterations:** -- Bound-client connect is required for libc parity: `getsockname()` on a client socket is only meaningful if `connect()` preserves a prior `bind()` -- The WasmVM address-accessor path should reuse the existing serialized address format (`host:port` or unix path) so libc parsing can keep using the shared `string_to_sockaddr()` helper -- When adding a new `host_net` import, update all four layers together: `wasi-ext` externs/wrappers, `kernel-worker` imports, `driver` RPC handlers, and the wasi-libc patch -- `syscall_coverage` is the right place to add libc-level parity checks for new WASM host imports, and `packages/wasmvm/test/c-parity.test.ts` must list the new expected markers ---- - -## 2026-03-23 - US-013 - -## 2026-03-24 21:04 PDT - US-045 -- What was implemented -- Enforced socket-level non-blocking behavior in `SocketTable`: empty `accept()` and `recv()` now fail with `EAGAIN` when `nonBlocking` is enabled -- Added `SocketTable.setNonBlocking()` as the explicit toggle API for existing sockets -- Made external non-blocking `connect()` reject with `EINPROGRESS` while the host adapter connection completes asynchronously in the background -- Added focused tests for non-blocking `recv`, non-blocking `accept`, non-blocking external `connect`, and toggling the socket mode -- Updated the kernel contract with the new non-blocking socket semantics -- Files changed -- `.agent/contracts/kernel.md` -- `packages/core/src/kernel/socket-table.ts` -- `packages/core/src/kernel/types.ts` -- `packages/core/test/kernel/external-connect.test.ts` -- `packages/core/test/kernel/socket-flags.test.ts` -- `scripts/ralph/prd.json` -- `scripts/ralph/progress.txt` -- **Learnings for future iterations:** -- Patterns discovered -- Non-blocking socket mode is best modeled as per-socket state in `SocketTable`; `MSG_DONTWAIT` remains a per-call override layered on top -- Gotchas encountered -- Because `SocketTable.connect()` is async, returning `EINPROGRESS` for non-blocking external connects means rejecting the call immediately while separately completing the host connect path in a background promise -- Useful context -- Focused validation for this story is `pnpm vitest run packages/core/test/kernel/socket-flags.test.ts packages/core/test/kernel/external-connect.test.ts packages/core/test/kernel/socket-table.test.ts` and `pnpm tsc --noEmit -p packages/core/tsconfig.json` ---- -- Implemented external server socket routing via host adapter in SocketTable -- listen() is now async (Promise) with optional `{ external: true }` parameter -- When external: calls hostAdapter.tcpListen(), stores HostListener on KernelSocket, starts accept pump -- Accept pump loops on hostListener.accept(), creates kernel sockets for each incoming connection, starts read pumps -- Ephemeral port (port 0) updates localAddr and re-registers in listeners map with actual port from HostListener.port -- destroySocket() calls hostListener.close() for external listeners; disposeAll() also cleans up host listeners -- Updated all existing test files to use async/await for listen() calls (same pattern as connect() in US-012) -- Files changed: packages/core/src/kernel/socket-table.ts (async listen, hostListener on KernelSocket, startAcceptPump, destroySocket/disposeAll cleanup), packages/core/test/kernel/external-listen.test.ts (new, 14 tests), packages/core/test/kernel/socket-table.test.ts (async listen), packages/core/test/kernel/loopback.test.ts (async), packages/core/test/kernel/socket-flags.test.ts (async), packages/core/test/kernel/socket-shutdown.test.ts (async), packages/core/test/kernel/external-connect.test.ts (async), packages/core/test/kernel/network-permissions.test.ts (async) -- **Learnings for future iterations:** - - Making listen() async follows the same pattern as connect() — all callers need await, sync throw tests need .rejects.toThrow() - - MockHostListener.pushConnection() simulates incoming connections; pushData()/pushEof() on MockHostSocket controls data flow - - Ephemeral port 0 requires re-registering in the listeners map after getting the actual port from the host listener - - Accept pump is fire-and-forget like read pump — errors stop the pump silently (listener closed) - - disposeAll should iterate sockets and close both hostSocket and hostListener before clearing the maps ---- - -## 2026-03-23 - US-014 -- Implemented UDP datagram sockets (SOCK_DGRAM) in SocketTable -- sendTo(): loopback routing via findBoundUdp(), external routing via hostAdapter.udpSend(), silent drop for unbound ports -- recvFrom(): returns { data, srcAddr } with message boundary preservation, supports MSG_PEEK and MSG_DONTWAIT -- bindExternalUdp(): async setup for external UDP via hostAdapter.udpBind() with recv pump -- Separate udpBindings map from TCP listeners — TCP and UDP can share the same port -- UdpDatagram type, MAX_DATAGRAM_SIZE (65535), MAX_UDP_QUEUE_DEPTH (128) constants -- EMSGSIZE added to KernelErrorCode for oversized datagrams -- Updated poll() to check datagramQueue for UDP readability -- Updated destroySocket/disposeAll for hostUdpSocket cleanup and udpBindings cleanup -- Files changed: packages/core/src/kernel/types.ts (EMSGSIZE), packages/core/src/kernel/socket-table.ts (sendTo/recvFrom/bindExternalUdp/findBoundUdp/isUdpAddrInUse/startUdpRecvPump, udpBindings map, updated bind/poll/destroySocket/disposeAll), packages/core/src/kernel/index.ts (new exports), packages/core/test/kernel/udp-socket.test.ts (new, 25 tests) -- **Learnings for future iterations:** - - TCP and UDP must use separate binding maps (listeners vs udpBindings) because they are independent port namespaces — the same address key can exist in both - - findBoundUdp() matches sockets in 'bound' state (not 'listening') since UDP doesn't have a listen step - - UDP sendTo to unbound port returns data.length (not an error) — silent drop is correct UDP semantics - - Message boundary preservation: each sendTo = one datagramQueue entry; recvFrom pops one entry and truncates excess beyond maxBytes (unlike TCP which does partial chunk reads) - - External UDP pattern: bind() locally, then bindExternalUdp() creates the host UDP socket and starts a recv pump (startUdpRecvPump) — sendTo checks for hostUdpSocket before routing externally - - MockHostUdpSocket with pushDatagram() controls the recv pump in tests; use setTimeout(r, 10) to allow pump microtasks to run ---- - -## 2026-03-23 - US-015 -- Implemented Unix domain sockets (AF_UNIX) with VFS integration in SocketTable -- bind() with UnixAddr creates a socket file in VFS (S_IFSOCK mode), connect() checks VFS path exists -- SOCK_STREAM: full data exchange, half-close, EOF propagation — reuses existing loopback data paths -- SOCK_DGRAM: message boundary preservation via sendTo/recvFrom, silent drop for unbound paths -- Always in-kernel routing — no host adapter involvement for Unix sockets -- EADDRINUSE when path exists in VFS (including regular files, not just socket entries) -- ECONNREFUSED when socket file removed from VFS (even if listeners map still has entry) -- Modified InMemoryFileSystem.chmod() to support explicit file type bits (S_IFSOCK | perms) -- bind() is now async (Promise) — all existing test files updated with await -- Files changed: packages/core/src/kernel/socket-table.ts (VFS option, async bind, createSocketFile, connect VFS check, S_IFSOCK constant), packages/core/src/shared/in-memory-fs.ts (S_IFSOCK, chmod type bits), packages/core/src/kernel/index.ts (S_IFSOCK export), packages/core/test/kernel/unix-socket.test.ts (new, 14 tests), 8 existing test files (async bind migration) -- **Learnings for future iterations:** - - bind() is now async like connect() and listen() — all callers must use await; sync throw tests must use .rejects.toThrow() - - InMemoryFileSystem.chmod() supports caller-provided type bits: if mode & 0o170000 is non-zero, the type bits are used directly; otherwise existing behavior preserved - - VFS is optional for SocketTable — Unix sockets still work via listeners map alone; VFS adds socket file creation and path existence checks - - Unix domain sockets share the listeners map with TCP for SOCK_STREAM, and udpBindings map for SOCK_DGRAM — addrKey() uses the path string as the key - - connect() for Unix addresses checks VFS existence before listeners map — this means removing the socket file (vfs.removeFile) causes ECONNREFUSED even if the listener entry still exists ---- - -## 2026-03-23 - US-016 -- Exposed SocketTable as a public property on KernelImpl -- KernelImpl constructor creates SocketTable with VFS reference -- onProcessExit hook calls socketTable.closeAllForProcess(pid) to clean up sockets on process exit -- dispose() calls socketTable.disposeAll() before driver teardown -- Added 5 integration tests: expose check, create/close, dispose cleanup, process exit cleanup, loopback TCP -- Files changed: packages/core/src/kernel/types.ts (socketTable on Kernel interface), packages/core/src/kernel/kernel.ts (SocketTable import, property, constructor init, onProcessExit hook, dispose), packages/core/test/kernel/kernel-integration.test.ts (5 new tests) -- **Learnings for future iterations:** - - SocketTable.get() returns null (not undefined) for missing sockets — use toBeNull() in assertions - - Process exit cleanup chain: ProcessTable.markExited → onProcessExit callback → cleanupProcessFDs + socketTable.closeAllForProcess - - SocketTable constructor accepts { vfs } option — pass kernel's VFS for Unix domain socket file management - - dispose() order matters: terminateAll() first (triggers onProcessExit for each process), then disposeAll() for any remaining sockets, then driver teardown ---- - -## 2026-03-23 - US-017 -- Implemented TimerTable with per-process ownership, budget enforcement, and cross-process isolation -- KernelTimer struct: id, pid, delayMs, repeat, hostHandle, callback, cleared flag -- TimerTable: createTimer, clearTimer, get, getActiveTimers, countForProcess, setLimit, clearAllForProcess, disposeAll -- Budget enforcement: configurable defaultMaxTimers + per-process overrides via setLimit(); throws EAGAIN when exceeded -- Cross-process isolation: clearTimer with pid param rejects if caller doesn't own the timer (EACCES) -- Host scheduling delegation: hostHandle field on KernelTimer for callers to store setTimeout/setInterval handle -- Files changed: packages/core/src/kernel/timer-table.ts (new), packages/core/src/kernel/index.ts (exports), packages/core/test/kernel/timer-table.test.ts (new, 23 tests) -- **Learnings for future iterations:** - - TimerTable follows the same Map + nextId pattern as InodeTable and SocketTable - - Budget enforcement is inline in createTimer() — no separate enforceLimit() method needed; constructor option + setLimit() per-process override - - clearTimer without pid param allows unconditional clear (for kernel-internal cleanup); with pid enables cross-process isolation - - hostHandle is mutable on KernelTimer — callers set it after createTimer() returns, before the timer fires - - cleared flag lets callers check if a timer was cancelled (e.g., to skip callback invocation in the host scheduling loop) ---- - -## 2026-03-23 - US-018 -- Extended ProcessEntry with activeHandles (Map) and handleLimit (number, 0=unlimited) -- Added registerHandle(pid, id, description), unregisterHandle(pid, id), setHandleLimit(pid, limit), getHandles(pid) methods to ProcessTable -- Budget enforcement: registerHandle throws EAGAIN when activeHandles.size >= handleLimit (if limit > 0) -- Process exit cleanup: markExited() clears activeHandles before onProcessExit callback -- getHandles() returns a defensive copy to prevent external mutation of kernel state -- Files changed: packages/core/src/kernel/types.ts (ProcessEntry fields), packages/core/src/kernel/process-table.ts (handle methods + cleanup), packages/core/test/kernel/process-table.test.ts (13 new tests, 41 total) -- **Learnings for future iterations:** - - Handle tracking is simpler than TimerTable — no separate class needed, just Map fields on ProcessEntry + methods on ProcessTable - - EBADF is the right error for unknown handle IDs (not ENOENT) — consistent with FD error conventions - - Handle cleanup in markExited() must happen before onProcessExit callback to ensure consistent state for downstream cleanup hooks - - kernel-integration.test.ts has 2 pre-existing flaky stdin streaming failures unrelated to handle work ---- - -## 2026-03-23 - US-019 -- Implemented DnsCache class in packages/core/src/kernel/dns-cache.ts -- lookup(hostname, rrtype) returns cached DnsResult or null; expired entries return null and are lazily removed -- store(hostname, rrtype, result, ttlMs?) caches with TTL; uses configurable defaultTtlMs (30s) if not specified -- flush() clears all entries; size getter for entry count -- Cache key is "hostname:rrtype" composite string — distinguishes A vs AAAA for same hostname -- Files changed: packages/core/src/kernel/dns-cache.ts (new), packages/core/src/kernel/index.ts (exports), packages/core/test/kernel/dns-cache.test.ts (new, 16 tests) -- **Learnings for future iterations:** - - DnsCache is simpler than other kernel tables — no per-process ownership, no KernelError throws, just a TTL Map - - DnsResult type is imported from host-adapter.ts (address: string, family: 4|6) - - Lazy expiry: expired entries are removed on lookup, not by a background timer — keeps implementation simple - - vi.useFakeTimers()/vi.advanceTimersByTime() is the pattern for testing time-dependent behavior in vitest - - DnsCacheOptions follows the same constructor options pattern as TimerTableOptions ---- - -## 2026-03-23 - US-020 -- Implemented full POSIX sigaction/sigprocmask semantics in ProcessTable -- SignalHandler type: handler ('default' | 'ignore' | function), mask (sa_mask), flags (SA_RESTART, SA_NOCLDSTOP) -- ProcessSignalState on ProcessEntry: handlers Map, blockedSignals Set, pendingSignals Set -- sigaction(pid, signal, handler): registers handler, returns previous, rejects SIGKILL/SIGSTOP -- sigprocmask(pid, how, set): SIG_BLOCK/SIG_UNBLOCK/SIG_SETMASK, filters SIGKILL/SIGSTOP, delivers pending on unblock -- deliverSignal refactored: checks blocked → queue, checks handler → dispatch, default action for unregistered -- SIGCONT always resumes (POSIX) even when caught or blocked; handler invoked after resume -- SIGCHLD default action is now "ignore" (correct POSIX) — updated existing test to use registered handler -- Standard signals (1-31) coalesce via Set — only one pending per signal number -- Pending signals delivered in ascending signal number order -- sa_mask temporarily blocked during handler execution, restored after -- SIGALRM delivery now routes through handler system -- EINTR added to KernelErrorCode for future SA_RESTART integration -- Files changed: packages/core/src/kernel/types.ts (SignalHandler, ProcessSignalState, SA_RESTART, SA_NOCLDSTOP, SIG_BLOCK/UNBLOCK/SETMASK, EINTR, signalState on ProcessEntry), packages/core/src/kernel/process-table.ts (sigaction, sigprocmask, getSignalState, deliverSignal/dispatchSignal/applyDefaultAction/deliverPendingSignals refactor), packages/core/src/kernel/index.ts (new exports), packages/core/test/kernel/signal-handlers.test.ts (new, 28 tests), packages/core/test/kernel/process-table.test.ts (updated SIGCHLD test) -- **Learnings for future iterations:** - - SIGCONT is special: resume always happens regardless of handler/blocking — then handler is dispatched; other signals can be purely handler-overridden - - SIGCHLD default action is "ignore" per POSIX — tests expecting driverProcess.kill(SIGCHLD) need a registered handler - - Recursive deliverPendingSignals can cause double-dispatch — check pendingSignals.has(sig) before dispatching from snapshot array - - deliverSignal → dispatchSignal → applyDefaultAction three-level dispatch keeps POSIX semantics clean - - ProcessEntry.signalState is initialized in register() — no separate initialization step needed ---- - -## 2026-03-23 - US-021 -- Implemented concrete Node.js HostNetworkAdapter in packages/nodejs/src/host-network-adapter.ts -- NodeHostSocket: wraps net.Socket with queued-read model (data/EOF buffered, each read() returns next chunk or null) -- NodeHostListener: wraps net.Server with connection queue; accept() returns next HostSocket -- NodeHostUdpSocket: wraps dgram.Socket with message queue; recv() returns next datagram -- createNodeHostNetworkAdapter() factory: tcpConnect (net.connect), tcpListen (net.createServer), udpBind (dgram.createSocket), udpSend (dgram.send), dnsLookup (dns.lookup) -- Added HostNetworkAdapter/HostSocket/HostListener/HostUdpSocket/DnsResult type exports to @secure-exec/core main index.ts -- Exported createNodeHostNetworkAdapter from packages/nodejs/src/index.ts -- Files changed: packages/nodejs/src/host-network-adapter.ts (new), packages/nodejs/src/index.ts (export), packages/core/src/index.ts (type exports) -- **Learnings for future iterations:** - - Host adapter types were only in kernel/index.ts, not the core main index — had to add type exports to packages/core/src/index.ts - - After editing core exports, must rebuild core (`pnpm turbo run build --filter=@secure-exec/core`) before nodejs typecheck can see the new types - - The queued-read pattern (readQueue + waiters array) is reusable for any pull-based async reader wrapping push-based Node streams - - udpSend needs access to the underlying dgram.Socket — uses casting through the wrapper since HostUdpSocket interface is opaque - - HostSocket.setOption is a simple pass-through; real option-to-setsockopt mapping will be needed when wired into the kernel ---- - -## 2026-03-23 - US-022 -- Migrated Node.js FD table from in-isolate Map to host-side kernel ProcessFDTable -- Added 8 new bridge handler keys (fdOpen, fdClose, fdRead, fdWrite, fdFstat, fdFtruncate, fdFsync, fdGetPath) to bridge-contract.ts -- Added buildKernelFdBridgeHandlers() in bridge-handlers.ts — creates FDTableManager + ProcessFDTable per execution, delegates I/O to VFS -- Wired FD handlers into execution-driver.ts dispatch handlers (routed through _loadPolyfill bridge dispatch) -- Replaced all fdTable.get/set/has/delete in bridge/fs.ts with bridge calls to kernel FD handlers -- Removed fdTable Map, nextFd counter, MAX_BRIDGE_FDS, canRead(), canWrite() from bridge/fs.ts -- readSync/writeSync now use base64 encoding for binary data transfer across the bridge boundary -- Files changed: packages/nodejs/src/bridge-contract.ts (8 new keys), packages/nodejs/src/bridge-handlers.ts (buildKernelFdBridgeHandlers), packages/nodejs/src/execution-driver.ts (wiring + cleanup), packages/nodejs/src/bridge/fs.ts (fdTable removal, bridge call migration) -- **Learnings for future iterations:** - - Bridge globals not in the Rust V8 SYNC_BRIDGE_FNS are automatically dispatched through _loadPolyfill via BRIDGE_DISPATCH_SHIM — no Rust code changes needed for new bridge functions - - The dispatch shim JSON-serializes args and results, so binary data must be base64-encoded - - After modifying bridge source (bridge/fs.ts), the bridge IIFE must be rebuilt via `pnpm turbo run build --filter=@secure-exec/nodejs` for changes to take effect in tests - - FD operations (open/close/read/write/fstat) now go through the bridge dispatch; error messages must contain "EBADF"/"ENOENT" substrings for the in-isolate error wrapping to produce correct fs error codes - - ProcessFDTable from @secure-exec/core handles FD allocation, cursor tracking, and reference counting — bridge handlers don't need to implement these manually - - resource-budgets.test.ts has 7 pre-existing flaky failures unrelated to FD migration - - runtime.test.ts has 2 pre-existing PTY/setRawMode failures unrelated to FD migration ---- - -## 2026-03-23 - US-023 -- Migrated Node.js net.connect to route through kernel socket table instead of direct host TCP -- buildNetworkSocketBridgeHandlers now accepts optional socketTable + pid; when provided, uses kernel socket routing -- Kernel path: create kernel socket (sync, returns ID) → async connect → read pump dispatches data/end/close events -- Read pump uses socket.readWaiters.enqueue().wait() to block until data arrives, then dispatches via bridge events -- Fallback path preserved: when socketTable is not provided, original direct net.Socket behavior is used (backward compat) -- Added hostNetworkAdapter to KernelOptions and wired to SocketTable constructor for external connection routing -- Added socketTable to KernelInterface, exposed from createKernelInterface() in kernel.ts -- Added socketTable/pid to NodeExecutionDriverOptions, passed through execution-driver to bridge handlers -- kernel-runtime.ts passes kernel.socketTable and ctx.pid to NodeExecutionDriver -- Removed unused netSockets Map, nextNetSocketId, and netSocket* methods from createDefaultNetworkAdapter (driver.ts) -- Removed netSocket* methods from NetworkAdapter interface (core/types.ts) and permission wrappers (permissions.ts) -- Removed unused tls import from driver.ts -- Exported SocketTable, AF_INET, AF_INET6, AF_UNIX, SOCK_STREAM, SOCK_DGRAM from @secure-exec/core index -- TLS upgrade for external kernel sockets: accesses underlying net.Socket from NodeHostSocket for tls.connect wrapping -- Files changed: packages/core/src/kernel/types.ts (hostNetworkAdapter on KernelOptions, socketTable on KernelInterface), packages/core/src/kernel/kernel.ts (wire hostAdapter, expose socketTable on KernelInterface), packages/core/src/index.ts (SocketTable + constant exports), packages/core/src/types.ts (removed netSocket* from NetworkAdapter), packages/core/src/shared/permissions.ts (removed netSocket* wrappers), packages/nodejs/src/bridge-handlers.ts (kernel socket routing + fallback), packages/nodejs/src/execution-driver.ts (socketTable/pid passthrough), packages/nodejs/src/isolate-bootstrap.ts (socketTable/pid on options), packages/nodejs/src/kernel-runtime.ts (wire socketTable/pid), packages/nodejs/src/driver.ts (removed netSockets + tls import) -- **Learnings for future iterations:** - - SocketTable.close() requires both socketId AND pid — per-process isolation check - - The kernel's connect() is async but bridge handlers are sync — return socketId immediately, dispatch events async (matches existing bridge pattern) - - The read pump waits on socket.readWaiters (WaitQueue) for data — no polling needed - - External kernel sockets have hostSocket (NodeHostSocket) wrapping real net.Socket — TLS upgrade accesses the inner socket via casting - - NetworkAdapter.netSocket* methods were dead code — never called by any consumer; bridge handlers are the actual path - - When adding exports to @secure-exec/core index.ts, must rebuild core before downstream packages can see them ---- - -## 2026-03-23 - US-024 -- Migrated Node.js http.createServer to route through kernel socket table instead of adapter.httpServerListen -- When socketTable + pid available, bridge handler creates kernel socket → bind → listen (external: true) -- Kernel creates real TCP listener via hostAdapter.tcpListen(), accept pump feeds connections to local http.Server -- Created KernelSocketDuplex class (stream.Duplex) to bridge kernel sockets to Node http module for HTTP parsing -- Accept loop dequeues connections from kernel listener backlog and feeds them to http.Server via emit('connection') -- HTTP protocol parsing stays on host side (in Node http module) — kernel handles TCP, bridge handles HTTP -- For loopback: sandbox connect() pairs kernel sockets directly, no real TCP involved -- For external: hostAdapter.tcpListen creates real net.Server, kernel accept pump creates kernel sockets for incoming connections -- Added trackOwnedPort/untrackOwnedPort to NetworkAdapter interface for SSRF loopback exemption coordination -- Removed serverRequestListeners Map from bridge/network.ts — request listener stored directly on Server instance -- Changed buildNetworkBridgeHandlers to return NetworkBridgeResult { handlers, dispose } for kernel HTTP server cleanup -- Fallback adapter path preserved: when socketTable not provided, existing adapter.httpServerListen behavior is used -- Files changed: packages/core/src/types.ts (trackOwnedPort/untrackOwnedPort on NetworkAdapter), packages/nodejs/src/bridge-handlers.ts (kernel HTTP server path, KernelSocketDuplex, accept loop, NetworkBridgeResult), packages/nodejs/src/execution-driver.ts (socketTable/pid passthrough to network bridge, dispose on cleanup), packages/nodejs/src/driver.ts (trackOwnedPort/untrackOwnedPort impl), packages/nodejs/src/bridge/network.ts (serverRequestListeners removal, _requestListener on Server instance) -- **Learnings for future iterations:** - - http.Server + server.emit('connection', duplexStream) feeds kernel socket data through Node's HTTP parser without real TCP - - KernelSocketDuplex needs socket-like properties (remoteAddress, remotePort, setNoDelay, setKeepAlive, setTimeout) for Node http module compatibility - - The kernel's listen() with { external: true } starts an internal accept pump — bridge handler's accept loop calls socketTable.accept() to dequeue connections - - buildNetworkBridgeHandlers now returns { handlers, dispose } — dispose closes all kernel HTTP servers on execution cleanup - - trackOwnedPort/untrackOwnedPort coordinates SSRF exemption between kernel HTTP servers and adapter fetch/httpRequest until US-025 migrates SSRF fully to kernel - - servers Map and ownedServerPorts Set in driver.ts remain for adapter fallback path — full removal deferred to US-025 ---- - -## 2026-03-23 - US-025 -- Migrated SSRF validation from driver.ts NetworkAdapter to bridge-handlers.ts with kernel socket table awareness -- Added assertNotPrivateHost, isPrivateIp, isLoopbackHost functions to bridge-handlers.ts -- Bridge handler checks SSRF before calling adapter.fetch() and adapter.httpRequest() -- Kernel-aware loopback exemption: assertNotPrivateHost uses socketTable.findListener() to check if a port has a kernel listener -- Adapter retains defense-in-depth SSRF checks (assertNotPrivateHost in redirect loop and httpRequest) for non-bridge callers -- Removed trackOwnedPort/untrackOwnedPort from NetworkAdapter interface and driver.ts (kernel listener check replaces ownedServerPorts for loopback exemption) -- Removed adapter.trackOwnedPort/untrackOwnedPort calls from kernel HTTP server path in bridge-handlers.ts -- Files changed: packages/core/src/types.ts (removed trackOwnedPort/untrackOwnedPort from NetworkAdapter), packages/nodejs/src/bridge-handlers.ts (SSRF functions + fetch/httpRequest SSRF checks), packages/nodejs/src/driver.ts (adapter SSRF comments updated, trackOwnedPort removed) -- **Learnings for future iterations:** - - socketTable.findListener({ host: '127.0.0.1', port }) returns the listening kernel socket or null — use for loopback port ownership check - - Defense-in-depth: adapter keeps basic SSRF for redirect validation; bridge handler adds kernel-aware primary check - - When testing SSRF changes, ALWAYS rebuild the bridge IIFE (pnpm turbo run build --filter=@secure-exec/nodejs --force) — stale bridge code causes misleading test failures - - ownedServerPorts Set remains in driver.ts for the adapter fallback path (httpServerListen) but kernel path uses socketTable.findListener() exclusively ---- - -## 2026-03-23 - US-026 -- Migrated Node.js child process registry to kernel process table -- On spawn: allocates PID from processTable.allocatePid(), registers with processTable.register() -- On exit: calls processTable.markExited(pid, code) for kernel-level process lifecycle tracking -- On kill: routes through processTable.kill(pid, signal) instead of direct SpawnedProcess.kill -- Created wrapAsDriverProcess() to adapt SpawnedProcess to kernel DriverProcess interface (adds onStdout/onStderr/onExit stubs) -- Removed activeChildren Map from bridge/child-process.ts — replaced with childProcessInstances (event routing only, not process state) -- Process state (running/exited) now tracked by kernel process table; sandbox-side Map only dispatches stream events -- Exposed processTable on KernelInterface (types.ts) and KernelImpl (kernel.ts) -- Added processTable to NodeExecutionDriverOptions, wired through execution-driver.ts and kernel-runtime.ts -- spawnSync also registers with kernel process table and marks exited on completion -- Files changed: packages/core/src/kernel/types.ts (processTable on KernelInterface), packages/core/src/kernel/kernel.ts (expose processTable), packages/nodejs/src/bridge-handlers.ts (kernel registration in spawn/exit/kill, wrapAsDriverProcess), packages/nodejs/src/execution-driver.ts (processTable passthrough), packages/nodejs/src/isolate-bootstrap.ts (processTable option), packages/nodejs/src/kernel-runtime.ts (wire processTable), packages/nodejs/src/bridge/child-process.ts (activeChildren → childProcessInstances) -- **Learnings for future iterations:** - - DriverProcess has onStdout/onStderr/onExit callback properties that SpawnedProcess lacks — wrap with null stubs when adapting - - ProcessTable.register() requires ProcessContext with env/cwd/fds — env must not be undefined (use ?? {}) - - processTable is private on KernelImpl but exposed on KernelInterface — drivers access via kernel interface object - - sessionToPid Map bridges between bridge handler's sessionId (internal counter) and kernel PID - - Fallback path preserved: when processTable not provided, original non-kernel behavior unchanged ---- - -## 2026-03-24 - US-027 -- Routed WasmVM TCP socket operations through kernel SocketTable instead of driver-private _sockets Map -- Removed _sockets Map and _nextSocketId counter from driver.ts -- netSocket → kernel.socketTable.create(domain, type, protocol, pid) -- netConnect → await kernel.socketTable.connect(socketId, { host, port }) — hostAdapter handles real TCP -- netSend → kernel.socketTable.send(socketId, data, flags) — TLS-upgraded sockets write directly -- netRecv → kernel.socketTable.recv() with readWaiters wait for blocking reads on external sockets -- netClose → kernel.socketTable.close(socketId, pid) + TLS socket cleanup -- netPoll → kernel.socketTable.poll() for socket readability, kernel.fdPoll for pipes -- netTlsConnect → accesses hostSocket's underlying net.Socket for TLS upgrade, stores in _tlsSockets -- kernel-worker.ts: localToKernelFd.set(kernelSocketId, kernelSocketId) on net_socket, delete on net_close -- Test updated: createMockKernel() provides SocketTable + real HostNetworkAdapter (TestHostSocket wrapping node:net) -- Files changed: packages/wasmvm/src/driver.ts (socket handler migration, _sockets→kernel.socketTable), packages/wasmvm/src/kernel-worker.ts (localToKernelFd mapping for socket FDs), packages/wasmvm/test/net-socket.test.ts (mock kernel + scoped call helpers) -- **Learnings for future iterations:** - - Kernel recv() returns null for both "no data yet" and "EOF" — distinguish by checking socket.external + peerWriteClosed for external, peerId existence for loopback - - WaitHandle timeout goes in WaitQueue.enqueue(timeoutMs), not WaitHandle.wait() — wait() takes no args - - TLS upgrade accesses NodeHostSocket's private socket field via (hostSocket as any).socket — set hostSocket=undefined to detach kernel read pump - - SocketTable.close() requires both socketId AND pid for per-process ownership check - - Test kernel mock only needs socketTable + fdPoll — other kernel methods not needed for socket tests - - Kernel socket IDs are used directly as WASM FDs — identity mapping in localToKernelFd for poll consistency ---- - -## 2026-03-24 - US-028 -- Implemented bind/listen/accept WASI extensions for WasmVM server sockets -- Added net_bind, net_listen, net_accept extern declarations and safe Rust wrappers to native/wasmvm/crates/wasi-ext/src/lib.rs -- Added net_bind, net_listen, net_accept import handlers to packages/wasmvm/src/kernel-worker.ts -- Added netBind, netListen, netAccept RPC handler cases to packages/wasmvm/src/driver.ts -- Added EAGAIN and EADDRINUSE errno codes to packages/wasmvm/src/wasi-constants.ts -- **Learnings for future iterations:** - - WASI errno codes for EAGAIN=6 and EADDRINUSE=3 were missing from wasi-constants.ts — when adding new socket operations, check that all possible KernelError codes have WASI errno mappings - - accept() handler needs to wait on acceptWaiters when backlog is empty, with 30s timeout matching recv() pattern - - Address serialization for bind uses same "host:port" format as connect; unix sockets use bare path (no colon) - - net_accept returns new FD via intResult and remote address string via data buffer — same dual-channel pattern used by getaddrinfo - - Rust vendor directory is fetched at build time (make wasm), cargo check won't work without it ---- - -## 2026-03-24 - US-029 -- Extended 0008-sockets.patch with bind(), listen(), accept() C implementations in host_socket.c -- Added WASM import declarations: __host_net_bind, __host_net_listen, __host_net_accept -- bind() follows same sockaddr-to-string pattern as connect() (AF_INET/AF_INET6 → "host:port") -- listen() is a simple passthrough with backlog clamped to non-negative -- accept() calls __host_net_accept, parses returned "host:port" string back into sockaddr_in/sockaddr_in6 -- Un-gated bind() and listen() declarations in sys/socket.h (removed #if wasilibc_unmodified_upstream guard) -- accept()/accept4() were already un-gated in wasi-libc at pinned commit 574b88da -- Files changed: native/wasmvm/patches/wasi-libc/0008-sockets.patch -- **Learnings for future iterations:** - - accept/accept4 declarations are NOT behind the wasilibc_unmodified_upstream guard in the pinned wasi-libc commit (574b88da) — only bind/listen/connect/socket need un-gating - - Address string format from host is "host:port" — use strrchr for last colon to handle IPv6 addresses - - The build script (patch-wasi-libc.sh) removes conflicting .o files from libc.a — bind/listen/accept don't need removal since they have no wasip1 stubs - - Patch hunk line counts must be updated when adding/removing lines — @@ header second pair is the new file line range ---- - -## 2026-03-24 - US-030 -- Added net_sendto and net_recvfrom WASI extensions for WasmVM UDP -- Rust: added extern declarations and safe wrappers in native/wasmvm/crates/wasi-ext/src/lib.rs - - net_sendto(fd, buf_ptr, buf_len, flags, addr_ptr, addr_len, ret_sent) -> errno - - net_recvfrom(fd, buf_ptr, buf_len, flags, ret_received, ret_addr, ret_addr_len) -> errno - - sendto() wrapper: takes fd, buf, flags, addr → Result - - recvfrom() wrapper: takes fd, buf, flags, addr_buf → Result<(u32, u32), Errno> -- kernel-worker.ts: net_sendto handler reads data + addr from WASM memory, dispatches to netSendTo RPC -- kernel-worker.ts: net_recvfrom handler dispatches to netRecvFrom RPC, unpacks [data|addr] from combined buffer -- driver.ts: netSendTo parses "host:port" addr, calls kernel.socketTable.sendTo() -- driver.ts: netRecvFrom waits for datagram (30s timeout), packs [data|addr] into combined response buffer with intResult = data length -- Files changed: native/wasmvm/crates/wasi-ext/src/lib.rs, packages/wasmvm/src/kernel-worker.ts, packages/wasmvm/src/driver.ts -- **Learnings for future iterations:** - - RPC response only has { errno, intResult, data } — no string field; for multi-value returns, pack into data buffer and use intResult as split offset - - The responseData → SIG_IDX_DATA_LEN path overwrites manual Atomics.store calls — always use responseData = combined for correct data length signaling - - sendTo/recvFrom already exist on SocketTable (packages/core/src/kernel/socket-table.ts) — only WASI host import and RPC plumbing needed ---- - -## 2026-03-24 - US-031 -- Added sendto() and recvfrom() C implementations to 0008-sockets.patch -- Added AF_UNIX support in address serialization via sockaddr_to_string() / string_to_sockaddr() helper functions -- sockaddr_to_string: AF_INET/AF_INET6 → "host:port", AF_UNIX → path string -- string_to_sockaddr: "host:port" → sockaddr_in/sockaddr_in6, no colon → sockaddr_un -- sendto() calls __host_net_sendto with serialized addr; falls back to send() when dest_addr is NULL -- recvfrom() calls __host_net_recvfrom, parses returned addr via string_to_sockaddr; falls back to recv() when src_addr is NULL -- Refactored connect(), bind(), accept() to use the shared helper functions (removed duplicated address serialization code) -- Added sockaddr_un definition with __has_include guard (WASI libc doesn't provide sys/un.h) -- Updated WASM import declarations to include net_sendto and net_recvfrom (matching lib.rs signatures) -- Updated patch hunk line count from 518 to 628 -- Files changed: native/wasmvm/patches/wasi-libc/0008-sockets.patch -- **Learnings for future iterations:** - - WASI libc doesn't include sys/un.h or define AF_UNIX — must define sockaddr_un inline with __has_include guard - - Address convention: inet addresses as "host:port", unix as bare path (no colon) — driver uses lastIndexOf(':') to distinguish - - The driver's netConnect handler doesn't support unix paths yet (returns EINVAL) — only netBind handles both; this is a known gap for future stories - - __builtin_offsetof works in clang for computing sun_path offset in sockaddr_un - - Patch line counts in @@ headers must be updated manually when adding lines to a /dev/null → new file diff ---- - -## 2026-03-24 - US-032 -- Added tcp_server.c C test program: socket() → bind(port) → listen() → accept() → recv() → send("pong") → close() -- Added tcp_server to PATCHED_PROGRAMS in native/wasmvm/c/Makefile -- Added packages/wasmvm/test/net-server.test.ts: integration test that spawns tcp_server WASM, connects via kernel socketTable loopback, sends "ping", receives "pong", verifies stdout output -- Files changed: native/wasmvm/c/programs/tcp_server.c (new), native/wasmvm/c/Makefile (PATCHED_PROGRAMS), packages/wasmvm/test/net-server.test.ts (new) -- **Learnings for future iterations:** - - For WASM server tests, start kernel.exec() without awaiting, poll findListener() for readiness, then connect via socketTable loopback - - Client sockets in test use a fake PID (e.g., 999) — socketTable.create doesn't validate pid against process table - - Loopback connect() is synchronous inside the async function — no host adapter needed for kernel-to-kernel routing - - recv() may return null when WASM worker hasn't processed yet — poll with setTimeout to yield to event loop between retries - - tcp_server prints "listening on port N" after listen() and fflush(stdout) — useful for verifying server readiness in test output ---- - -## 2026-03-24 - US-033 -- Added udp_echo.c C test program: socket(SOCK_DGRAM) → bind(port) → recvfrom() → sendto() (echo) → close() -- Added udp_echo to PATCHED_PROGRAMS in native/wasmvm/c/Makefile -- Added packages/wasmvm/test/net-udp.test.ts: integration test that spawns udp_echo WASM, sends datagram via kernel socketTable, verifies echo response and message boundary preservation -- Made findBoundUdp() public on SocketTable (was private) — mirrors findListener() for TCP, needed by test to poll for UDP binding readiness -- Files changed: native/wasmvm/c/programs/udp_echo.c (new), native/wasmvm/c/Makefile (PATCHED_PROGRAMS), packages/wasmvm/test/net-udp.test.ts (new), packages/core/src/kernel/socket-table.ts (findBoundUdp visibility) -- **Learnings for future iterations:** - - findBoundUdp was private on SocketTable — needed to make it public for test polling (mirrors findListener for TCP) - - UDP server tests poll waitForUdpBinding() instead of waitForListener() — separate binding map from TCP listeners - - UDP client sockets need bind() to ephemeral port (port 0) before sendTo — otherwise the kernel has no source address for the reply - - The 0008-sockets.patch has a context drift issue (hunk #2 fails without --fuzz=3) — pre-existing issue, not caused by this story - - C programs compile natively with `cc -O0 -g -I include/ -o udp_echo programs/udp_echo.c` for quick verification ---- - -## 2026-03-24 - US-034 -- Implemented WasmVM Unix domain socket C test program and integration test -- Created native/wasmvm/c/programs/unix_socket.c: AF_UNIX server (socket → bind → listen → accept → recv → send "pong") -- Added unix_socket to PATCHED_PROGRAMS in Makefile -- Fixed packages/wasmvm/src/driver.ts netConnect handler to support Unix domain socket paths (no colon = Unix path, matching netBind pattern) -- Created packages/wasmvm/test/net-unix.test.ts: spawns unix_socket WASM, connects from kernel, verifies data exchange -- Files changed: native/wasmvm/c/programs/unix_socket.c (new), native/wasmvm/c/Makefile, packages/wasmvm/src/driver.ts, packages/wasmvm/test/net-unix.test.ts (new) -- **Learnings for future iterations:** - - netConnect in driver.ts was missing Unix domain socket path support — netBind had it but netConnect returned EINVAL for pathless addresses - - Unix socket C programs need fallback sockaddr_un definition since sys/un.h may not be available in WASI — the 0008-sockets.patch provides its own but __has_include guard is needed - - waitForUnixListener uses findListener({ path }) instead of findListener({ host, port }) — same method, different address type - - SimpleVFS needs /tmp directory created in beforeEach for unix socket files to be created by the kernel ---- - -## 2026-03-24 - US-035 -- Implemented WasmVM cooperative signal handler support: WASI extension, kernel integration, C sysroot patch, test program, integration test -- Added proc_sigaction to host_process module in native/wasmvm/crates/wasi-ext/src/lib.rs (signal, action) -> errno -- Extended SAB protocol with SIG_IDX_PENDING_SIGNAL slot in packages/wasmvm/src/syscall-rpc.ts for cooperative delivery -- Added sigaction RPC dispatch in packages/wasmvm/src/driver.ts — registers handler in kernel process table, piggybacking pending signals in RPC responses -- Added _wasmPendingSignals Map for per-PID signal queuing in driver -- Added proc_sigaction host import handler in packages/wasmvm/src/kernel-worker.ts -- Added cooperative signal delivery: after each rpcCall, check SIG_IDX_PENDING_SIGNAL and invoke wasmTrampoline -- Added wasmTrampoline wiring after WASM instantiation (reads __wasi_signal_trampoline export) -- Created 0011-sigaction.patch: signal() implementation + __wasi_signal_trampoline export in C sysroot -- Created native/wasmvm/c/programs/signal_handler.c: registers SIGINT handler, busy-loops with usleep, prints caught signal -- Added signal_handler to PATCHED_PROGRAMS in Makefile -- Created packages/wasmvm/test/signal-handler.test.ts: spawns signal_handler WASM, delivers SIGINT via ManagedProcess.kill(), verifies handler fires -- Files changed: native/wasmvm/crates/wasi-ext/src/lib.rs, packages/wasmvm/src/syscall-rpc.ts, packages/wasmvm/src/driver.ts, packages/wasmvm/src/kernel-worker.ts, native/wasmvm/patches/wasi-libc/0011-sigaction.patch (new), native/wasmvm/c/programs/signal_handler.c (new), native/wasmvm/c/Makefile, packages/wasmvm/test/signal-handler.test.ts (new) -- **Learnings for future iterations:** - - Kernel public Kernel interface has no kill(pid, signal) — use ManagedProcess.kill() from spawn() for tests, or kernel.processTable.kill() internally - - SignalDisposition type is exported from @secure-exec/core kernel index but NOT from the main package entry point — use inline type or import from kernel path - - Cooperative signal delivery architecture: handler registered in kernel is a JS callback that queues to _wasmPendingSignals; driver piggybacking delivers one signal per RPC response in SIG_IDX_PENDING_SIGNAL; worker reads it and calls wasmTrampoline - - C sysroot signal handling: signal() stores handler in static table + calls proc_sigaction WASM import; __wasi_signal_trampoline dispatches to stored handler - - Signals only delivered at syscall boundaries (fundamental WASM limitation) — long compute loops without syscalls won't see signals - - Pre-existing test failures in fd-table.test.ts, wasi-polyfill.test.ts, net-socket.test.ts, resource-exhaustion.test.ts — not related to this work ---- - -## 2026-03-24 - US-036 -- Implemented cross-runtime network integration test in packages/secure-exec/tests/kernel/cross-runtime-network.test.ts -- Three tests: (1) WasmVM tcp_server ↔ Node.js net.connect data exchange, (2) Node.js http.createServer ↔ WasmVM http_get HTTP exchange, (3) loopback verification via direct kernel socket table access -- Uses createKernel with both WasmVM (C_BUILD_DIR + COMMANDS_DIR) and Node.js runtimes mounted -- Skip-guarded for missing WASM binaries (tcp_server, http_get) -- Files changed: packages/secure-exec/tests/kernel/cross-runtime-network.test.ts (new) -- **Learnings for future iterations:** - - createIntegrationKernel helper only includes COMMANDS_DIR (Rust binaries); for C WASM programs, create kernel manually with commandDirs: [C_BUILD_DIR, COMMANDS_DIR] - - http_get.c is a ready-made HTTP client C program that does GET and prints body — useful for cross-runtime HTTP tests - - waitForListener() pattern: poll kernel.socketTable.findListener() in a loop for server readiness - - For long-running server processes, use kernel.spawn() with kill() cleanup; for one-shot servers (like tcp_server), use kernel.exec() which completes after one connection ---- - -## 2026-03-24 - US-037 -- Re-ran full Node.js conformance suite (3532 tests) after kernel consolidation -- Genuine pass rate improved from 11.3% (399/3532) to 19.9% (704/3532) — 305 new genuine passes -- 357 tests that were expected-fail now genuinely pass — removed their expectations -- 49 previously-passing tests now fail due to implementation gaps — added specific failure reasons -- 38 tests passing under glob-match patterns got pass overrides -- FIX-01 (HTTP server tests): 183 of 492 tests now pass (37% resolved) -- Files changed: expectations.json (restored + updated), runner.test.ts (restored), common/ shims (restored), conformance-report.json, nodejs-compat-roadmap.md, package.json (minimatch dep) -- **Learnings for future iterations:** - - The conformance runner was deleted in commit 2783baf3 — needs to be restored from git history before running - - Tests marked `expected: "fail"` that hang forever still time out and fail vitest — use `expected: "skip"` for tests that hang - - Glob patterns in expectations.json need explicit pass overrides for individual tests that now genuinely pass - - `minimatch` npm package is needed for the conformance runner (glob pattern matching) - - Full conformance suite takes ~3-5 minutes to run (3532 tests at 30s timeout each) - - Newly failing tests (regressions from expected-pass) need investigation and proper categorization ---- - -## 2026-03-24 - US-038 -- Reclassified dgram, net, tls, https, http2 conformance test expectations from `unsupported-module` to `implementation-gap` -- Re-ran all 735 tests across 5 network modules: 38 genuinely pass, 697 fail (same as before reclassification) -- Failure breakdown: 494 assertion failures (API gaps), 169 missing fixture files (TLS certs), 16 timeouts, 13 cluster-dependent, 5 other -- Updated expectations.json: glob patterns reclassified, individual pass overrides preserved -- Updated conformance-report.json with correct module-level counts -- Updated docs-internal/nodejs-compat-roadmap.md: unsupported-module 1226→735, implementation-gap 762→1366 -- Files changed: expectations.json, conformance-report.json, nodejs-compat-roadmap.md, prd.json -- **Learnings for future iterations:** - - When running conformance tests with `-t "node/"`, expected-fail tests that actually fail show as vitest PASSES — don't confuse this with the test genuinely passing - - To find genuinely passing tests, you must check the vitest JSON output for `status: "passed"` vs failure messages containing "expected to fail but passed" - - Most TLS/HTTPS conformance failures are from missing fixture files (certs, keys) not loaded into the VFS, not from actual API gaps - - dgram and net failures are mostly API assertion failures — the kernel socket table provides the transport but the bridge surface area has gaps - - http2 has the most failures (252) — mostly assertion failures in protocol handling ---- - -## 2026-03-24 - US-039 -- Completed adversarial proofing audit of kernel consolidation implementation -- Verified WasmVM driver.ts is fully migrated — no legacy _sockets or _nextSocketId -- Verified kernel path exists for http.createServer (socketTable.create → bind → listen) -- Verified kernel path exists for net.connect (socketTable.create → socketTable.connect) -- Verified host-network-adapter.ts has no SSRF validation (clean delegation) -- Verified kernel checkNetworkPermission() covers connect, listen, send, sendTo, externalListen -- Documented 4 remaining gaps as future work (legacy adapter fallback paths) -- Created docs-internal/kernel-consolidation-audit.md with full findings -- Files changed: docs-internal/kernel-consolidation-audit.md (new), prd.json, progress.txt -- **Learnings for future iterations:** - - The legacy adapter path (createDefaultNetworkAdapter in driver.ts) still has servers/ownedServerPorts/upgradeSockets Maps because createNodeRuntimeDriverFactory creates drivers without kernel routing - - Bridge-side activeNetSockets Map in bridge/network.ts is event routing only (like childProcessInstances) — it maps socket IDs to bridge NetSocket instances for dispatching host events - - SSRF validation is intentionally duplicated: bridge-handlers.ts has kernel-aware version (socketTable.findListener), driver.ts has adapter version (ownedServerPorts) — the adapter copy is defense-in-depth for the fallback path - - Removing the legacy adapter networking requires migrating NodeRuntime to use KernelNodeRuntime as its backing implementation — this is a separate workstream ---- - -## 2026-03-24 - Completion -- All user stories US-001 through US-039 now have passes: true -- Committed completion marker: c5523e80 ---- - -## 2026-03-24 17:13 PDT - US-040 -- Removed the adapter-managed HTTP server surface from `NetworkAdapter` and its permission wrapper/stub so Node runtime networking stays client-only at the adapter layer while server/listener state remains kernel-managed -- Deleted the legacy loopback HTTP server implementation from `packages/nodejs/src/default-network-adapter.ts`; kept only fetch/DNS/httpRequest plus upgrade-socket callbacks for client-side upgrade flows -- Updated runtime-driver tests to stop calling `adapter.httpServerListen/httpServerClose` directly and instead cover kernel-backed server behavior with sandbox `http.createServer()`, loopback checker usage, and `initialExemptPorts` where host-side requests need to reach a sandbox listener -- Synced docs/contracts to describe the narrower `NetworkAdapter` surface and the fact that standalone `NodeRuntime` still provisions an internal `SocketTable` for kernel-backed socket routing -- Quality checks run: - - `pnpm tsc --noEmit -p packages/core/tsconfig.json` ✅ - - `pnpm tsc --noEmit -p packages/nodejs/tsconfig.json` ✅ - - `pnpm tsc --noEmit -p packages/secure-exec/tsconfig.json` ✅ - - `pnpm vitest run packages/secure-exec/tests/test-suite/node.test.ts` ✅ - - `pnpm vitest run packages/secure-exec/tests/runtime-driver/` ❌ blocked by pre-existing unrelated failures; first concrete failure was `packages/secure-exec/tests/runtime-driver/node/hono-fetch-external.test.ts` with `Cannot read properties of null (reading 'compileScript')` -- Files changed: packages/core/src/types.ts, packages/core/src/shared/permissions.ts, packages/nodejs/src/default-network-adapter.ts, packages/secure-exec/tests/permissions.test.ts, packages/secure-exec/tests/runtime-driver/node/index.test.ts, packages/secure-exec/tests/runtime-driver/node/ssrf-protection.test.ts, packages/secure-exec/tests/runtime-driver/node/resource-budgets.test.ts, packages/secure-exec/tests/runtime-driver/node/bridge-hardening.test.ts, docs/api-reference.mdx, docs/features/networking.mdx, docs/system-drivers/node.mdx, docs-internal/arch/overview.md, .agent/contracts/node-runtime.md, progress.txt -- **Learnings for future iterations:** - - Standalone `NodeRuntime` no longer needs adapter-managed HTTP server helpers; `NodeExecutionDriver` already provisions a kernel `SocketTable` with a Node host adapter for listen/connect routing - - Keep `upgradeSocketWrite/End/Destroy` and `setUpgradeSocketCallbacks` on `NetworkAdapter` — they are still required for client-side HTTP upgrade flows even after removing adapter-managed server listeners - - Host-side tests that need to reach sandbox listeners are more reliable with fixed ports plus `initialExemptPorts` than with reintroducing owned-port bookkeeping into the adapter - - The required `packages/secure-exec/tests/runtime-driver/` command is currently red for unrelated branch issues, so US-040 should not be marked passing or committed until that suite is green ---- - -## 2026-03-24 17:22 PDT - US-040 -- Continued the US-040 cleanup already in progress and removed the now-unused `buildUpgradeSocketBridgeHandlers()` helper from `packages/nodejs/src/bridge-handlers.ts` -- Updated the bridge comment to reflect kernel-only TCP routing and added a bridge-side loopback checker that derives host-side loopback allowances from the active kernel-backed HTTP server set -- Re-ran focused verification after the bridge cleanup: - - `pnpm --filter @secure-exec/nodejs exec tsc --noEmit` ✅ - - `pnpm --filter secure-exec exec tsc --noEmit` ✅ - - `pnpm vitest run packages/nodejs/test/legacy-networking-policy.test.ts packages/secure-exec/tests/test-suite/node.test.ts packages/secure-exec/tests/runtime-driver/node/ssrf-protection.test.ts` ✅ - - `pnpm vitest run packages/secure-exec/tests/runtime-driver/node/index.test.ts -t "serves requests through bridged http.createServer and host network fetch|coerces 0.0.0.0 listen to loopback for strict sandboxing|can terminate a running sandbox HTTP server from host side|http.Agent with maxSockets=1 serializes concurrent requests"` ❌ still blocked by the broader Node runtime worktree; the sandbox HTTP server path never reaches `listen()` there, so SSRF remains blocked as a downstream symptom -- Files changed: packages/nodejs/src/bridge-handlers.ts, scripts/ralph/progress.txt -- **Learnings for future iterations:** - - The source-level policy test in `packages/nodejs/test/legacy-networking-policy.test.ts` is a good guardrail for this story; keep it when refactoring bridge/driver networking internals - - A passing SSRF adapter test does not prove host-side `runtime.network.fetch()` can reach sandbox listeners; that path also depends on the broader Node runtime successfully constructing the bridged HTTP server - - When the host-side sandbox HTTP server tests fail with SSRF, verify that the sandbox server actually reached `listen()` before assuming the loopback checker is the primary bug ---- - -## 2026-03-24 19:16 PDT - US-040 -- Finished the kernel-only HTTP bridge path by wiring `_networkHttpServerRespondRaw` and `_networkHttpServerWaitRaw` through the shared bridge contracts, Node bridge globals, and native V8 bridge registries -- Fixed the native V8 response receiver so sync bridge calls only consume matching `call_id` responses and defer unrelated `BridgeResponse` frames back to the event loop; this unblocked bridged `http.createServer()` shutdown/wait flows that were previously timing out -- Propagated `SocketTable.shutdown()` to real host sockets so accepted external TCP connections observe EOF correctly, and filled the shared custom-global inventory gaps that the bridge policy test surfaced -- Files changed: .agent/contracts/node-bridge.md, native/v8-runtime/src/host_call.rs, native/v8-runtime/src/session.rs, packages/core/src/kernel/socket-table.ts, packages/core/src/shared/bridge-contract.ts, packages/core/src/shared/global-exposure.ts, packages/core/test/kernel/external-listen.test.ts, packages/nodejs/src/bridge-contract.ts, packages/nodejs/src/bridge-handlers.ts, packages/nodejs/src/bridge/network.ts, packages/nodejs/src/execution-driver.ts, packages/nodejs/test/kernel-http-bridge.test.ts, packages/nodejs/test/legacy-networking-policy.test.ts, packages/secure-exec/tests/bridge-registry-policy.test.ts, packages/v8/src/runtime.ts, packages/v8/test/runtime-binary-resolution-policy.test.ts -- Quality checks run: - - `cargo build --release` in `native/v8-runtime` ✅ - - `pnpm tsc -p packages/v8/tsconfig.json` ✅ - - `pnpm turbo run build --filter=@secure-exec/nodejs` ✅ - - `pnpm vitest run packages/secure-exec/tests/runtime-driver/node/index.test.ts -t "serves requests through bridged http.createServer and host network fetch|coerces 0.0.0.0 listen to loopback for strict sandboxing|can terminate a running sandbox HTTP server from host side|http.Agent with maxSockets=1 serializes concurrent requests"` ✅ - - `pnpm vitest run packages/core/test/kernel/external-listen.test.ts packages/nodejs/test/kernel-http-bridge.test.ts packages/nodejs/test/legacy-networking-policy.test.ts packages/v8/test/runtime-binary-resolution-policy.test.ts` ✅ - - `pnpm vitest run packages/secure-exec/tests/bridge-registry-policy.test.ts` ✅ -- **Learnings for future iterations:** - - Bridged HTTP server hangs can come from native response routing, not just JS bridge state; check whether sync bridge calls are consuming the wrong `BridgeResponse` - - `packages/v8/src/runtime.ts` prefers the local cargo-built runtime binary in `native/v8-runtime/target/{release,debug}` before packaged binaries, so rebuild that binary when changing native bridge/session code - - The custom-global inventory policy test is valuable for catching drift between bridge contracts and the actual runtime/global surface; update the inventory instead of weakening the test when the bridge surface legitimately grows ---- - -## 2026-03-24 20:07 PDT - US-041 -- What was implemented -- Fixed stale WasmVM C build inputs so the patched wasi-libc sysroot and C programs build locally again -- Corrected socket/syscall patch drift in the native wasm sysroot patches and fixed malformed patch application for `host_spawn_wait.c` -- Updated WasmVM socket handling so host-net sockets use worker-local FDs instead of raw kernel socket IDs, and normalized wasi-libc socket constants before routing into `SocketTable` -- Added cooperative signal polling during WASI `poll_oneoff` sleep so `signal_handler` observes pending SIGINT while sleeping -- Verified `native/wasmvm/c` programs compile and the `net-server`, `net-udp`, `net-unix`, and `signal-handler` WasmVM tests execute and pass -- Files changed -- `native/wasmvm/c/Makefile` -- `native/wasmvm/patches/wasi-libc/0002-spawn-wait.patch` -- `native/wasmvm/patches/wasi-libc/0008-sockets.patch` -- `native/wasmvm/patches/wasi-libc/0011-sigaction.patch` -- `native/wasmvm/scripts/patch-wasi-libc.sh` -- `packages/wasmvm/src/driver.ts` -- `packages/wasmvm/src/kernel-worker.ts` -- `packages/wasmvm/src/wasi-polyfill.ts` -- `packages/wasmvm/src/wasi-types.ts` -- **Learnings for future iterations:** -- Patterns discovered -- `host_net` imports from wasi-libc use bottom-half/WASI socket constants (`AF_INET=1`, `AF_UNIX=3`, `SOCK_DGRAM=5`, `SOCK_STREAM=6`), so the WasmVM bridge must normalize them before touching the shared kernel socket table -- Worker-local socket FDs need the same local-to-kernel mapping discipline as files/pipes; raw kernel socket IDs are not safe to expose to WASM code -- Gotchas encountered -- `poll_oneoff` sleep is entirely local to the worker unless you explicitly tick back through RPC, so pending cooperative signals will starve during `usleep()` loops -- The old `0002-spawn-wait.patch` add-file header was malformed (`+++ libc-bottom-half/...`), which causes patch application to place the file outside the intended vendor path -- Useful context -- The CI failure on this branch was not just the reported crossterm symptom; the first hard failures were in the patched wasi-libc sysroot/socket/signal patch application path and stale zlib/minizip fetch URLs ---- - -## 2026-03-24 20:39 PDT - US-042 -- What was implemented -- Wired `KernelImpl` to own and expose `timerTable`, clear process timers on exit, and dispose timer state with the kernel -- Replaced bridge-local timer and active-handle tracking with kernel-backed dispatch handlers so Node.js bridge budgets are enforced by `TimerTable` and `ProcessTable` -- Added `_timerDispatch` stream delivery so host timers invoke bridge callbacks without leaving standalone `exec()` stuck on pending async bridge promises -- Added focused core and nodejs tests covering kernel timer exposure, process-exit cleanup, and kernel-backed timer/handle budget enforcement -- Files changed -- `packages/core/src/kernel/kernel.ts` -- `packages/core/src/kernel/types.ts` -- `packages/core/src/index.ts` -- `packages/core/test/kernel/kernel-integration.test.ts` -- `packages/core/src/shared/bridge-contract.ts` -- `packages/core/src/shared/global-exposure.ts` -- `packages/core/isolate-runtime/src/common/runtime-globals.d.ts` -- `packages/nodejs/src/bridge/process.ts` -- `packages/nodejs/src/bridge/active-handles.ts` -- `packages/nodejs/src/bridge/dispatch.ts` -- `packages/nodejs/src/bridge-handlers.ts` -- `packages/nodejs/src/execution-driver.ts` -- `packages/nodejs/src/isolate-bootstrap.ts` -- `packages/nodejs/src/kernel-runtime.ts` -- `packages/nodejs/src/bridge-contract.ts` -- `packages/nodejs/test/kernel-resource-bridge.test.ts` -- `native/v8-runtime/src/stream.rs` -- `.agent/contracts/kernel.md` -- `.agent/contracts/node-runtime.md` -- **Learnings for future iterations:** -- Patterns discovered -- Kernel-backed bridge operations fit best behind `_loadPolyfill` `__bd:` dispatch handlers; only add a runtime global when the host needs to push an event into the isolate, like `_timerDispatch` -- Standalone `NodeRuntime.exec()` and kernel-managed `node` processes need different timer-liveness semantics; standalone mode should clean up host timers without treating them as resources that keep `exec()` open -- Gotchas encountered -- Driving timer callbacks through pending async bridge promises causes delayed timers to keep standalone executions alive until timeout; use stream-event delivery for timer callbacks instead -- Kernel budget errors need bridge-side mapping back to the existing `ERR_RESOURCE_BUDGET_EXCEEDED` shapes so current tests and user-facing errors stay stable -- Useful context -- The focused `kernel-resource-bridge` test exercises the external-kernel path directly by injecting a shared `ProcessTable` and `TimerTable` into `NodeExecutionDriver` ---- - -## 2026-03-24 20:50 PDT - US-043 -- What was implemented -- Routed WasmVM `net_setsockopt` through the kernel socket table instead of returning `ENOSYS` -- Added `netGetsockopt` and `net_getsockopt` plumbing so socket options round-trip across the worker RPC boundary as raw bytes -- Tightened WasmVM socket address parsing so AF_INET sockets reject path-style addresses with `EINVAL` instead of being misrouted as AF_UNIX -- Files changed -- `CLAUDE.md` -- `packages/wasmvm/src/driver.ts` -- `packages/wasmvm/src/kernel-worker.ts` -- `packages/wasmvm/test/net-socket.test.ts` -- `scripts/ralph/prd.json` -- `scripts/ralph/progress.txt` -- **Learnings for future iterations:** -- Patterns discovered -- WasmVM `host_net` passes socket option values as little-endian byte slices, not JS numbers; convert at the driver boundary before calling `kernel.socketTable` -- `kernel-worker.ts` should stay as a thin marshal layer for `host_net` imports; keep kernel semantics in `packages/wasmvm/src/driver.ts` -- Gotchas encountered -- For WasmVM socket RPCs, only AF_UNIX sockets should treat colon-free addresses as paths; AF_INET/AF_INET6 should reject them with `EINVAL` -- Useful context -- The focused validation for this path is `pnpm vitest run packages/wasmvm/test/net-socket.test.ts` plus `pnpm tsc --noEmit` from `packages/wasmvm` ---- - -## 2026-03-24 20:59 PDT - US-044 -- What was implemented -- Added signal-delivery tracking to `ProcessTable` and a signal-aware blocking mode on `SocketTable.accept()` / `SocketTable.recv()` so blocking waits now return `EINTR` or transparently restart when the delivered handler carries `SA_RESTART` -- Wired `KernelImpl` to provide `getSignalState` to the shared socket table and added focused kernel tests for `recv` EINTR, `recv` restart, and `accept` restart behavior -- Updated the kernel contract to document socket wait interruption semantics -- Files changed -- `.agent/contracts/kernel.md` -- `packages/core/src/kernel/kernel.ts` -- `packages/core/src/kernel/process-table.ts` -- `packages/core/src/kernel/socket-table.ts` -- `packages/core/src/kernel/types.ts` -- `packages/core/src/kernel/wait.ts` -- `packages/core/test/kernel/signal-handlers.test.ts` -- `scripts/ralph/prd.json` -- `scripts/ralph/progress.txt` -- **Learnings for future iterations:** -- Patterns discovered -- Signal-aware socket waits need both an edge-trigger (`signalWaiters`) and a monotonic sequence (`deliverySeq`) to avoid lost wake-ups when a signal lands between the pre-check and waiter registration -- Keep `SocketTable` backward-compatible by layering blocking signal semantics behind overloads/options instead of changing the existing immediate `accept()` / `recv()` behavior used across the bridge and tests -- Gotchas encountered -- `SA_RESTART` only matters for delivered handlers; ignored signals and default-ignored `SIGCHLD` should not spuriously wake blocking socket waits -- Wait queues need explicit waiter removal for `Promise.race()`-style waits or settled signal/socket handles accumulate in the queue -- Useful context -- Focused validation for this path is `pnpm tsc --noEmit -p packages/core/tsconfig.json`, `pnpm tsc --noEmit -p packages/nodejs/tsconfig.json`, and `pnpm vitest run packages/core/test/kernel/signal-handlers.test.ts packages/core/test/kernel/socket-table.test.ts packages/core/test/kernel/socket-flags.test.ts packages/core/test/kernel/socket-shutdown.test.ts packages/core/test/kernel/loopback.test.ts` ---- - -## 2026-03-24 21:10 PDT - US-046 -- What was implemented -- Added bounded listener backlogs to `SocketTable.listen()` and refused excess loopback connections with `ECONNREFUSED` instead of letting pending connections grow without limit -- Added kernel-managed ephemeral port assignment for `bind({ port: 0 })` in the 49152-65535 range, while preserving the original port-0 intent so external host-backed listeners still delegate ephemeral selection to the host adapter -- Updated the kernel contract and root agent guidance to capture the backlog and ephemeral-port expectations -- Quality checks run: -- `pnpm tsc --noEmit -p packages/core/tsconfig.json` ✅ -- `pnpm vitest run packages/core/test/kernel/socket-table.test.ts packages/core/test/kernel/external-listen.test.ts` ✅ -- `pnpm vitest run packages/core/test/kernel/loopback.test.ts` ✅ -- Files changed -- `.agent/contracts/kernel.md` -- `CLAUDE.md` -- `packages/core/src/kernel/socket-table.ts` -- `packages/core/test/kernel/socket-table.test.ts` -- `scripts/ralph/prd.json` -- `scripts/ralph/progress.txt` -- **Learnings for future iterations:** -- Patterns discovered -- `listen(backlog)` needs a stored per-socket backlog limit because both loopback `connect()` and the external accept pump enqueue through the same listener backlog -- Preserving `port: 0` intent separately from the kernel-assigned temporary port avoids breaking external listeners that still need host-side ephemeral assignment -- Gotchas encountered -- `AGENTS.md` is a symlink to `CLAUDE.md` at repo root, so updating root agent guidance shows up as a `CLAUDE.md` diff -- Useful context -- Focused regression coverage for this story is `packages/core/test/kernel/socket-table.test.ts`, `packages/core/test/kernel/external-listen.test.ts`, and `packages/core/test/kernel/loopback.test.ts` ---- - -## 2026-03-24 21:47 PDT - US-048 -- What was implemented -- Validated the existing `US-048` inode/VFS integration work in the dirty tree instead of adding more code this turn -- Confirmed `pnpm tsc --noEmit` and `pnpm vitest run test/kernel/inode-table.test.ts` pass in `packages/core` -- Confirmed the full `packages/core` suite is still blocked by the unrelated PTY stress failure in `test/kernel/resource-exhaustion.test.ts` (`single large write (1MB+) — immediate EAGAIN, no partial buffering`, assertion at line 270) -- Checked recent branch CI history with `gh run list`; recent PR runs on `ralph/kernel-consolidation` were already failing before this story was ready to commit -- Files changed -- `scripts/ralph/progress.txt` -- **Learnings for future iterations:** -- Patterns discovered -- When a full-package gate is already red, record both the focused story checks and the first failing broad-suite test so the next iteration can separate story regressions from branch-wide blockers quickly -- Gotchas encountered -- `US-048` appears implementation-complete locally, but it should not be committed while `packages/core` is still red on the unrelated PTY resource-exhaustion test -- Useful context -- Current green checks: `pnpm tsc --noEmit` and `pnpm vitest run test/kernel/inode-table.test.ts` from `packages/core`; current blocking check: `pnpm vitest run` ---- - -## 2026-03-24 21:55 PDT - US-048 -- What was implemented -- Completed the `US-048` inode/VFS integration by wiring `kernel.inodeTable` into `KernelImpl` and `InMemoryFileSystem`, tracking stable inode IDs through file creation, stat, hard links, unlink, and last-FD cleanup -- Updated kernel FD lifecycle paths to keep inode-backed access alive after unlink via `FileDescription.inode`, including read/write, pread/pwrite, seek/stat, dup2 replacement, inherited FD overrides, and whole-process teardown -- Added inode integration coverage for real `ino`/`nlink`, deferred unlink readability, last-close cleanup, and `pwrite` on unlinked open files -- Unblocked package quality gates with a type-only isolate-runtime globals declaration fix and a PTY raw-mode bulk-write fix so oversized writes with `icrnl` enabled fail atomically with `EAGAIN` -- Quality checks run -- `pnpm --dir packages/core run check-types` ✅ -- `pnpm --dir packages/core test` ✅ -- Files changed -- `.agent/contracts/kernel.md` -- `AGENTS.md` -- `packages/core/isolate-runtime/src/common/runtime-globals.d.ts` -- `packages/core/src/kernel/kernel.ts` -- `packages/core/src/kernel/pty.ts` -- `packages/core/src/kernel/types.ts` -- `packages/core/src/shared/in-memory-fs.ts` -- `packages/core/test/kernel/inode-table.test.ts` -- `scripts/ralph/prd.json` -- `scripts/ralph/progress.txt` -- **Learnings for future iterations:** -- Patterns discovered -- `KernelImpl` needs access to the raw `InMemoryFileSystem` alongside the wrapped VFS so open FDs can keep reading, writing, and stat'ing by inode after pathname removal -- File-description cleanup is broader than `fdClose()`; `dup2()` replacement, stdio overrides during spawn, and process-table teardown all need inode refcount release when a shared description reaches `refCount === 0` -- `InMemoryFileSystem.reindexInodes()` must preserve shared inode identity across hard links when rebinding an existing filesystem to the kernel-owned inode table -- Gotchas encountered -- The package `check-types` gate also covers `isolate-runtime`, so missing runtime-global declarations can block kernel stories even when `packages/core/src` itself typechecks -- PTY raw mode still respects `icrnl`; bulk-write fast paths must keep translation and buffer-limit enforcement atomic to avoid partial buffering on `EAGAIN` -- Useful context -- Full `packages/core` now passes again, including the previously failing `test/kernel/resource-exhaustion.test.ts` ---- - -## 2026-03-24 22:02 PDT - US-049 -- What was implemented -- Added synthetic `.` and `..` entries to `InMemoryFileSystem` directory listings, with optional inode metadata on `VirtualDirEntry` so self/parent entries can carry the correct directory identity -- Added focused inode/VFS tests for `/tmp` listings, self/parent inode numbers, and root `..` behavior -- Filtered those POSIX-only entries back out in the Node bridge `fsReadDir` handler so sandbox `fs.readdir()` keeps Node-compatible output -- Added a Node bridge regression test covering the filter -- Updated the kernel contract for the in-memory VFS directory-listing rule -- Files changed -- `.agent/contracts/kernel.md` -- `packages/core/src/kernel/vfs.ts` -- `packages/core/src/shared/in-memory-fs.ts` -- `packages/core/test/kernel/inode-table.test.ts` -- `packages/nodejs/src/bridge-handlers.ts` -- `packages/nodejs/test/kernel-resource-bridge.test.ts` -- `scripts/ralph/prd.json` -- `scripts/ralph/progress.txt` -- Quality checks: `pnpm tsc --noEmit -p packages/core/tsconfig.json` passed; `pnpm vitest run packages/core/test/kernel/inode-table.test.ts` passed; `pnpm tsc --noEmit -p packages/nodejs/tsconfig.json` passed; `pnpm vitest run packages/nodejs/test/kernel-resource-bridge.test.ts` passed; extra integration check `pnpm vitest run packages/secure-exec/tests/kernel/vfs-consistency.test.ts` failed in pre-existing cross-runtime VFS coverage (`expected '' to contain 'hello'` in `kernel write visible to Node`) -- **Learnings for future iterations:** -- Patterns discovered -- `VirtualDirEntry` can grow optional metadata like `ino` without disturbing existing bridge consumers, as long as Node-facing code still only depends on `name` and `isDirectory` -- POSIX-style directory enumeration and Node `fs.readdir()` have different expectations for `.` / `..`; normalize that difference at the Node bridge boundary, not in the shared VFS -- Gotchas encountered -- Adding `.` / `..` at the VFS layer would leak into sandbox Node `fs.readdir()` unless `buildFsBridgeHandlers()` filters them before serializing directory entries -- Useful context -- Story-local green checks are the focused `packages/core` and `packages/nodejs` typecheck/test commands above; `packages/secure-exec/tests/kernel/vfs-consistency.test.ts` is still failing outside this change path and needs separate debugging ---- - -## 2026-03-24 22:28 PDT - US-052 -- What was implemented -- Added `writeWaiters`-backed blocking pipe writes in `PipeManager`, with bounded partial-progress writes, `O_NONBLOCK` handling, and wakeups on buffer drain and endpoint close -- Added focused pipe tests for full-buffer blocking, non-blocking `EAGAIN`, partial-write continuation, and blocked-writer `EPIPE` on read-end close -- Updated the kernel contract for blocking pipe write semantics and added the missing kernel `O_NONBLOCK` flag constant used by pipe descriptions -- Files changed -- `AGENTS.md` -- `.agent/contracts/kernel.md` -- `packages/core/src/kernel/pipe-manager.ts` -- `packages/core/src/kernel/types.ts` -- `packages/core/test/kernel/pipe-manager.test.ts` -- `packages/core/test/kernel/resource-exhaustion.test.ts` -- `scripts/ralph/prd.json` -- `scripts/ralph/progress.txt` -- **Learnings for future iterations:** -- Patterns discovered -- Bounded blocking writes should preserve partial progress: fill the remaining buffer capacity first, then wait only for the unwritten tail -- Pipe producer waits need wakeups from both successful reads and close/error paths, or blocked writers can hang forever after the consumer disappears -- Gotchas encountered -- `KernelInterface.fdWrite()` already allows `number | Promise`, so pipe writes can become async without widening the kernel interface -- Useful context -- Focused green checks for this story were `pnpm vitest run packages/core/test/kernel/pipe-manager.test.ts packages/core/test/kernel/resource-exhaustion.test.ts` and `pnpm tsc --noEmit` in `packages/core` ---- - -## 2026-03-24 22:42 PDT - US-053 -- What was implemented -- Added pipe poll wait queues in `PipeManager` plus a kernel-only `fdPollWait` helper so `poll()` can sleep on pipe state changes instead of spinning or timing out spuriously -- Refactored WasmVM `netPoll` to re-check all FDs in a loop, using finite timeout budgets for bounded polls and repeated `RPC_WAIT_TIMEOUT_MS` chunks for `timeout=-1` -- Updated the WasmVM worker RPC path so `netPoll` with `timeout < 0` keeps waiting across the worker's 30s guard timeout instead of returning `EIO` -- Added a pipe-backed WasmVM regression test that blocks on `poll(-1)`, writes to the pipe asynchronously, and verifies `POLLIN` wakes the poller -- Files changed -- `packages/core/src/kernel/kernel.ts` -- `packages/core/src/kernel/pipe-manager.ts` -- `packages/wasmvm/src/driver.ts` -- `packages/wasmvm/src/kernel-worker.ts` -- `packages/wasmvm/test/net-socket.test.ts` -- `scripts/ralph/prd.json` -- `scripts/ralph/progress.txt` -- Quality checks: `pnpm turbo run build --filter=@secure-exec/core --filter=@secure-exec/wasmvm` passed; `pnpm tsc --noEmit -p packages/core/tsconfig.json` passed; `pnpm tsc --noEmit -p packages/wasmvm/tsconfig.json` passed; `pnpm vitest run packages/wasmvm/test/net-socket.test.ts` passed -- **Learnings for future iterations:** -- Patterns discovered -- Cross-package WasmVM tests that import `@secure-exec/core` need the package rebuilt first or they will run stale `dist` code and miss new kernel behavior -- Pipe-backed `poll()` support works best as a generic state-change queue: wake it on writes, drains, and closes, then let the caller re-run `fdPoll()` to compute exact readiness bits -- Gotchas encountered -- Fixing `poll(-1)` only in the main-thread driver is insufficient because the worker RPC layer has its own 30s `Atomics.wait()` guard; indefinite polls need both sides to cooperate -- Useful context -- The new regression coverage lives in `packages/wasmvm/test/net-socket.test.ts` and exercises the private `_handleSyscall('netPoll')` path with a mock kernel pipe, which is enough to validate the wait/wake integration without running a full WASM program ---- - -## 2026-03-24 23:01 PDT - US-054 -- What was implemented -- Added a read-only proc pseudo-filesystem in `packages/core/src/kernel/proc-layer.ts` and mounted it during kernel init so `/proc//{fd,cwd,exe,environ}` is generated from live `ProcessTable` and `FDTableManager` state -- Added shared `/proc/self` resolution helpers and wired them into the Node kernel runtime VFS and WasmVM VFS RPC path so sandboxed processes see their own `/proc/self/*` -- Added kernel integration coverage for `/proc/self/fd` listings, `/proc/self/fd/0` readlink, `/proc/self/cwd` reads, and `/proc//environ`, then updated the kernel contract for procfs behavior -- Files changed -- `.agent/contracts/kernel.md` -- `packages/core/src/index.ts` -- `packages/core/src/kernel/index.ts` -- `packages/core/src/kernel/kernel.ts` -- `packages/core/src/kernel/proc-layer.ts` -- `packages/core/test/kernel/kernel-integration.test.ts` -- `packages/nodejs/src/kernel-runtime.ts` -- `packages/wasmvm/src/driver.ts` -- **Learnings for future iterations:** -- Patterns discovered -- The shared kernel VFS cannot infer a “current process”, so pseudo-filesystems with self-references need a split design: dynamic `/proc/` entries in core and thin runtime-side `/proc/self` rewriting where PID context exists -- Gotchas encountered -- Cross-package `@secure-exec/core` imports in `@secure-exec/nodejs` and `@secure-exec/wasmvm` typechecks will read stale exports until `pnpm turbo run build --filter=@secure-exec/core` refreshes the core package output -- Useful context -- Focused green checks for this story were `pnpm turbo run build --filter=@secure-exec/core --filter=@secure-exec/nodejs --filter=@secure-exec/wasmvm`, `pnpm tsc --noEmit -p packages/core/tsconfig.json`, `pnpm tsc --noEmit -p packages/nodejs/tsconfig.json`, `pnpm tsc --noEmit -p packages/wasmvm/tsconfig.json`, and `pnpm vitest run packages/core/test/kernel/kernel-integration.test.ts -t "/proc pseudo-filesystem"` ---- - -## 2026-03-24 23:05 PDT - US-055 -- Implemented `SA_RESETHAND` support in the kernel signal types and exports, and reset one-shot handlers to default disposition after their first delivery -- Updated `ProcessTable` signal dispatch so `SA_RESETHAND | SA_RESTART` both work, with the reset happening before pending signals are re-delivered -- Added kernel signal tests covering one-shot handler reset, second-delivery default action, and `SA_RESETHAND | SA_RESTART` restart behavior -- Files changed: `.agent/contracts/kernel.md`, `packages/core/src/kernel/index.ts`, `packages/core/src/kernel/process-table.ts`, `packages/core/src/kernel/types.ts`, `packages/core/test/kernel/signal-handlers.test.ts`, `scripts/ralph/prd.json`, `scripts/ralph/progress.txt` -- **Learnings for future iterations:** - - One-shot signal reset ordering matters: update the handler disposition before `deliverPendingSignals()` so a same-signal pending delivery does not invoke the old callback twice - - `ProcessTable.dispatchSignal()` records delivery flags before running the user handler, so combined flags like `SA_RESETHAND | SA_RESTART` can affect both the interrupted syscall and the post-handler disposition reset - - Kernel signal behavior is contract-backed in `.agent/contracts/kernel.md`; signal semantic changes should update that contract alongside the code ---- - -## 2026-03-24 23:26 PDT - US-056 -- What was implemented -- Finished the remaining Node.js ESM parity gap by propagating async entrypoint promise rejections out of the native V8 runtime, fixing dynamic import missing-module/syntax/evaluation failures to produce non-zero exec results, and making dynamic import resolution use `"import"` conditions without breaking `require()` condition routing -- Regenerated the isolate-runtime bundle, updated the Node runtime contract and compatibility/friction docs to record the corrected ESM behavior, and marked the story complete in the PRD -- Files changed -- `.agent/contracts/node-runtime.md` -- `docs-internal/friction.md` -- `docs/nodejs-compatibility.mdx` -- `native/v8-runtime/src/execution.rs` -- `native/v8-runtime/src/isolate.rs` -- `native/v8-runtime/src/snapshot.rs` -- `packages/core/isolate-runtime/src/inject/setup-dynamic-import.ts` -- `packages/core/src/generated/isolate-runtime.ts` -- `packages/nodejs/src/bridge-handlers.ts` -- `scripts/ralph/prd.json` -- `scripts/ralph/progress.txt` -- **Learnings for future iterations:** - - Patterns discovered - - Native V8 runtime package tests use the release binary when it exists, so native runtime changes need a release rebuild or the focused Vitest slice will keep exercising stale host code - - Isolate-runtime source changes only take effect in package tests after regenerating `packages/core/src/generated/isolate-runtime.ts` - - Gotchas encountered - - Arrow-function bridge handlers do not provide a safe `arguments` object for extra dispatch parameters; accept optional bridge args explicitly when resolution mode needs to cross the boundary - - Useful context - - Focused validation for this story passed with `cargo test execution::tests::v8_consolidated_tests -- --nocapture`, `pnpm turbo run build --filter=@secure-exec/core --filter=@secure-exec/nodejs --filter=secure-exec`, `pnpm run check-types` in `packages/core`, `packages/nodejs`, and `packages/secure-exec`, plus `pnpm exec vitest run packages/secure-exec/tests/runtime-driver/node/index.test.ts -t "dynamic import|built-in ESM imports|package exports|type module"` +- Vendored Node conformance helpers in `packages/secure-exec/tests/node-conformance/common/` must include sibling support files like `countdown.js`; missing common shims can masquerade as runtime regressions across many unrelated tests +- `http.Agent` pool progress under `maxTotalSockets` depends on evicting idle free sockets from other origins when the total socket budget is exhausted; otherwise cross-origin queues can deadlock even if per-origin logic looks correct +- Dispatch-multiplexed bridge globals must preserve `undefined` positional args plus host error `name`/`code`; raw `JSON.stringify(args)` and message-only errors break Node conformance in subtle ways +- For stateful Node crypto APIs that browserify only partially emulates, bridge the real host objects by session id and forward methods instead of trying to recreate DH/ECDH state inside the isolate +- Global WebCrypto is split across `packages/nodejs/src/bridge/process.ts` and `packages/core/isolate-runtime/src/inject/require-setup.ts`; keep `globalThis.crypto` and `require('crypto').webcrypto` aliased to the same object or conformance diverges +- `CryptoKey` is bridged from both `process.ts` and `require-setup.ts`; keep their `instanceof` semantics aligned (for example via shared `Symbol.hasInstance` behavior) or `KeyObject.toCryptoKey()` conformance breaks even when key data is correct +- When the session cipher bridge is unavailable, crypto constructor-time validation still has to happen eagerly through the one-shot host bridge; otherwise Node conformance misses `createCipheriv()`/`createDecipheriv()` throw-on-construction cases +- `bridge-initial-globals.ts` intentionally removes `SharedArrayBuffer`; when vendored conformance files hard-require it, classify the expectation as a `security-constraint` instead of treating the failure as a crypto implementation bug +- When a crypto/node-conformance expectation reason looks stale, temporarily remove just that entry and rerun the exact vendored file filter; the runner prints the real sandbox stderr, which is often a missing fixture or error-shape mismatch rather than the older bridge-gap guess +- Host-side ESM packages in `packages/nodejs/src/` cannot use bare `require(...)`; standalone `dist/` execution runs them as real ESM, so bridge helpers must use static imports or `createRequire()` +- Standalone `dist/` smoke tests that spawn `node --input-type=module -e ...` should flush `process.stdout.write(...)` via a callback and then `process.exit(0)`; otherwise the verification script can print the right JSON but linger long enough for Vitest to hit its timeout +- Conformance tests live in packages/secure-exec/tests/node-conformance/ — vendored Node.js v22.14.0 test/parallel/ +- Runner is packages/secure-exec/tests/node-conformance/runner.test.ts — run with: pnpm vitest run packages/secure-exec/tests/node-conformance/runner.test.ts +- Expectations are in packages/secure-exec/tests/node-conformance/expectations.json — each entry has expected (pass/fail/skip), reason, category +- To run tests for a specific module: pnpm vitest run packages/secure-exec/tests/node-conformance/runner.test.ts -t "node/crypto" +- After changing expectations.json, regenerate report: pnpm tsx scripts/generate-node-conformance-report.ts +- Bridge code is in packages/nodejs/src/bridge/ — network.ts, fs.ts, process.ts, child-process.ts, module.ts +- Bridge handlers are in packages/nodejs/src/bridge-handlers.ts — dispatches bridge calls from isolate +- After changing packages/core/isolate-runtime/src/inject/require-setup.ts or Node bridge code, rebuild in this order: pnpm --filter @secure-exec/nodejs build, then pnpm --filter @secure-exec/core build +- Kernel is in packages/core/src/kernel/ — socket-table.ts, kernel.ts, process-table.ts, pipe-manager.ts +- VFS is packages/core/src/shared/in-memory-fs.ts +- Node driver is packages/nodejs/src/driver.ts, execution driver is packages/nodejs/src/execution-driver.ts +- Host network adapter is packages/nodejs/src/host-network-adapter.ts +- After editing bridge code, rebuild: pnpm turbo run build --filter=@secure-exec/nodejs +- Use pnpm, vitest, tsc for type checks +- Common test shims are in packages/secure-exec/tests/node-conformance/common/ — index.js, tmpdir.js, fixtures.js, crypto.js +- TLS test fixtures (certs, keys) are in packages/secure-exec/tests/node-conformance/fixtures/ — must be loaded into VFS for TLS/HTTPS tests +- Glob patterns in expectations.json match multiple tests (e.g., "test-net-*.js") — individual "pass" overrides take priority +- Expected-fail tests that suddenly pass will cause the runner to throw "now passes! Remove its expectation" +- When fixing a batch of tests, remove their entries from expectations.json so they run as genuine passes +- CLAUDE.md has the full testing policy and project conventions + +## [2026-03-25 02:24 PDT] - US-001 +- Implemented host-backed asymmetric crypto bridges for `createPublicKey`/`createPrivateKey`, `sign`/`verify`, and RSA encrypt/decrypt paths so sandbox `KeyObject` metadata and key-option bags match Node semantics. +- Fixed vendored crypto test helpers for encrypted EC key regex matching and removed stale conformance expectations for newly passing keygen tests. +- Files changed: `AGENTS.md`, `packages/nodejs/src/bridge-contract.ts`, `packages/nodejs/src/bridge-handlers.ts`, `packages/core/src/shared/bridge-contract.ts`, `packages/core/src/shared/global-exposure.ts`, `packages/core/isolate-runtime/src/inject/require-setup.ts`, `packages/core/src/generated/isolate-runtime.ts`, `packages/secure-exec/tests/node-conformance/common/crypto.js`, `packages/secure-exec/tests/node-conformance/expectations.json`, `packages/secure-exec/tests/node-conformance/conformance-report.json`, `docs/nodejs-conformance-report.mdx`, `packages/secure-exec/tests/test-suite/node/crypto.ts`, `scripts/ralph/prd.json` +- **Learnings for future iterations:** + - The conformance runner executes built `dist` artifacts, so source-only bridge changes will look ineffective until `@secure-exec/nodejs` is rebuilt and `@secure-exec/core` is rebuilt afterward. + - Browserify crypto parsing is still a gap for DER/options-bag asymmetric APIs; routing those paths through host Node crypto is more reliable than trying to normalize every key shape inside the polyfill. + - When host crypto errors cross the bridge, normalize `error.code` in the isolate if tests snapshot full Node-style error objects. +--- + +## [2026-03-25 02:50 PDT] - US-002 +- Implemented host-backed `crypto.generateKeySync`/`generateKey` and `crypto.generatePrimeSync`/`generatePrime`, fixed async `generateKeyPair` argument normalization for EdDSA/DH cases, and preserved structured bridge error metadata plus `undefined` dispatch args so keygen validation matches Node. +- Removed stale crypto conformance expectations for newly passing keygen tests and regenerated the node conformance report. +- Files changed: `.agent/contracts/node-bridge.md`, `packages/nodejs/src/bridge-contract.ts`, `packages/core/src/shared/bridge-contract.ts`, `packages/core/src/shared/global-exposure.ts`, `packages/nodejs/src/bridge/dispatch.ts`, `packages/nodejs/src/execution-driver.ts`, `packages/nodejs/src/bridge-handlers.ts`, `packages/core/isolate-runtime/src/inject/require-setup.ts`, `packages/core/src/generated/isolate-runtime.ts`, `packages/secure-exec/tests/test-suite/node/crypto.ts`, `packages/secure-exec/tests/node-conformance/expectations.json`, `packages/secure-exec/tests/node-conformance/conformance-report.json`, `docs/nodejs-conformance-report.mdx`, `scripts/ralph/prd.json` +- **Learnings for future iterations:** + - Async crypto wrappers need Node-style split behavior: synchronous throw for argument validation, callback delivery for operational keygen failures. + - Bridge dispatch bugs can masquerade as API bugs; preserve both `undefined` arguments and structured errors before debugging higher-level crypto behavior. + - When a conformance expected-fail starts passing, remove the expectation first and rerun the exact file filter to confirm it is a genuine pass before regenerating the report. +--- + +## [2026-03-25 03:01 PDT] - US-003 +- Implemented host-backed Diffie-Hellman and ECDH session bridging, including `createDiffieHellman`, `createDiffieHellmanGroup`/`getDiffieHellman`, `createECDH`, and stateless `crypto.diffieHellman()` so key generation, shared-secret derivation, and encoding handling use native Node crypto semantics. +- Added focused runtime coverage for DH group exchange and stateless X25519 key agreement, removed the stale `test-crypto-dh-padding.js` conformance expectation, and regenerated the conformance report. +- Files changed: `.agent/contracts/node-bridge.md`, `packages/nodejs/src/bridge-contract.ts`, `packages/core/src/shared/bridge-contract.ts`, `packages/core/src/shared/global-exposure.ts`, `packages/nodejs/src/bridge-handlers.ts`, `packages/core/isolate-runtime/src/inject/require-setup.ts`, `packages/core/src/generated/isolate-runtime.ts`, `packages/secure-exec/tests/test-suite/node/crypto.ts`, `packages/secure-exec/tests/node-conformance/expectations.json`, `packages/secure-exec/tests/node-conformance/conformance-report.json`, `docs/nodejs-conformance-report.mdx`, `scripts/ralph/prd.json` +- **Learnings for future iterations:** + - Host-managed crypto objects with mutable internal state are safer to expose through a session-id bridge than by trying to mirror their state in sandbox JavaScript. + - `crypto.getDiffieHellman()` returns a different host type than `createDiffieHellman()`, so shared bridge maps should type for both `DiffieHellman` and `DiffieHellmanGroup`. + - Rebuilding `@secure-exec/nodejs` is what refreshes `packages/core/src/generated/isolate-runtime.ts`; commit the generated manifest alongside `require-setup.ts` changes. +--- + +## [2026-03-25 03:12 PDT] - US-004 +- Replaced the stub global WebCrypto surface with a class-based bridge in `packages/nodejs/src/bridge/process.ts`, including `Crypto`, `SubtleCrypto`, and `CryptoKey` constructor semantics, receiver validation, host-backed subtle method forwarding, and Node-compatible `getRandomValues`/`randomUUID` behavior. +- Updated the isolate bootstrap to alias `require('crypto').webcrypto` to the same global WebCrypto object, added focused runtime coverage for the global alias/receiver path, removed stale conformance expectations for newly passing global WebCrypto tests, and regenerated the node conformance report. +- Files changed: `.agent/contracts/node-bridge.md`, `packages/nodejs/src/bridge/process.ts`, `packages/core/isolate-runtime/src/inject/require-setup.ts`, `packages/core/src/generated/isolate-runtime.ts`, `packages/secure-exec/tests/test-suite/node/crypto.ts`, `packages/secure-exec/tests/node-conformance/expectations.json`, `packages/secure-exec/tests/node-conformance/conformance-report.json`, `docs/nodejs-conformance-report.mdx`, `scripts/ralph/prd.json` +- **Learnings for future iterations:** + - WebCrypto conformance splits between the always-present global bridge (`process.ts`) and the `require('crypto')` overlay (`require-setup.ts`); fixing only one side leaves the other test family broken. + - The conformance runner treats newly passing expected-fails as hard failures, so expectation cleanup is part of the implementation, not a follow-up task. + - `getRandomValues` needs both Node-style detached-receiver validation and DOMException-shaped argument errors; plain object methods or generic `TypeError`s are not enough. +--- + +## [2026-03-25 03:43 PDT] - US-005 +- Implemented host-backed WebCrypto sign/verify, ECDH deriveBits/deriveKey, and AES-KW wrapKey/unwrapKey paths in the crypto bridge, including algorithm normalization for ECDH public keys and label/context-like binary inputs. +- Aligned `CryptoKey` interoperability across the global bridge and `require('crypto')` overlay so `KeyObject.toCryptoKey()` produces keys that satisfy `instanceof CryptoKey` and round-trip through `KeyObject.from()`, then updated crypto conformance expectations/report outputs. +- Files changed: `packages/nodejs/src/bridge-handlers.ts`, `packages/nodejs/src/bridge/process.ts`, `packages/core/isolate-runtime/src/inject/require-setup.ts`, `packages/core/src/generated/isolate-runtime.ts`, `packages/secure-exec/tests/test-suite/node/crypto.ts`, `packages/secure-exec/tests/node-conformance/expectations.json`, `packages/secure-exec/tests/node-conformance/conformance-report.json`, `docs/nodejs-conformance-report.mdx`, `scripts/ralph/prd.json` +- **Learnings for future iterations:** + - `KeyObject.toCryptoKey()` conformance can fail on constructor identity alone; matching serialized key data is not sufficient if the global and module `CryptoKey` classes diverge. + - When a crypto conformance test starts running instead of self-skipping, convert stale `vacuous-skip` expectations to a specific `implementation-gap` or `unsupported-module` reason immediately so the filtered suite stays actionable. + - For WebCrypto derivation algorithms, normalize nested key references and binary algorithm fields before bridge dispatch or host Node crypto receives shape-mismatched payloads. +--- +## [2026-03-25 04:04 PDT] - US-006 +- Replaced the crypto hash and cipher overlays with `stream.Transform`-compatible wrappers, including callable `Hash`/`Cipheriv`/`Decipheriv` constructors, constructor-time host validation, buffered CCM/AAD handling, and host-backed auth-tag propagation for both GCM and CCM paths. +- Tightened crypto validation behavior by normalizing Node-style `ERR_INVALID_ARG_TYPE` / `ERR_OUT_OF_RANGE` pbkdf2 errors, restoring `crypto.getFips()` in the overlay, and cleaning stale conformance expectations for newly passing crypto stream/DH/ECB cases while keeping `SharedArrayBuffer`-driven pbkdf2 coverage as an explicit security-constraint expectation. +- Files changed: `.agent/contracts/node-bridge.md`, `packages/core/isolate-runtime/src/inject/require-setup.ts`, `packages/core/src/generated/isolate-runtime.ts`, `packages/core/src/shared/bridge-contract.ts`, `packages/nodejs/src/bridge-contract.ts`, `packages/nodejs/src/bridge-handlers.ts`, `packages/secure-exec/tests/test-suite/node/crypto.ts`, `packages/secure-exec/tests/node-conformance/expectations.json`, `packages/secure-exec/tests/node-conformance/conformance-report.json`, `docs/nodejs-conformance-report.mdx`, `scripts/ralph/prd.json`, `scripts/ralph/progress.txt` +- **Learnings for future iterations:** + - Cipher constructor assertions in Node’s vendored tests still need eager host validation even when the real streaming session bridge is unavailable; a validate-only call through the one-shot bridge keeps those semantics aligned without buffering ciphertext early. + - CCM support needs buffered AAD/authTagLength handling in the isolate wrapper plus AEAD-aware auth-tag serialization on the host; fixing only one side leaves `getAuthTag()` and stream-mode CCM tests broken. +- `SharedArrayBuffer` is removed by design in the sandbox bootstrap, so conformance failures that stem from that global disappearing should be tracked as `security-constraint` expectations rather than crypto bridge regressions. +--- +## [2026-03-25 04:10 PDT] - US-007 +- Audited the remaining crypto and webcrypto conformance expectations, replaced stale placeholder reasons with specific reproducible failure causes, and kept the vacuous-skip / requires-v8-flags classifications aligned with the current runner behavior. +- Regenerated the node conformance JSON/docs outputs after the expectation cleanup and marked the crypto cleanup story complete in the PRD. +- Files changed: `packages/secure-exec/tests/node-conformance/expectations.json`, `scripts/ralph/prd.json`, `scripts/ralph/progress.txt` +- **Learnings for future iterations:** + - Several remaining crypto failures are currently fixture-VFS gaps (`/test/fixtures/*.pem`, `.crt`, `.spkac`), so expectation reasons should name the missing fixture path instead of blaming a generic bridge gap. + - The vendored runner is the quickest way to verify expectation accuracy: removing one entry and rerunning the exact file gives a concrete stderr snippet that can be copied into the reason. + - Module-overlay drift still shows up in conformance as missing APIs like `require('crypto').randomUUID` and `crypto.hkdf`, even when related globals or bridge handlers exist elsewhere. +--- +## [2026-03-25 04:26 PDT] - US-008 +- Implemented the standalone bootstrap fix by removing the last dist-broken bare `require` paths from both the isolate bootstrap and the host bridge handlers, so `NodeRuntime.exec()` and kernel-managed `node` processes work from the built `packages/secure-exec/dist/index.js` entrypoint. +- Added a dist-based regression smoke test that spawns a real host `node --input-type=module` process, verifies `runtime.exec()` emits `hello` and supports `require("node:fs")`, and verifies `kernel.spawn("node", ["-e", "console.log(1)"])` captures stdout. +- Files changed: `packages/core/isolate-runtime/src/inject/require-setup.ts`, `packages/core/src/generated/isolate-runtime.ts`, `packages/nodejs/src/bridge-handlers.ts`, `packages/secure-exec/tests/runtime-driver/node/standalone-dist-smoke.test.ts`, `scripts/ralph/prd.json`, `scripts/ralph/progress.txt` +- **Learnings for future iterations:** + - Standalone `dist/` verification has to exercise the built package from a real host `node` process; vitest-importing source files can hide ESM-only failures in host bridge code. + - `packages/nodejs/src/bridge-handlers.ts` runs on the host as ESM, so any leftover `require("@secure-exec/core")` calls will only fail once the published-style `dist/` path is used. + - When a spawned verification script only needs to report pass/fail state, explicitly flushing stdout and exiting keeps the smoke test focused on bootstrap behavior instead of shared-runtime process lifetime. +--- +## [2026-03-25 04:31 PDT] - US-009 +- Verified the underlying `runtime.run()` export capture issue is already fixed on the current branch and added standalone `dist/` regression coverage for CommonJS object/scalar/nested `module.exports` plus ESM named exports. +- Files changed: `packages/secure-exec/tests/runtime-driver/node/standalone-dist-smoke.test.ts`, `scripts/ralph/prd.json`, `scripts/ralph/progress.txt` +- **Learnings for future iterations:** + - `US-008`'s standalone bootstrap fix also restored `runtime.run()` export capture from `packages/secure-exec/dist/index.js`; the missing piece was regression coverage, not another runtime patch. + - A standalone smoke that validates multiple `run()` calls should flush stdout through the write callback and exit explicitly after `runtime.terminate()` to keep Vitest from timing out on a lingering host process. + - This test area is best exercised against built `dist/` imports, because source-level Vitest coverage already passes and does not prove publish-style behavior. +--- +## [2026-03-25 04:59 PDT] - US-010 +- Implemented a real `http.Agent` pool in the bridge with Node-style `getName()`, `requests`/`sockets`/`freeSockets` bookkeeping, `_http_agent` module exposure, destroyed-socket replacement, `maxTotalSockets` enforcement via idle-socket eviction, callable `http.Server(...)`, and idle free-socket timeout cleanup. +- Added vendored `common/countdown.js`, added runtime-driver coverage for `_http_agent` aliasing and destroyed keepalive sockets, removed stale conformance expectations for newly passing HTTP agent files, and regenerated the node conformance report. +- Files changed: `.agent/contracts/node-bridge.md`, `packages/nodejs/src/bridge/network.ts`, `packages/core/isolate-runtime/src/inject/require-setup.ts`, `packages/core/src/generated/isolate-runtime.ts`, `packages/core/src/generated/polyfills.ts`, `packages/secure-exec/tests/runtime-driver/node/index.test.ts`, `packages/secure-exec/tests/node-conformance/common/countdown.js`, `packages/secure-exec/tests/node-conformance/expectations.json`, `packages/secure-exec/tests/node-conformance/conformance-report.json`, `docs/nodejs-conformance-report.mdx`, `scripts/ralph/progress.txt` +- **Learnings for future iterations:** + - Vendored HTTP conformance tests inspect internal agent state directly, so the bridge has to preserve `request.reusedSocket`, listener counts, and visible pool objects, not just user-facing request success. + - Loopback server responses in the bridge are serialized as base64; `IncomingMessage` must honor the serialized `bodyEncoding`, not only an `x-body-encoding` header, or higher-level HTTP tests read encoded text and fail in misleading ways. + - `test-http-agent-maxsockets-respected.js` still times out under the full conformance runner even after the pool fixes; the remaining gap looks runner/teardown-related rather than a missing `maxSockets` queue itself, so this story should stay open until that vendored file is green. ---