
Conversation

@TooTallNate TooTallNate commented Dec 22, 2025

Fix stream serialization to resolve when users release locks instead of waiting for streams to close, preventing Vercel functions from hanging.

What changed?

  • Implemented a polling mechanism to detect when stream locks are released
  • Added a `flushablePipe` function that resolves in either of two scenarios:
    1. When the stream completes normally (close/error)
    2. When the user releases their lock AND all pending writes are flushed
  • Created a state tracking system to monitor pending operations and lock status (sketched below)
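
For reference, a minimal sketch of what that tracked state might look like, inferred from the fields referenced in the code excerpts and tests later in this thread (`pendingOps`, `doneResolved`, `streamEnded`, plus the resolver trio from `withResolvers`); the actual definitions in `flushable-stream.ts` may differ:

```ts
import { withResolvers } from '@workflow/utils';

// Sketch only: field names are taken from the excerpts reviewed below,
// not copied verbatim from flushable-stream.ts.
interface FlushableStreamState {
  promise: Promise<void>; // settles once the step may complete
  resolve: () => void;
  reject: (err: unknown) => void;
  pendingOps: number; // writes currently in flight through the pump
  doneResolved: boolean; // true once `promise` has settled
  streamEnded: boolean; // true once the source stream closed or errored
}

function createFlushableState(): FlushableStreamState {
  const { promise, resolve, reject } = withResolvers<void>();
  return {
    promise,
    resolve,
    reject,
    pendingOps: 0,
    doneResolved: false,
    streamEnded: false,
  };
}
```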

How to test?

  1. Create a workflow step that incrementally writes to a stream:

     ```ts
     // `stream` is assumed to be the WritableStream the step exposes
     const writer = stream.getWriter();
     await writer.write(data1);
     writer.releaseLock(); // Step should complete here without waiting
     ```
  2. Verify the step completes immediately after lock release rather than hanging

  3. Run the new test cases that verify both lock release and normal stream closure behaviors

Why make this change?

This fixes an issue where Vercel functions would hang when users incrementally write to streams within steps. Previously, the system would wait for the stream to fully close before resolving, but many users follow a pattern where they write data and release the lock without explicitly closing the stream. This change allows steps to complete as soon as the user releases the lock and all pending writes are flushed, which is the expected behavior in most streaming scenarios.

…f waiting for stream to close

This prevents Vercel functions from hanging when users incrementally write to streams within steps (e.g., `await writer.write(data); writer.releaseLock()`). Uses a polling approach to detect when the stream lock is released and all pending writes are flushed.
changeset-bot bot commented Dec 22, 2025

🦋 Changeset detected

Latest commit: a298e32

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 12 packages

| Name | Type |
| --- | --- |
| @workflow/core | Patch |
| @workflow/builders | Patch |
| @workflow/cli | Patch |
| @workflow/next | Patch |
| @workflow/nitro | Patch |
| @workflow/web-shared | Patch |
| workflow | Patch |
| @workflow/astro | Patch |
| @workflow/sveltekit | Patch |
| @workflow/world-testing | Patch |
| @workflow/nuxt | Patch |
| @workflow/ai | Patch |


github-actions bot commented Dec 22, 2025

🧪 E2E Test Results

Some tests failed

Summary

| Suite | Passed | Failed | Skipped | Total |
| --- | --- | --- | --- | --- |
| ✅ ▲ Vercel Production | 286 | 0 | 11 | 297 |
| ✅ 💻 Local Development | 262 | 0 | 8 | 270 |
| ✅ 📦 Local Production | 262 | 0 | 8 | 270 |
| ✅ 🐘 Local Postgres | 262 | 0 | 8 | 270 |
| ✅ 🪟 Windows | 27 | 0 | 0 | 27 |
| ❌ 🌍 Community Worlds | 109 | 11 | 0 | 120 |
| Total | 1208 | 11 | 35 | 1254 |

❌ Failed Tests

🌍 Community Worlds (11 failed)

mongodb (1 failed):

  • webhookWorkflow

redis (1 failed):

  • webhookWorkflow

starter (8 failed):

  • addTenWorkflow
  • addTenWorkflow
  • retryAttemptCounterWorkflow
  • crossFileErrorWorkflow - stack traces work across imported modules
  • hookCleanupTestWorkflow - hook token reuse after workflow completion
  • stepFunctionPassingWorkflow - step function references can be passed as arguments (without closure vars)
  • stepFunctionWithClosureWorkflow - step function with closure variables passed as argument
  • spawnWorkflowFromStepWorkflow - spawning a child workflow using start() inside a step

turso (1 failed):

  • webhookWorkflow

Details by Category

✅ ▲ Vercel Production

| App | Passed | Failed | Skipped |
| --- | --- | --- | --- |
| ✅ astro | 26 | 0 | 1 |
| ✅ example | 26 | 0 | 1 |
| ✅ express | 26 | 0 | 1 |
| ✅ fastify | 26 | 0 | 1 |
| ✅ hono | 26 | 0 | 1 |
| ✅ nextjs-turbopack | 26 | 0 | 1 |
| ✅ nextjs-webpack | 26 | 0 | 1 |
| ✅ nitro | 26 | 0 | 1 |
| ✅ nuxt | 26 | 0 | 1 |
| ✅ sveltekit | 26 | 0 | 1 |
| ✅ vite | 26 | 0 | 1 |

✅ 💻 Local Development

| App | Passed | Failed | Skipped |
| --- | --- | --- | --- |
| ✅ astro-stable | 26 | 0 | 1 |
| ✅ express-stable | 26 | 0 | 1 |
| ✅ fastify-stable | 26 | 0 | 1 |
| ✅ hono-stable | 26 | 0 | 1 |
| ✅ nextjs-turbopack-stable | 27 | 0 | 0 |
| ✅ nextjs-webpack-stable | 27 | 0 | 0 |
| ✅ nitro-stable | 26 | 0 | 1 |
| ✅ nuxt-stable | 26 | 0 | 1 |
| ✅ sveltekit-stable | 26 | 0 | 1 |
| ✅ vite-stable | 26 | 0 | 1 |

✅ 📦 Local Production

| App | Passed | Failed | Skipped |
| --- | --- | --- | --- |
| ✅ astro-stable | 26 | 0 | 1 |
| ✅ express-stable | 26 | 0 | 1 |
| ✅ fastify-stable | 26 | 0 | 1 |
| ✅ hono-stable | 26 | 0 | 1 |
| ✅ nextjs-turbopack-stable | 27 | 0 | 0 |
| ✅ nextjs-webpack-stable | 27 | 0 | 0 |
| ✅ nitro-stable | 26 | 0 | 1 |
| ✅ nuxt-stable | 26 | 0 | 1 |
| ✅ sveltekit-stable | 26 | 0 | 1 |
| ✅ vite-stable | 26 | 0 | 1 |

✅ 🐘 Local Postgres

| App | Passed | Failed | Skipped |
| --- | --- | --- | --- |
| ✅ astro-stable | 26 | 0 | 1 |
| ✅ express-stable | 26 | 0 | 1 |
| ✅ fastify-stable | 26 | 0 | 1 |
| ✅ hono-stable | 26 | 0 | 1 |
| ✅ nextjs-turbopack-stable | 27 | 0 | 0 |
| ✅ nextjs-webpack-stable | 27 | 0 | 0 |
| ✅ nitro-stable | 26 | 0 | 1 |
| ✅ nuxt-stable | 26 | 0 | 1 |
| ✅ sveltekit-stable | 26 | 0 | 1 |
| ✅ vite-stable | 26 | 0 | 1 |

✅ 🪟 Windows

| App | Passed | Failed | Skipped |
| --- | --- | --- | --- |
| ✅ nextjs-turbopack | 27 | 0 | 0 |

❌ 🌍 Community Worlds

| App | Passed | Failed | Skipped |
| --- | --- | --- | --- |
| ✅ mongodb-dev | 3 | 0 | 0 |
| ❌ mongodb | 26 | 1 | 0 |
| ✅ redis-dev | 3 | 0 | 0 |
| ❌ redis | 26 | 1 | 0 |
| ✅ starter-dev | 3 | 0 | 0 |
| ❌ starter | 19 | 8 | 0 |
| ✅ turso-dev | 3 | 0 | 0 |
| ❌ turso | 26 | 1 | 0 |

📋 View full workflow run

vercel bot commented Dec 22, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project Deployment Review Updated (UTC)
example-nextjs-workflow-turbopack Ready Ready Preview, Comment Dec 22, 2025 11:48pm
example-nextjs-workflow-webpack Ready Ready Preview, Comment Dec 22, 2025 11:48pm
example-workflow Ready Ready Preview, Comment Dec 22, 2025 11:48pm
workbench-astro-workflow Ready Ready Preview, Comment Dec 22, 2025 11:48pm
workbench-express-workflow Ready Ready Preview, Comment Dec 22, 2025 11:48pm
workbench-fastify-workflow Ready Ready Preview, Comment Dec 22, 2025 11:48pm
workbench-hono-workflow Ready Ready Preview, Comment Dec 22, 2025 11:48pm
workbench-nitro-workflow Ready Ready Preview, Comment Dec 22, 2025 11:48pm
workbench-nuxt-workflow Ready Ready Preview, Comment Dec 22, 2025 11:48pm
workbench-sveltekit-workflow Ready Ready Preview, Comment Dec 22, 2025 11:48pm
workbench-vite-workflow Ready Ready Preview, Comment Dec 22, 2025 11:48pm
workflow-docs Ready Ready Preview, Comment Dec 22, 2025 11:48pm

github-actions bot commented Dec 22, 2025

📊 Benchmark Results

📈 Comparing against baseline from main branch. Green 🟢 = faster, Red 🔺 = slower.

workflow with no steps

💻 Local Development

| World | Framework | Workflow Time | Wall Time | Overhead | Samples | vs Fastest |
| --- | --- | --- | --- | --- | --- | --- |
| 🌐 Starter | 🥇 Next.js (Turbopack) | 0.030s (-23.1% 🟢) | 1.012s (~) | 0.983s | 10 | 1.00x |
| 💻 Local | Next.js (Turbopack) | 0.041s (~) | 1.018s (~) | 0.977s | 10 | 1.38x |
| 🌐 Redis | Next.js (Turbopack) | 0.041s (-1.4%) | 1.016s (~) | 0.975s | 10 | 1.39x |
| 💻 Local | Nitro | 0.043s (+2.9%) | 1.006s (~) | 0.963s | 10 | 1.46x |
| 💻 Local | Express | 0.044s (+1.6%) | 1.007s (~) | 0.963s | 10 | 1.48x |
| 🌐 Turso | Next.js (Turbopack) | 0.097s (-2.0%) | 1.014s (~) | 0.916s | 10 | 3.29x |
| 🌐 MongoDB | Next.js (Turbopack) | 0.101s (+37.2% 🔺) | 1.015s (~) | 0.914s | 10 | 3.42x |
| 🐘 Postgres | Express | 0.282s (-11.2% 🟢) | 1.016s (~) | 0.734s | 10 | 9.52x |
| 🐘 Postgres | Next.js (Turbopack) | 0.283s (+73.4% 🔺) | 1.019s (~) | 0.736s | 10 | 9.55x |
| 🐘 Postgres | Nitro | 0.335s (+11.1% 🔺) | 1.012s (~) | 0.677s | 10 | 11.31x |

▲ Production (Vercel)

| World | Framework | Workflow Time | Wall Time | Overhead | Samples | vs Fastest |
| --- | --- | --- | --- | --- | --- | --- |
| ▲ Vercel | 🥇 Nitro | 0.532s (+1.0%) | 1.439s (-27.4% 🟢) | 0.906s | 10 | 1.00x |
| ▲ Vercel | Express | 0.534s (-12.6% 🟢) | 1.642s (+12.4% 🔺) | 1.108s | 10 | 1.00x |
| ▲ Vercel | Next.js (Turbopack) | 0.580s (-8.5% 🟢) | 1.453s (-8.2% 🟢) | 0.873s | 10 | 1.09x |

🔍 Observability: Nitro | Express | Next.js (Turbopack)

workflow with 1 step

💻 Local Development

| World | Framework | Workflow Time | Wall Time | Overhead | Samples | vs Fastest |
| --- | --- | --- | --- | --- | --- | --- |
| 🌐 Starter | 🥇 Next.js (Turbopack) | 1.071s (-1.6%) | 2.009s (~) | 0.938s | 10 | 1.00x |
| 💻 Local | Next.js (Turbopack) | 1.102s (+0.8%) | 2.012s (~) | 0.910s | 10 | 1.03x |
| 🌐 Redis | Next.js (Turbopack) | 1.105s (+1.2%) | 2.012s (~) | 0.907s | 10 | 1.03x |
| 💻 Local | Express | 1.113s (~) | 2.007s (~) | 0.894s | 10 | 1.04x |
| 💻 Local | Nitro | 1.113s (~) | 2.006s (~) | 0.893s | 10 | 1.04x |
| 🌐 Turso | Next.js (Turbopack) | 1.303s (~) | 2.013s (~) | 0.710s | 10 | 1.22x |
| 🌐 MongoDB | Next.js (Turbopack) | 1.315s (+1.3%) | 2.014s (~) | 0.700s | 10 | 1.23x |
| 🐘 Postgres | Next.js (Turbopack) | 1.851s (-3.6%) | 2.016s (~) | 0.165s | 10 | 1.73x |
| 🐘 Postgres | Express | 2.169s (~) | 3.017s (~) | 0.849s | 10 | 2.02x |
| 🐘 Postgres | Nitro | 2.201s (+2.7%) | 3.012s (~) | 0.811s | 10 | 2.05x |

▲ Production (Vercel)

| World | Framework | Workflow Time | Wall Time | Overhead | Samples | vs Fastest |
| --- | --- | --- | --- | --- | --- | --- |
| ▲ Vercel | 🥇 Express | 2.569s (-3.8%) | 3.496s (-1.4%) | 0.927s | 10 | 1.00x |
| ▲ Vercel | Nitro | 2.642s (-3.4%) | 3.489s (-5.6% 🟢) | 0.848s | 10 | 1.03x |
| ▲ Vercel | Next.js (Turbopack) | 2.643s (-1.7%) | 3.589s (-3.0%) | 0.947s | 10 | 1.03x |

🔍 Observability: Express | Nitro | Next.js (Turbopack)

workflow with 10 sequential steps

💻 Local Development

| World | Framework | Workflow Time | Wall Time | Overhead | Samples | vs Fastest |
| --- | --- | --- | --- | --- | --- | --- |
| 🌐 Starter | 🥇 Next.js (Turbopack) | 10.461s (-1.2%) | 11.010s (~) | 0.549s | 5 | 1.00x |
| 🌐 Redis | Next.js (Turbopack) | 10.677s (~) | 11.016s (~) | 0.339s | 5 | 1.02x |
| 💻 Local | Next.js (Turbopack) | 10.680s (~) | 11.018s (~) | 0.338s | 5 | 1.02x |
| 💻 Local | Express | 10.787s (~) | 11.015s (~) | 0.227s | 5 | 1.03x |
| 💻 Local | Nitro | 10.799s (~) | 11.013s (~) | 0.214s | 5 | 1.03x |
| 🌐 Turso | Next.js (Turbopack) | 12.239s (+0.5%) | 13.026s (~) | 0.787s | 5 | 1.17x |
| 🌐 MongoDB | Next.js (Turbopack) | 12.244s (~) | 13.024s (~) | 0.780s | 5 | 1.17x |
| 🐘 Postgres | Next.js (Turbopack) | 15.275s (~) | 16.040s (~) | 0.765s | 5 | 1.46x |
| 🐘 Postgres | Nitro | 19.833s (-2.6%) | 20.433s (-2.0%) | 0.600s | 5 | 1.90x |
| 🐘 Postgres | Express | 20.473s (~) | 21.048s (~) | 0.575s | 5 | 1.96x |

▲ Production (Vercel)

| World | Framework | Workflow Time | Wall Time | Overhead | Samples | vs Fastest |
| --- | --- | --- | --- | --- | --- | --- |
| ▲ Vercel | 🥇 Next.js (Turbopack) | 20.841s (-4.4%) | 21.510s (-4.6%) | 0.669s | 5 | 1.00x |
| ▲ Vercel | Express | 21.081s (-2.5%) | 21.846s (-2.0%) | 0.765s | 5 | 1.01x |
| ▲ Vercel | Nitro | 21.167s (-1.9%) | 21.756s (-3.0%) | 0.589s | 5 | 1.02x |

🔍 Observability: Next.js (Turbopack) | Express | Nitro

Promise.all with 10 concurrent steps

💻 Local Development

| World | Framework | Workflow Time | Wall Time | Overhead | Samples | vs Fastest |
| --- | --- | --- | --- | --- | --- | --- |
| 🌐 Starter | 🥇 Next.js (Turbopack) | 1.304s (-2.7%) | 2.008s (~) | 0.704s | 15 | 1.00x |
| 🌐 Redis | Next.js (Turbopack) | 1.349s (-0.6%) | 2.010s (~) | 0.661s | 15 | 1.03x |
| 💻 Local | Next.js (Turbopack) | 1.391s (~) | 2.013s (~) | 0.622s | 15 | 1.07x |
| 💻 Local | Express | 1.401s (-0.7%) | 2.006s (~) | 0.604s | 15 | 1.07x |
| 💻 Local | Nitro | 1.412s (~) | 2.005s (~) | 0.594s | 15 | 1.08x |
| 🐘 Postgres | Next.js (Turbopack) | 1.730s (-6.2% 🟢) | 2.014s (-2.6%) | 0.284s | 15 | 1.33x |
| 🌐 MongoDB | Next.js (Turbopack) | 2.125s (~) | 3.013s (~) | 0.888s | 10 | 1.63x |
| 🌐 Turso | Next.js (Turbopack) | 2.146s (-2.2%) | 3.013s (~) | 0.868s | 10 | 1.65x |
| 🐘 Postgres | Express | 2.414s (+1.2%) | 3.016s (~) | 0.602s | 10 | 1.85x |
| 🐘 Postgres | Nitro | 2.419s (+2.9%) | 3.010s (~) | 0.591s | 10 | 1.86x |

▲ Production (Vercel)

| World | Framework | Workflow Time | Wall Time | Overhead | Samples | vs Fastest |
| --- | --- | --- | --- | --- | --- | --- |
| ▲ Vercel | 🥇 Express | 2.566s (-6.2% 🟢) | 3.584s (+1.9%) | 1.018s | 9 | 1.00x |
| ▲ Vercel | Nitro | 2.654s (-2.2%) | 3.560s (-6.6% 🟢) | 0.906s | 9 | 1.03x |
| ▲ Vercel | Next.js (Turbopack) | 2.831s (-7.2% 🟢) | 3.709s (-6.8% 🟢) | 0.878s | 9 | 1.10x |

🔍 Observability: Express | Nitro | Next.js (Turbopack)

Promise.all with 25 concurrent steps

💻 Local Development

| World | Framework | Workflow Time | Wall Time | Overhead | Samples | vs Fastest |
| --- | --- | --- | --- | --- | --- | --- |
| 💻 Local | 🥇 Nitro | 2.190s (-1.4%) | 3.153s (~) | 0.963s | 10 | 1.00x |
| 💻 Local | Next.js (Turbopack) | 2.204s (+1.9%) | 3.109s (~) | 0.905s | 10 | 1.01x |
| 💻 Local | Express | 2.215s (~) | 3.167s (~) | 0.952s | 10 | 1.01x |
| 🌐 Starter | Next.js (Turbopack) | 2.365s (-3.6%) | 3.009s (~) | 0.644s | 10 | 1.08x |
| 🐘 Postgres | Next.js (Turbopack) | 2.439s (-7.4% 🟢) | 3.024s (~) | 0.584s | 10 | 1.11x |
| 🌐 Redis | Next.js (Turbopack) | 2.473s (-0.8%) | 3.011s (~) | 0.537s | 10 | 1.13x |
| 🐘 Postgres | Nitro | 2.877s (+8.4% 🔺) | 3.111s (+2.7%) | 0.234s | 10 | 1.31x |
| 🐘 Postgres | Express | 3.119s (+6.8% 🔺) | 3.912s (+20.8% 🔺) | 0.793s | 8 | 1.42x |
| 🌐 Turso | Next.js (Turbopack) | 4.662s (~) | 5.184s (~) | 0.522s | 6 | 2.13x |
| 🌐 MongoDB | Next.js (Turbopack) | 4.761s (~) | 5.180s (~) | 0.419s | 6 | 2.17x |

▲ Production (Vercel)

| World | Framework | Workflow Time | Wall Time | Overhead | Samples | vs Fastest |
| --- | --- | --- | --- | --- | --- | --- |
| ▲ Vercel | 🥇 Next.js (Turbopack) | 3.281s (+3.3%) | 3.753s (-1.7%) | 0.471s | 8 | 1.00x |
| ▲ Vercel | Express | 3.388s (+11.3% 🔺) | 4.035s (+9.5% 🔺) | 0.647s | 8 | 1.03x |
| ▲ Vercel | Nitro | 3.513s (+13.1% 🔺) | 4.140s (+8.0% 🔺) | 0.628s | 8 | 1.07x |

🔍 Observability: Next.js (Turbopack) | Express | Nitro

Promise.race with 10 concurrent steps

💻 Local Development

| World | Framework | Workflow Time | Wall Time | Overhead | Samples | vs Fastest |
| --- | --- | --- | --- | --- | --- | --- |
| 🌐 Starter | 🥇 Next.js (Turbopack) | 1.322s (-3.7%) | 2.008s (~) | 0.685s | 15 | 1.00x |
| 🌐 Redis | Next.js (Turbopack) | 1.365s (-1.6%) | 2.011s (~) | 0.645s | 15 | 1.03x |
| 💻 Local | Next.js (Turbopack) | 1.409s (+0.8%) | 2.015s (~) | 0.606s | 15 | 1.07x |
| 💻 Local | Express | 1.410s (~) | 2.006s (~) | 0.596s | 15 | 1.07x |
| 💻 Local | Nitro | 1.431s (~) | 2.034s (+1.4%) | 0.603s | 15 | 1.08x |
| 🐘 Postgres | Next.js (Turbopack) | 1.599s (-1.7%) | 2.014s (~) | 0.415s | 15 | 1.21x |
| 🐘 Postgres | Nitro | 1.676s (-10.6% 🟢) | 2.011s (~) | 0.335s | 15 | 1.27x |
| 🐘 Postgres | Express | 1.769s (~) | 2.088s (+3.9%) | 0.319s | 15 | 1.34x |
| 🌐 MongoDB | Next.js (Turbopack) | 2.128s (~) | 3.015s (~) | 0.887s | 10 | 1.61x |
| 🌐 Turso | Next.js (Turbopack) | 2.174s (-2.0%) | 3.014s (~) | 0.841s | 10 | 1.64x |

▲ Production (Vercel)

| World | Framework | Workflow Time | Wall Time | Overhead | Samples | vs Fastest |
| --- | --- | --- | --- | --- | --- | --- |
| ▲ Vercel | 🥇 Nitro | 2.558s (-4.3%) | 3.568s (-5.5% 🟢) | 1.010s | 9 | 1.00x |
| ▲ Vercel | Express | 2.622s (-3.2%) | 3.595s (+2.5%) | 0.973s | 9 | 1.03x |
| ▲ Vercel | Next.js (Turbopack) | 2.855s (+7.3% 🔺) | 3.680s (~) | 0.825s | 9 | 1.12x |

🔍 Observability: Nitro | Express | Next.js (Turbopack)

Promise.race with 25 concurrent steps

💻 Local Development

| World | Framework | Workflow Time | Wall Time | Overhead | Samples | vs Fastest |
| --- | --- | --- | --- | --- | --- | --- |
| 💻 Local | 🥇 Nitro | 2.194s (-8.2% 🟢) | 3.132s (-6.0% 🟢) | 0.938s | 10 | 1.00x |
| 💻 Local | Express | 2.209s (-1.6%) | 3.173s (~) | 0.964s | 10 | 1.01x |
| 💻 Local | Next.js (Turbopack) | 2.303s (~) | 3.240s (~) | 0.937s | 10 | 1.05x |
| 🌐 Starter | Next.js (Turbopack) | 2.388s (-2.7%) | 3.008s (~) | 0.619s | 10 | 1.09x |
| 🐘 Postgres | Next.js (Turbopack) | 2.407s (-9.4% 🟢) | 3.021s (~) | 0.614s | 10 | 1.10x |
| 🌐 Redis | Next.js (Turbopack) | 2.446s (-1.7%) | 3.010s (~) | 0.565s | 10 | 1.11x |
| 🐘 Postgres | Nitro | 2.692s (+14.9% 🔺) | 3.020s (~) | 0.328s | 10 | 1.23x |
| 🐘 Postgres | Express | 3.004s (+10.3% 🔺) | 3.358s (+11.0% 🔺) | 0.354s | 9 | 1.37x |
| 🌐 Turso | Next.js (Turbopack) | 4.726s (-0.9%) | 5.184s (~) | 0.459s | 6 | 2.15x |
| 🌐 MongoDB | Next.js (Turbopack) | 4.806s (+2.1%) | 5.181s (~) | 0.375s | 6 | 2.19x |

▲ Production (Vercel)

| World | Framework | Workflow Time | Wall Time | Overhead | Samples | vs Fastest |
| --- | --- | --- | --- | --- | --- | --- |
| ▲ Vercel | 🥇 Nitro | 2.978s (~) | 3.511s (-4.6%) | 0.533s | 9 | 1.00x |
| ▲ Vercel | Next.js (Turbopack) | 3.128s (-2.2%) | 3.692s (-4.8%) | 0.564s | 9 | 1.05x |
| ▲ Vercel | Express | 3.344s (+3.8%) | 4.054s (+6.6% 🔺) | 0.711s | 8 | 1.12x |

🔍 Observability: Nitro | Next.js (Turbopack) | Express

Stream Benchmarks (includes TTFB metrics)
workflow with stream

💻 Local Development

| World | Framework | Workflow Time | TTFB | Slurp | Wall Time | Overhead | Samples | vs Fastest |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 🌐 Starter | 🥇 Next.js (Turbopack) | 0.093s (-28.3% 🟢) | 1.005s (~) | 0.000s (+Infinity% 🔺) | 1.011s (~) | 0.918s | 10 | 1.00x |
| 💻 Local | Next.js (Turbopack) | 0.140s (-4.4%) | 1.003s (~) | 0.016s (-0.6%) | 1.027s (~) | 0.887s | 10 | 1.51x |
| 🌐 Redis | Next.js (Turbopack) | 0.144s (+0.8%) | 1.004s (~) | 0.000s (+Infinity% 🔺) | 1.012s (~) | 0.868s | 10 | 1.55x |
| 💻 Local | Express | 0.178s (+1.2%) | 0.993s (~) | 0.017s (+8.2% 🔺) | 1.024s (~) | 0.846s | 10 | 1.92x |
| 💻 Local | Nitro | 0.178s (+1.6%) | 0.992s (~) | 0.014s (-10.1% 🟢) | 1.020s (~) | 0.842s | 10 | 1.92x |
| 🌐 MongoDB | Next.js (Turbopack) | 0.464s (-9.5% 🟢) | 0.985s (+5.3% 🔺) | 0.000s (+Infinity% 🔺) | 1.013s (~) | 0.549s | 10 | 5.01x |
| 🌐 Turso | Next.js (Turbopack) | 0.496s (+1.2%) | 0.947s (-1.5%) | 0.000s (-100.0% 🟢) | 1.013s (~) | 0.517s | 10 | 5.35x |
| 🐘 Postgres | Nitro | 1.353s (-40.4% 🟢) | 1.688s (-39.0% 🟢) | 0.000s (-100.0% 🟢) | 2.011s (-33.3% 🟢) | 0.658s | 10 | 14.59x |
| 🐘 Postgres | Next.js (Turbopack) | 1.394s (+17.7% 🔺) | 1.777s (+4.0%) | 0.000s (NaN%) | 2.018s (+10.9% 🔺) | 0.624s | 10 | 15.04x |
| 🐘 Postgres | Express | 2.245s (-3.1%) | 2.806s (+2.9%) | 0.000s (~) | 3.017s (~) | 0.772s | 10 | 24.22x |

▲ Production (Vercel)

| World | Framework | Workflow Time | TTFB | Slurp | Wall Time | Overhead | Samples | vs Fastest |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ▲ Vercel | 🥇 Next.js (Turbopack) | 2.478s (-5.8% 🟢) | 3.403s (+6.5% 🔺) | 0.673s (+17.3% 🔺) | 4.441s (+5.9% 🔺) | 1.964s | 10 | 1.00x |
| ▲ Vercel | Express | 2.709s (-8.8% 🟢) | 3.236s (~) | 0.516s (-27.9% 🟢) | 4.250s (-5.4% 🟢) | 1.542s | 10 | 1.09x |
| ▲ Vercel | Nitro | 2.716s (+2.6%) | 3.288s (+2.1%) | 0.453s (-32.4% 🟢) | 4.151s (-3.6%) | 1.435s | 10 | 1.10x |

🔍 Observability: Next.js (Turbopack) | Express | Nitro

Summary

Fastest Framework by World

Winner determined by most benchmark wins

| World | 🥇 Fastest Framework | Wins |
| --- | --- | --- |
| 💻 Local | Next.js (Turbopack) | 6/8 |
| 🐘 Postgres | Next.js (Turbopack) | 6/8 |
| ▲ Vercel | Nitro | 3/8 |

Fastest World by Framework

Winner determined by most benchmark wins

| Framework | 🥇 Fastest World | Wins |
| --- | --- | --- |
| Express | 💻 Local | 8/8 |
| Next.js (Turbopack) | 🌐 Starter | 6/8 |
| Nitro | 💻 Local | 8/8 |

Column Definitions
  • Workflow Time: Runtime reported by workflow (completedAt - createdAt) - primary metric
  • TTFB: Time to First Byte - time from workflow start until first stream byte received (stream benchmarks only)
  • Slurp: Time from first byte to complete stream consumption (stream benchmarks only)
  • Wall Time: Total testbench time (trigger workflow + poll for result)
  • Overhead: Testbench overhead (Wall Time - Workflow Time)
  • Samples: Number of benchmark iterations run
  • vs Fastest: How much slower compared to the fastest configuration for this benchmark

Worlds:

  • 💻 Local: In-memory filesystem world (local development)
  • 🐘 Postgres: PostgreSQL database world (local development)
  • ▲ Vercel: Vercel production/preview deployment
  • 🌐 Starter: Community world (local development)
  • 🌐 Turso: Community world (local development)
  • 🌐 MongoDB: Community world (local development)
  • 🌐 Redis: Community world (local development)
  • 🌐 Jazz: Community world (local development)

📋 View full workflow run


Copilot AI left a comment


Pull request overview

This PR fixes an issue where Vercel serverless functions would hang indefinitely when users write to streams and release locks without explicitly closing the stream. The solution implements a polling mechanism to detect lock releases and resolve step operations early.

Key Changes:

  • Introduced flushablePipe function that resolves when either the stream closes naturally OR when the user releases their lock and all pending writes are flushed
  • Implemented polling functions (pollReadableLock, pollWritableLock) that check every 100ms if stream locks have been released
  • Updated stream serialization in both `getExternalRevivers` and `getStepRevivers` to use the new flushable pipe mechanism instead of a standard `pipeTo()` (sketched below)
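
Concretely, the serializer-side usage might look like the sketch below, assembled only from the calls that appear in the test file reviewed later in this thread; it is not the literal `serialization.ts` code:

```ts
// Sketch: await a user-driven stream without requiring it to close.
// `writable` is the user-facing side of the TransformStream; `sink` is
// where the pump forwards chunks.
async function pipeUntilReleasedOrClosed(
  readable: ReadableStream<string>,
  writable: WritableStream<string>,
  sink: WritableStream<string>
): Promise<void> {
  const state = createFlushableState();
  // Previously this was roughly `await readable.pipeTo(sink)`, which
  // resolves only when the source closes.
  flushablePipe(readable, sink, state).catch(() => {
    // Errors surface through state.reject / state.promise instead.
  });
  pollWritableLock(writable, state);
  await state.promise; // settles on close OR lock release + flushed writes
}
```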

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 10 comments.

| File | Description |
| --- | --- |
| packages/core/src/flushable-stream.ts | New module implementing the flushable stream mechanism with state tracking, lock polling, and a custom pipe function |
| packages/core/src/flushable-stream.test.ts | Test suite covering lock release and normal stream closure scenarios |
| packages/core/src/serialization.ts | Updated ReadableStream and WritableStream revivers to use flushable pipe with lock polling |
| docs/content/docs/foundations/streaming.mdx | Enhanced documentation about stream lock contracts and best practices |
| .changeset/stream-lock-polling.md | Changeset describing the patch |


Comment on lines +112 to +125
```ts
const intervalId = setInterval(() => {
  // Stop polling if already resolved or stream ended
  if (state.doneResolved || state.streamEnded) {
    clearInterval(intervalId);
    return;
  }

  // Check if lock is released (not closed) and no pending ops
  if (isReadableUnlockedNotClosed(readable) && state.pendingOps === 0) {
    state.doneResolved = true;
    state.resolve();
    clearInterval(intervalId);
  }
}, LOCK_POLL_INTERVAL_MS);
```

Copilot AI Dec 23, 2025

Similar to pollWritableLock, this interval is never stored or returned, creating potential issues if this function is called multiple times on the same stream. Multiple simultaneous polling operations could race to resolve the same state. Consider returning the intervalId for explicit cleanup or adding protection against concurrent polling.
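
For illustration, one possible shape of that protection; `polling` is a hypothetical extra field on the state, not something this PR defines:

```ts
// Hypothetical variant: guard against concurrent polls on the same state
// and return the interval handle so callers can clear it explicitly.
function pollReadableLockGuarded(
  readable: ReadableStream,
  state: FlushableStreamState & { polling?: boolean }
): ReturnType<typeof setInterval> | undefined {
  if (state.polling) return undefined; // a poll is already running
  state.polling = true;
  const intervalId = setInterval(() => {
    if (state.doneResolved || state.streamEnded) {
      clearInterval(intervalId);
      state.polling = false;
      return;
    }
    if (isReadableUnlockedNotClosed(readable) && state.pendingOps === 0) {
      state.doneResolved = true;
      state.resolve();
      clearInterval(intervalId);
      state.polling = false;
    }
  }, LOCK_POLL_INTERVAL_MS);
  return intervalId;
}
```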

Comment on lines +156 to +176
```ts
const readResult = await reader.read();

if (readResult.done) {
  // Source stream completed - close sink and resolve
  state.streamEnded = true;
  await writer.close();
  // Resolve done promise if not already resolved
  if (!state.doneResolved) {
    state.doneResolved = true;
    state.resolve();
  }
  return;
}

// Count write as a pending op - this is what we need to flush
state.pendingOps++;
try {
  await writer.write(readResult.value);
} finally {
  state.pendingOps--;
}
```

Copilot AI Dec 23, 2025

There's a race condition here: after reading from the source, the stream could be ended (via error or another path) before the write begins, but state.streamEnded is only checked after the write completes. This means we might attempt to write to a stream that should be terminated. Consider checking state.streamEnded immediately after the read, before incrementing pendingOps and writing.
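
A sketch of the proposed reordering; the loop wrapper and parameter types are assumed, since the excerpt above does not show them:

```ts
async function pump(
  reader: ReadableStreamDefaultReader<string>,
  writer: WritableStreamDefaultWriter<string>,
  state: FlushableStreamState
): Promise<void> {
  for (;;) {
    const readResult = await reader.read();

    // Proposed: bail out right after the read if another path has already
    // ended the stream, before counting a pending op or touching the writer.
    if (state.streamEnded) return;

    if (readResult.done) {
      state.streamEnded = true;
      await writer.close();
      if (!state.doneResolved) {
        state.doneResolved = true;
        state.resolve();
      }
      return;
    }

    state.pendingOps++;
    try {
      await writer.write(readResult.value);
    } finally {
      state.pendingOps--;
    }
  }
}
```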

Comment on lines +16 to +17
```ts
 * The pump continues running even after `doneResolved=true` to handle
 * any future writes if the user acquires a new lock.
```

Copilot AI Dec 23, 2025

The comment states "The pump continues running even after doneResolved=true to handle any future writes if the user acquires a new lock." However, looking at the flushablePipe implementation, there's no mechanism to actually handle or support re-acquiring locks after the promise resolves. Once doneResolved is true, the polling stops, and if the user were to acquire a new lock and write more data, that data would continue to be pumped through, but there's no way to signal completion again. This comment is misleading - either the implementation should support this pattern, or the comment should be updated to clarify that re-acquiring locks after release is not a supported use case.

Suggested change

```diff
- * The pump continues running even after `doneResolved=true` to handle
- * any future writes if the user acquires a new lock.
+ * Once `doneResolved` is set to true, the `done` promise will not resolve
+ * again. Re-acquiring locks after release is not supported as a way to
+ * trigger additional completion signaling.
```

```diff
@@ -0,0 +1,194 @@
+import { type PromiseWithResolvers, withResolvers } from '@workflow/utils';
+
+/** Polling interval for lock release detection */
```

Copilot AI Dec 23, 2025

The polling interval is set to 100ms (LOCK_POLL_INTERVAL_MS), which means there could be up to a 100ms delay between when a user releases a lock and when the system detects it. In serverless environments where execution time costs money, this polling approach could be wasteful. Consider if there's a more event-driven approach, or at least document why polling was chosen over alternatives. Additionally, with many concurrent streams, having multiple setInterval timers could impact performance.

Suggested change

```diff
-/** Polling interval for lock release detection */
+/**
+ * Polling interval (in ms) for lock release detection.
+ *
+ * The Web Streams API does not expose an event for "lock released but stream
+ * still open"; we can only distinguish that state by periodically attempting
+ * to acquire a reader/writer. For that reason we use polling instead of a
+ * fully event-driven approach here.
+ *
+ * 100ms is a compromise between:
+ * - Latency: how quickly we notice that the user has released their lock, and
+ * - Cost/CPU usage: how often timers fire, especially with many concurrent
+ *   streams or in serverless environments where billed time matters.
+ *
+ * This value should only be changed with care, as decreasing it will
+ * increase polling frequency (and thus potential cost), while increasing it
+ * will add worst-case delay before the `done` promise resolves after a lock
+ * is released.
+ */
```

Comment on lines +43 to +54
```ts
function isWritableUnlockedNotClosed(writable: WritableStream): boolean {
  if (writable.locked) return false;

  try {
    // Try to acquire writer - if successful, stream is unlocked (not closed)
    const writer = writable.getWriter();
    writer.releaseLock();
    return true;
  } catch {
    // getWriter() throws if stream is closed/errored - let pump handle it
    return false;
  }
```

Copilot AI Dec 23, 2025

The function acquires a writer lock temporarily to check if the stream is unlocked vs closed, but if an error occurs during releaseLock() (line 49), the error is silently caught and false is returned. While this is probably the desired behavior, it means the lock might remain acquired if releaseLock throws. Consider being more specific about which errors to catch, or add a comment explaining why errors during releaseLock should be treated as "stream is closed".
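
Applied to the writable check, that might look like the following sketch, mirroring the restructuring suggested for the reader variant in the next comment:

```ts
function isWritableUnlockedNotClosed(writable: WritableStream): boolean {
  if (writable.locked) return false;

  let writer: WritableStreamDefaultWriter;
  try {
    // Try to acquire a writer - succeeds only if the stream is unlocked
    writer = writable.getWriter();
  } catch {
    // getWriter() throws if the stream is closed/errored
    return false;
  }
  try {
    writer.releaseLock();
  } catch {
    // Conservatively treat a throwing releaseLock() as closed/errored
    return false;
  }
  return true;
}
```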

Comment on lines +63 to +71
```ts
  try {
    // Try to acquire reader - if successful, stream is unlocked (not closed)
    const reader = readable.getReader();
    reader.releaseLock();
    return true;
  } catch {
    // getReader() throws if stream is closed/errored - let pump handle it
    return false;
  }
```

Copilot AI Dec 23, 2025

Similar to isWritableUnlockedNotClosed, this function temporarily acquires a reader lock to check stream state. If an error occurs during releaseLock() (line 66), it's silently caught. While probably intentional, this could leave the lock acquired if releaseLock throws. Consider being more explicit about error handling or documenting why all errors should be treated as "stream is closed".

Suggested change

```diff
-  try {
-    // Try to acquire reader - if successful, stream is unlocked (not closed)
-    const reader = readable.getReader();
-    reader.releaseLock();
-    return true;
-  } catch {
-    // getReader() throws if stream is closed/errored - let pump handle it
-    return false;
-  }
+  let reader: ReadableStreamDefaultReader | undefined;
+  try {
+    // Try to acquire reader - if successful, stream is unlocked (not closed)
+    reader = readable.getReader();
+  } catch {
+    // getReader() throws if stream is closed/errored - let pump handle it
+    return false;
+  }
+  try {
+    reader.releaseLock();
+  } catch {
+    // If releaseLock() throws for any reason, conservatively treat the
+    // stream as closed/errored so callers don't assume it's safe to use.
+    // The pump will observe the failure via the stream's end state.
+    return false;
+  }
+  return true;
```

Comment on lines +85 to +98
```ts
const intervalId = setInterval(() => {
  // Stop polling if already resolved or stream ended
  if (state.doneResolved || state.streamEnded) {
    clearInterval(intervalId);
    return;
  }

  // Check if lock is released (not closed) and no pending ops
  if (isWritableUnlockedNotClosed(writable) && state.pendingOps === 0) {
    state.doneResolved = true;
    state.resolve();
    clearInterval(intervalId);
  }
}, LOCK_POLL_INTERVAL_MS);
```

Copilot AI Dec 23, 2025

The interval created here is never stored or returned, which means there's no way to explicitly clean it up. While the interval does have cleanup logic inside the callback, there's a potential issue if pollWritableLock is called multiple times on the same stream - this would create multiple intervals that could race to resolve the same state. Consider returning the intervalId so callers can clean it up if needed, or add protection against multiple simultaneous polling operations on the same state.

Comment on lines +9 to +110
```ts
describe('flushable stream behavior', () => {
  it('promise should resolve when writable stream lock is released (polling)', async () => {
    // Test the pattern: user writes, releases lock, polling detects it, promise resolves
    const chunks: string[] = [];
    let streamClosed = false;

    // Create a simple mock for the sink
    const mockSink = new WritableStream<string>({
      write(chunk) {
        chunks.push(chunk);
      },
      close() {
        streamClosed = true;
      },
    });

    // Create a TransformStream like we do in getStepRevivers
    const { readable, writable } = new TransformStream<string, string>();
    const state = createFlushableState();

    // Start piping in background
    flushablePipe(readable, mockSink, state).catch(() => {
      // Errors handled via state.reject
    });

    // Start polling for lock release
    pollWritableLock(writable, state);

    // Simulate user interaction - write and release lock
    const userWriter = writable.getWriter();
    await userWriter.write('chunk1');
    await userWriter.write('chunk2');

    // Release lock without closing stream
    userWriter.releaseLock();

    // Wait for pipe to process + polling interval
    await new Promise((r) => setTimeout(r, LOCK_POLL_INTERVAL_MS + 50));

    // The promise should resolve
    await expect(
      Promise.race([
        state.promise,
        new Promise((_, r) => setTimeout(() => r(new Error('timeout')), 400)),
      ])
    ).resolves.toBeUndefined();

    // Chunks should have been written
    expect(chunks).toContain('chunk1');
    expect(chunks).toContain('chunk2');

    // Stream should NOT be closed (user only released lock)
    expect(streamClosed).toBe(false);
  });

  it('promise should resolve when writable stream closes naturally', async () => {
    const chunks: string[] = [];
    let streamClosed = false;

    const mockSink = new WritableStream<string>({
      write(chunk) {
        chunks.push(chunk);
      },
      close() {
        streamClosed = true;
      },
    });

    const { readable, writable } = new TransformStream<string, string>();
    const state = createFlushableState();

    // Start piping in background
    flushablePipe(readable, mockSink, state).catch(() => {
      // Errors handled via state.reject
    });

    // Start polling (won't trigger since stream will close first)
    pollWritableLock(writable, state);

    // User writes and then closes the stream
    const userWriter = writable.getWriter();
    await userWriter.write('data');
    await userWriter.close();

    // Wait a tick for the pipe to process
    await new Promise((r) => setTimeout(r, 50));

    // The promise should resolve
    await expect(
      Promise.race([
        state.promise,
        new Promise((_, r) => setTimeout(() => r(new Error('timeout')), 200)),
      ])
    ).resolves.toBeUndefined();

    // Chunks should have been written
    expect(chunks).toContain('data');

    // Stream should be closed (user closed it)
    expect(streamClosed).toBe(true);
  });
});
```

Copilot AI Dec 23, 2025

Test coverage is missing for several critical scenarios:

  1. Error handling during pipe operations (e.g., write failures)
  2. Testing with pollReadableLock (only pollWritableLock is tested)
  3. Concurrent operations (multiple reads/writes in flight)
  4. Edge cases like releasing and re-acquiring locks
  5. Scenarios where the stream ends while pending operations are in flight

These scenarios are important to verify the robustness of the flushable stream mechanism, especially given the complexity around lock polling and state management.
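
For illustration, the first missing case might look like the sketch below, reusing only the helpers the existing tests already use and assuming pump errors surface through `state.reject`:

```ts
it('rejects the done promise when the sink write fails', async () => {
  const failingSink = new WritableStream<string>({
    write() {
      throw new Error('write failed');
    },
  });

  const { readable, writable } = new TransformStream<string, string>();
  const state = createFlushableState();

  flushablePipe(readable, failingSink, state).catch(() => {
    // The rejection is also observed via state.promise below.
  });
  pollWritableLock(writable, state);

  const userWriter = writable.getWriter();
  await userWriter.write('boom').catch(() => {
    // The transform's writable may error once the pump fails.
  });

  await expect(state.promise).rejects.toThrow('write failed');
});
```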

```mdx
When writing to a stream in a step function, there is an important contract to understand:

<Callout type="warn">
**Once a lock is released, no further writes to that stream from that step are allowed.** The framework uses lock release as the signal that the step is done interacting with the stream. Make sure all writes are complete before releasing the lock.
```

Copilot AI Dec 23, 2025

The documentation states "Once a lock is released, no further writes to that stream from that step are allowed" and "The framework uses lock release as the signal that the step is done interacting with the stream." However, this behavior contradicts the comment in flushable-stream.ts line 16-17 which states "The pump continues running even after doneResolved=true to handle any future writes if the user acquires a new lock." This is confusing - can users re-acquire locks and write more data after releasing, or not? The documentation and code comments need to be aligned on this contract.

Suggested change

```diff
-**Once a lock is released, no further writes to that stream from that step are allowed.** The framework uses lock release as the signal that the step is done interacting with the stream. Make sure all writes are complete before releasing the lock.
+**Once a lock is released, no further writes to that stream from that step are allowed.** The framework uses lock release as the signal that the step is done interacting with the stream. Make sure all writes are complete before releasing the lock, and do not rely on re-acquiring a lock on the same stream within the same step after it has been released, even if internal implementation details might technically allow it.
```

```ts
if (!state.doneResolved) {
  state.doneResolved = true;
  state.reject(err);
}
```

Copilot AI Dec 23, 2025

When an error occurs during piping, the error is caught, state.reject(err) is called, and then the error is re-thrown. However, the callers in serialization.ts catch this error with .catch(() => {}) and ignore it, relying on state.reject to propagate the error through the promise. This means the thrown error from line 189 is always silently caught and discarded. Consider either not re-throwing the error (since it's handled via state.reject), or documenting why the error is both rejected through state and re-thrown.

Suggested change

```ts
  }
}
// Propagate error through flushablePipe's own promise as well.
// Callers that rely on the FlushableStreamState should use `state.promise`,
// while other callers may depend on this rejection. Some known callers
// explicitly ignore this rejection (`.catch(() => {})`) and rely solely
// on `state.reject(err)` for error handling.
```

