add tests for stream standard #5950
base: main
Conversation
Codecov Report
✅ All modified and coverable lines are covered by tests.

Additional details and impacted files

@@           Coverage Diff           @@
##             main    #5950   +/-  ##
=======================================
  Coverage   69.67%   69.67%
=======================================
  Files         397      397
  Lines      105999   105999
  Branches    17965    17965
=======================================
  Hits        73853    73853
- Misses      21351    21354    +3
+ Partials    10795    10792    -3

☔ View full report in Codecov by Sentry.
CodSpeed Performance Report
Merging this PR will degrade performance by 11.48%.
// First read gets the data
const result1 = await reader.read();
strictEqual(result1.value, 'data');
strictEqual(result1.done, false);
It should be documented that this is actually incorrect behavior that we should likely fix. The expectation in this case (as evidenced in other runtimes) is that the first read should reject with an error. The read should NOT actually be fulfilled.
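For reference, a minimal sketch of the spec-expected shape (illustrative only; the error value and underlying source here are assumptions, not the test's actual setup):

```js
import { rejects } from 'node:assert';

// Hypothetical source that errors before delivering any chunk.
const err = new Error('boom');
const rs = new ReadableStream({
  start(c) {
    c.error(err);
  },
});
const reader = rs.getReader();

// Per the standard, the first read should reject with the stored error
// rather than fulfill with a value.
await rejects(reader.read(), err);
```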
// Cancel should handle pending state
await rs.cancel('cancel reason');

// The read should complete (with done=true due to close)
Per the spec, the read should be canceled when the lock is released. This is another deviation from the spec we should fix.
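A quick sketch of the spec-compliant expectation (illustrative only, not the test's actual setup):

```js
import { rejects } from 'node:assert';

// A source that never enqueues, so the read stays pending.
const rs = new ReadableStream();
const reader = rs.getReader();
const pending = reader.read();

// Per the standard, releasing the lock rejects any pending reads with a TypeError.
reader.releaseLock();
await rejects(pending, TypeError);
```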
strictEqual(result.done, true);
ok(result.value instanceof Uint8Array);
strictEqual(result.value.byteLength, 0);
This should likely also verify that it's the same backing store as the input.
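Something along these lines could cover that (illustrative only; the 16-byte view and the close-before-read setup are assumptions, not the test's actual code):

```js
import { strictEqual } from 'node:assert';

// A byte stream that is already closed when the BYOB read is issued.
const rs = new ReadableStream({
  type: 'bytes',
  start(c) {
    c.close();
  },
});
const reader = rs.getReader({ mode: 'byob' });
const view = new Uint8Array(16);
const result = await reader.read(view);

strictEqual(result.done, true);
strictEqual(result.value.byteLength, 0);
// The input buffer is transferred, so the original view detaches...
strictEqual(view.buffer.byteLength, 0);
// ...and the returned empty view should sit on a buffer with the input's
// capacity, i.e. the same backing store rather than a fresh allocation.
strictEqual(result.value.buffer.byteLength, 16);
```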
write() {
  return new Promise((resolve) => {
    resolveWrite = resolve;
  });
},
Suggested change:
- write() {
-   return new Promise((resolve) => {
-     resolveWrite = resolve;
-   });
- },
+ write() {
+   const { promise, resolve } = Promise.withResolvers();
+   resolveWrite = resolve;
+   return promise;
+ },
resolveWrite();
await writePromise;
resolveWrite();
await writePromise2;
Should check that desiredSize returns back to 1
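For example (illustrative only; assumes the default queuing strategy with highWaterMark 1, roughly mirroring the snippet above):

```js
import { strictEqual } from 'node:assert';

let resolveWrite;
const ws = new WritableStream({
  write() {
    const { promise, resolve } = Promise.withResolvers();
    resolveWrite = resolve;
    return promise;
  },
});
const writer = ws.getWriter();
strictEqual(writer.desiredSize, 1);

const writePromise = writer.write('a');
strictEqual(writer.desiredSize, 0); // chunk queued, not yet drained

// The sink's write() is invoked asynchronously; yield once so resolveWrite is set.
await Promise.resolve();
resolveWrite();
await writePromise;
strictEqual(writer.desiredSize, 1); // back to 1 once the queue drains
```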
  controller = c;
},
pull(c) {
  // Respond with less data than requested to keep BYOB request alive
What does the "keep BYOB request alive" part here mean? Calling respond(1) fulfills the request. There's nothing "kept alive"
};

// Test consumerCount edge case (lines 954-967)
// This is triggered when checking consumer count during certain operations
Can you describe the "certain operations" in more detail here? It's not clear exactly which edge case this is testing.
});

// Cancel should resolve immediately for closed stream
await rs.cancel();
It's not clear what exactly this is testing. There are no assertions and there's no indication of what the failure case would be.
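One way to make the expectation explicit (illustrative only):

```js
import { strictEqual } from 'node:assert';

const rs = new ReadableStream({
  start(c) {
    c.close();
  },
});

// cancel() on an already-closed stream should fulfill with undefined.
strictEqual(await rs.cancel(), undefined);
```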
c.enqueue(new TextEncoder().encode('hello'));
c.enqueue(new Uint8Array(0)); // Empty chunk - should be skipped
c.enqueue(new TextEncoder().encode(' world'));
c.close();
There's really no way of verifying in this test that the empty chunk was actually skipped.
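One option would be to drain the stream and assert on what was actually delivered, e.g. (illustrative only; assumes `rs` is the stream built in the snippet above):

```js
import { notStrictEqual, strictEqual } from 'node:assert';

const reader = rs.getReader();
const lengths = [];
for (;;) {
  const { value, done } = await reader.read();
  if (done) break;
  notStrictEqual(value.byteLength, 0); // no empty chunk is ever delivered
  lengths.push(value.byteLength);
}
// All of 'hello world' (11 bytes) still arrives.
strictEqual(lengths.reduce((a, b) => a + b, 0), 11);
```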
async test() {
  // Create a stream that returns a non-byte value
  // This is tricky because workerd streams are typically byte-oriented
  // We need to use internal APIs or specific conditions
This comment does not make sense. You're returning byte data. You're not creating "a stream that returns a non-byte value"
const reader = rs.getReader({ mode: 'byob' });

const r1 = await reader.read(new Uint8Array(5));
strictEqual(r1.value[0], 1);
These should also be checking the lengths of the returned values (e.g. r1.value.byteLength), not just the first byte.
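For example (the expected length here is an assumption about how many bytes the test's source fills for this read):

```js
strictEqual(r1.value.byteLength, 5);
```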
let pullCount = 0;
const rs = new ReadableStream({
  type: 'bytes',
  // No autoAllocateChunkSize - pull must use enqueue
Our implementation, by default, always sets autoAllocateChunkSize. There's a new compat flag to enable more standard behavior. Specifically, without that compat flag this comment is incorrect.
Adds tests for certain edge cases in standard.c++ that are not currently covered.