Small update to set_max_stream_data #3382

Open

jesup wants to merge 18 commits into mozilla:main from jesup:push-otuuxuslzpln
Conversation

@jesup (Member) commented Feb 3, 2026

patch for Issue 3381

Copilot AI review requested due to automatic review settings February 3, 2026 19:13
@jesup jesup requested a review from mxinden as a code owner February 3, 2026 19:13
Copilot AI (Contributor) left a comment

Pull request overview

This PR implements support for WebTransport send groups and related features to improve stream prioritization and session management. The changes enable inter-group bandwidth fairness while maintaining sendorder-based prioritization within groups.

Changes:

  • Added send group infrastructure for organizing WebTransport streams with fair bandwidth allocation across groups
  • Implemented TLS keying material export functionality for application-level security protocols
  • Enhanced session management with draining state notification, protocol negotiation, and statistics tracking
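
The fairness model described in the overview (round-robin bandwidth sharing across groups, sendorder-based priority within a group) can be sketched as follows. This is an illustrative toy, not neqo's actual types or API: `SendGroup`, `GroupScheduler`, and `next_stream` are invented names.

```rust
use std::collections::{BTreeMap, VecDeque};

// A send group holds streams keyed by sendorder; BTreeMap keeps them
// sorted so the highest sendorder is cheap to find.
struct SendGroup {
    // sendorder -> stream id
    streams: BTreeMap<i64, u64>,
}

// Groups take turns in round-robin order; within the group that holds
// the turn, the stream with the highest sendorder is picked.
struct GroupScheduler {
    groups: VecDeque<SendGroup>,
}

impl GroupScheduler {
    fn new() -> Self {
        Self { groups: VecDeque::new() }
    }

    fn add_group(&mut self, streams: &[(i64, u64)]) {
        self.groups.push_back(SendGroup {
            streams: streams.iter().copied().collect(),
        });
    }

    // Pick the next stream to send on: take the group at the front of
    // the queue, choose its highest-sendorder stream, then rotate the
    // group to the back so the other groups get a fair share.
    fn next_stream(&mut self) -> Option<u64> {
        let group = self.groups.pop_front()?;
        let stream = group.streams.iter().next_back().map(|(_, &id)| id);
        self.groups.push_back(group);
        stream
    }
}
```

With two groups, successive calls alternate between them, while each group always offers its highest-sendorder stream first.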

Reviewed changes

Copilot reviewed 26 out of 26 changed files in this pull request and generated 5 comments.

File Description
neqo-transport/src/streams.rs Core send group scheduling with round-robin fairness and new remote max streams setters
neqo-transport/src/send_stream.rs Send group storage, accessors, and integration with existing stream ordering
neqo-transport/src/connection/tests/stream.rs Test coverage for send group scheduling behavior
neqo-transport/src/connection/mod.rs Connection-level send group API, keying material export, and PMTUD improvements
neqo-http3/src/server_events.rs Return type fixes for datagram sending methods
neqo-http3/src/send_message.rs Added as_any_mut for downcasting support
neqo-http3/src/lib.rs Stream trait extensions for send groups and session protocol
neqo-http3/src/features/extended_connect/webtransport_streams.rs Send group management for WebTransport streams
neqo-http3/src/features/extended_connect/webtransport_session.rs Session-level send groups, stats tracking, and datagram queue
neqo-http3/src/features/extended_connect/tests/webtransport/streams.rs Comprehensive send group API tests
neqo-http3/src/features/extended_connect/tests/webtransport/sessions.rs Session protocol, stats, and send group validation tests
neqo-http3/src/features/extended_connect/tests/webtransport/mod.rs Test helper fix for datagram return value
neqo-http3/src/features/extended_connect/stats.rs WebTransport session statistics structure
neqo-http3/src/features/extended_connect/session.rs Protocol extraction, datagram queue processing, and stats
neqo-http3/src/features/extended_connect/send_group.rs Send group ID and metadata structures
neqo-http3/src/features/extended_connect/mod.rs Module exports for send groups and datagram outcomes
neqo-http3/src/features/extended_connect/datagram_queue.rs Datagram queuing with age/watermark management
neqo-http3/src/connection_server.rs Server-side datagram queue processing
neqo-http3/src/connection_client.rs Client API for send groups, stats, keying material, and draining
neqo-http3/src/connection.rs HTTP/3 connection send group plumbing and datagram queue integration
neqo-http3/src/client_events.rs New events for draining, stream quota, and datagram outcomes
neqo-crypto/tests/agent.rs Tests for keying material export functionality
neqo-crypto/src/err.rs InvalidState error variant
neqo-crypto/src/agent.rs TLS keying material export implementation
neqo-crypto/bindings/bindings.toml NSS binding for SSL_ExportKeyingMaterial
WEBTRANSPORT_IMPLEMENTATION_PLAN.md Implementation roadmap document


pub fn set_max_stream_data(&mut self, limit: u64) {
qdebug!("setting max_stream_data to {limit}");
let previous_limit = self.avail();
Copilot AI commented Feb 3, 2026

The previous_limit is now calculated using self.avail() before checking the stream state. If the stream is not in Ready or Send state, self.avail() will return 0, which may not represent the actual previous limit. This could cause incorrect writable event emission. The original code correctly got the limit from the flow controller when it existed.
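
A minimal sketch of the failure mode this comment describes, using invented types (`SenderFlowControl` here is illustrative, not neqo's actual struct): reading the previous limit from the flow controller itself keeps the writable-event decision correct even when `avail()` would report 0 for a stream outside the Ready/Send states.

```rust
// Illustrative sketch only: a sender-side flow controller that tracks
// a peer-granted limit and how much of it has been consumed.
struct SenderFlowControl {
    limit: u64,
    used: u64,
}

impl SenderFlowControl {
    // How much more data can be sent under the current limit.
    fn available(&self) -> u64 {
        self.limit - self.used
    }

    // Apply MAX_STREAM_DATA. Returns true only if the limit actually
    // increased, so the caller can emit a writable event on a real
    // increase. The previous limit is read from the controller itself,
    // before mutation, rather than from an avail()-style accessor that
    // can report 0 for streams in the wrong state.
    fn update(&mut self, new_limit: u64) -> bool {
        let previous = self.limit; // read before mutating
        if new_limit > previous {
            self.limit = new_limit;
            true
        } else {
            false
        }
    }
}
```

The point is the ordering: capture `previous` from the controller before the update, so a blocked stream (where availability reads as 0) still compares against its true prior limit.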

Comment on lines +902 to +903
Ok(())
// Http3Connection::stream_set_sendorder(&mut self.conn, stream_id, sendorder)
Copilot AI commented Feb 3, 2026

The sendorder setting logic is commented out and replaced with an unconditional Ok(()), which breaks the intended functionality. This appears to be debugging code that should be removed or the original implementation should be restored.

Suggested change
Ok(())
// Http3Connection::stream_set_sendorder(&mut self.conn, stream_id, sendorder)
Http3Connection::stream_set_sendorder(
&mut self.base_handler,
&mut self.conn,
stream_id,
sendorder,
)

Comment on lines +290 to +294
let negotiated_protocol = headers
.iter()
.find(|h| h.name().eq_ignore_ascii_case("wt-protocol"))
.and_then(|h| from_utf8(h.value()).ok())
.and_then(|s| {
Copilot AI commented Feb 3, 2026

The header name 'wt-protocol' in the implementation differs from 'sec-webtransport-http3-draft' mentioned in the implementation plan (line 356 of WEBTRANSPORT_IMPLEMENTATION_PLAN.md). This discrepancy should be clarified in comments or documentation.

};

use neqo_common::{qdebug, qtrace, Bytes};

Copilot AI commented Feb 3, 2026

The hard limit of 1000 datagrams is not documented. Consider adding a comment explaining why this specific value was chosen and whether it should be configurable.

Suggested change
// Default upper bound on the number of datagrams kept in the queue.
//
// This value is chosen as a conservative limit to prevent unbounded memory
// growth in typical WebTransport usage while still allowing a reasonably
// deep buffer for bursty traffic. It is only a default: the actual hard
// limit used by a `WebTransportDatagramQueue` instance can be overridden
// via its configuration/constructor when needed.

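
One way the suggested configurable cap could look, as a hedged sketch (the types and names here are invented, not the PR's `datagram_queue.rs` implementation): the limit becomes a constructor parameter, and the queue reports whether enqueueing had to evict the oldest datagram.

```rust
use std::collections::VecDeque;

// Illustrative bounded datagram queue. The cap is a constructor
// parameter rather than a hard-coded magic number, and pushing onto a
// full queue drops the oldest entry (age-based eviction).
struct DatagramQueue {
    max_len: usize,
    queue: VecDeque<Vec<u8>>,
}

#[derive(Debug, PartialEq)]
enum Outcome {
    Queued,
    DroppedOldest,
}

impl DatagramQueue {
    fn new(max_len: usize) -> Self {
        Self { max_len, queue: VecDeque::new() }
    }

    fn push(&mut self, dgram: Vec<u8>) -> Outcome {
        let outcome = if self.queue.len() >= self.max_len {
            self.queue.pop_front(); // evict the oldest entry
            Outcome::DroppedOldest
        } else {
            Outcome::Queued
        };
        self.queue.push_back(dgram);
        outcome
    }

    fn pop(&mut self) -> Option<Vec<u8>> {
        self.queue.pop_front()
    }
}
```

Returning an explicit outcome lets the caller surface drops to the application (for example via a datagram-outcome event) instead of losing them silently.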
Comment on lines +1695 to +1699
#[allow(
clippy::allow_attributes,
clippy::missing_errors_doc,
reason = "OK here."
)]
Copilot AI commented Feb 3, 2026

The reason 'OK here.' is not descriptive. This should explain why missing errors documentation is acceptable for this particular function.

@codspeed-hq (bot) commented Feb 3, 2026

CodSpeed Performance Report

Merging this PR will degrade performance by 4.36%

Comparing jesup:push-otuuxuslzpln (cf6b347) with main (c0027fa)

Summary

❌ 2 regressed benchmarks
✅ 36 untouched benchmarks
⏩ 8 skipped benchmarks [1]

⚠️ Please fix the performance issues or acknowledge them on CodSpeed.

Performance Changes

Mode Benchmark BASE HEAD Efficiency
WallTime walltime/1000-streams/each-1000-bytes 140.3 ms 146.7 ms -4.36%
Simulation coalesce_acked_from_zero 1000+1 entries 2.5 µs 2.6 µs -3.34%

Footnotes

  1. 8 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, archive them on CodSpeed to remove them from the performance reports.

@github-actions (bot, Contributor) commented Feb 3, 2026

Failed Interop Tests

QUIC Interop Runner, client vs. server, differences relative to main at c0027fa.

neqo-pr as client / neqo-pr as server
neqo-pr vs. aioquic: A 🚀C1
neqo-pr vs. go-x-net: A BP BA
neqo-pr vs. haproxy: A 🚀L1 BP BA
neqo-pr vs. kwik: 🚀C1 ⚠️S BP BA
neqo-pr vs. linuxquic: A L1 ⚠️C1
neqo-pr vs. lsquic: L1 C1
neqo-pr vs. msquic: A L1 C1
neqo-pr vs. mvfst: A L1 🚀C1 BA
neqo-pr vs. neqo: Z
neqo-pr vs. nginx: A L1 ⚠️C1 BP BA
neqo-pr vs. ngtcp2: A CM
neqo-pr vs. picoquic: A 🚀L1 C1
neqo-pr vs. quic-go: A 🚀C1 ⚠️L1
neqo-pr vs. quiche: A ⚠️L1 C1 BP BA
neqo-pr vs. quinn: A
neqo-pr vs. s2n-quic: A 🚀C1 ⚠️L1 BP BA CM
neqo-pr vs. tquic: S A BP BA
neqo-pr vs. xquic: A L1 🚀C1 BA
aioquic vs. neqo-pr: Z 🚀L1 ⚠️C1 CM
go-x-net vs. neqo-pr: CM
kwik vs. neqo-pr: Z BP BA CM
linuxquic vs. neqo-pr: ⚠️BP BA
lsquic vs. neqo-pr: Z
msquic vs. neqo-pr: Z 🚀BA CM
mvfst vs. neqo-pr: Z A L1 C1 CM
neqo vs. neqo-pr: Z
openssl vs. neqo-pr: LR M A CM
picoquic vs. neqo-pr: Z
quic-go vs. neqo-pr: CM
quiche vs. neqo-pr: Z CM
quinn vs. neqo-pr: Z 🚀C1 V2 CM
s2n-quic vs. neqo-pr: 🚀B L1 BP CM
tquic vs. neqo-pr: Z CM
xquic vs. neqo-pr: M CM
All results

Succeeded Interop Tests

QUIC Interop Runner, client vs. server

neqo-pr as client

neqo-pr as server

Unsupported Interop Tests

QUIC Interop Runner, client vs. server

neqo-pr as client

neqo-pr as server

@github-actions (bot, Contributor) commented Feb 3, 2026

Client/server transfer results

Performance differences relative to c0027fa.

Transfer of 33554432 bytes over loopback, min. 100 runs. All unit-less numbers are in milliseconds.

Client vs. server (params) Mean ± σ Min Max MiB/s ± σ Δ main (ms) Δ main (%)
neqo-neqo-newreno 98.8 ± 3.8 91.2 104.9 323.7 ± 8.4 💔 1.6 1.6%
neqo-neqo-newreno-nopacing 97.5 ± 4.2 87.8 105.9 328.2 ± 7.6 💔 1.8 1.9%
neqo-s2n-cubic 221.6 ± 4.5 212.7 232.2 144.4 ± 7.1 💔 1.4 0.6%
quiche-neqo-cubic 150.5 ± 7.6 137.8 185.8 212.6 ± 4.2 💚 -2.3 -1.5%
s2n-neqo-cubic 173.8 ± 6.0 162.0 211.4 184.1 ± 5.3 💚 -1.6 -0.9%

Table above only shows statistically significant changes. See all results below.

All results

Transfer of 33554432 bytes over loopback, min. 100 runs. All unit-less numbers are in milliseconds.

Client vs. server (params) Mean ± σ Min Max MiB/s ± σ Δ main (ms) Δ main (%)
google-google-nopacing 464.9 ± 3.6 459.5 477.5 68.8 ± 8.9
google-neqo-cubic 274.3 ± 3.9 268.0 284.6 116.7 ± 8.2 -0.4 -0.1%
msquic-msquic-nopacing 176.3 ± 54.7 136.1 425.7 181.5 ± 0.6
msquic-neqo-cubic 199.0 ± 52.9 153.1 601.7 160.8 ± 0.6 11.1 5.9%
neqo-google-cubic 773.0 ± 8.7 701.6 786.7 41.4 ± 3.7 0.4 0.1%
neqo-msquic-cubic 160.0 ± 5.1 153.4 189.1 200.0 ± 6.3 0.2 0.1%
neqo-neqo-cubic 96.6 ± 4.3 88.1 106.3 331.4 ± 7.4 0.8 0.8%
neqo-neqo-cubic-nopacing 96.3 ± 4.4 87.5 106.7 332.5 ± 7.3 0.9 0.9%
neqo-neqo-newreno 98.8 ± 3.8 91.2 104.9 323.7 ± 8.4 💔 1.6 1.6%
neqo-neqo-newreno-nopacing 97.5 ± 4.2 87.8 105.9 328.2 ± 7.6 💔 1.8 1.9%
neqo-quiche-cubic 192.8 ± 4.4 186.6 206.2 166.0 ± 7.3 0.7 0.4%
neqo-s2n-cubic 221.6 ± 4.5 212.7 232.2 144.4 ± 7.1 💔 1.4 0.6%
quiche-neqo-cubic 150.5 ± 7.6 137.8 185.8 212.6 ± 4.2 💚 -2.3 -1.5%
quiche-quiche-nopacing 141.6 ± 4.5 135.5 155.8 225.9 ± 7.1
s2n-neqo-cubic 173.8 ± 6.0 162.0 211.4 184.1 ± 5.3 💚 -1.6 -0.9%
s2n-s2n-nopacing 251.6 ± 27.5 232.7 349.1 127.2 ± 1.2

Download data for profiler.firefox.com or download performance comparison data.

@github-actions (bot, Contributor) commented Feb 3, 2026

Benchmark results

Significant performance differences relative to c0027fa.

streams/walltime/1000-streams/each-1-bytes: 💔 Performance has regressed by +1.8355%.
       time:   [12.585 ms 12.605 ms 12.625 ms]
       change: [+1.6404% +1.8355% +2.0333%] (p = 0.00 < 0.05)
       Performance has regressed.
streams/walltime/1000-streams/each-1000-bytes: 💔 Performance has regressed by +7.2506%.
       time:   [48.466 ms 48.633 ms 48.898 ms]
       change: [+6.7283% +7.2506% +7.9155%] (p = 0.00 < 0.05)
       Performance has regressed.
Found 1 outlier among 100 measurements (1.00%)
1 (1.00%) high severe
All results
transfer/1-conn/1-100mb-resp (aka. Download)/mtu-1504: Change within noise threshold.
       time:   [201.25 ms 201.81 ms 202.45 ms]
       thrpt:  [493.95 MiB/s 495.52 MiB/s 496.89 MiB/s]
change:
       time:   [+0.3135% +0.6753% +1.0399%] (p = 0.00 < 0.05)
       thrpt:  [-1.0292% -0.6708% -0.3125%]
       Change within noise threshold.
Found 4 outliers among 100 measurements (4.00%)
1 (1.00%) high mild
3 (3.00%) high severe
transfer/1-conn/10_000-parallel-1b-resp (aka. RPS)/mtu-1504: No change in performance detected.
       time:   [281.06 ms 282.89 ms 284.76 ms]
       thrpt:  [35.117 Kelem/s 35.350 Kelem/s 35.580 Kelem/s]
change:
       time:   [-1.6036% -0.5538% +0.4339%] (p = 0.30 > 0.05)
       thrpt:  [-0.4321% +0.5569% +1.6298%]
       No change in performance detected.
transfer/1-conn/1-1b-resp (aka. HPS)/mtu-1504: No change in performance detected.
       time:   [38.561 ms 38.738 ms 38.936 ms]
       thrpt:  [25.683   B/s 25.814   B/s 25.933   B/s]
change:
       time:   [-0.7472% -0.0494% +0.6893%] (p = 0.90 > 0.05)
       thrpt:  [-0.6846% +0.0495% +0.7528%]
       No change in performance detected.
Found 9 outliers among 100 measurements (9.00%)
2 (2.00%) high mild
7 (7.00%) high severe
transfer/1-conn/1-100mb-req (aka. Upload)/mtu-1504: Change within noise threshold.
       time:   [207.76 ms 208.20 ms 208.75 ms]
       thrpt:  [479.03 MiB/s 480.31 MiB/s 481.32 MiB/s]
change:
       time:   [+0.4844% +0.8128% +1.1165%] (p = 0.00 < 0.05)
       thrpt:  [-1.1042% -0.8062% -0.4821%]
       Change within noise threshold.
Found 5 outliers among 100 measurements (5.00%)
4 (4.00%) high mild
1 (1.00%) high severe
decode 4096 bytes, mask ff: Change within noise threshold.
       time:   [5.3147 µs 5.3507 µs 5.4049 µs]
       change: [-1.0086% -0.5214% -0.0036%] (p = 0.05 < 0.05)
       Change within noise threshold.
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) high mild
1 (1.00%) high severe
decode 1048576 bytes, mask ff: No change in performance detected.
       time:   [1.3656 ms 1.3699 ms 1.3742 ms]
       change: [-0.1787% +0.3289% +0.8102%] (p = 0.20 > 0.05)
       No change in performance detected.
decode 4096 bytes, mask 7f: No change in performance detected.
       time:   [7.2290 µs 7.2397 µs 7.2505 µs]
       change: [-0.2968% +0.2100% +0.8284%] (p = 0.54 > 0.05)
       No change in performance detected.
Found 4 outliers among 100 measurements (4.00%)
2 (2.00%) high mild
2 (2.00%) high severe
decode 1048576 bytes, mask 7f: No change in performance detected.
       time:   [1.8568 ms 1.8594 ms 1.8622 ms]
       change: [-0.2217% -0.0255% +0.1793%] (p = 0.81 > 0.05)
       No change in performance detected.
decode 4096 bytes, mask 3f: No change in performance detected.
       time:   [6.9025 µs 6.9124 µs 6.9225 µs]
       change: [-0.7198% -0.1616% +0.2994%] (p = 0.57 > 0.05)
       No change in performance detected.
Found 4 outliers among 100 measurements (4.00%)
2 (2.00%) high mild
2 (2.00%) high severe
decode 1048576 bytes, mask 3f: No change in performance detected.
       time:   [1.7649 ms 1.7675 ms 1.7703 ms]
       change: [-0.1483% +0.0541% +0.2469%] (p = 0.60 > 0.05)
       No change in performance detected.
streams/simulated/1-streams/each-1000-bytes: No change in performance detected.
       time:   [129.68 ms 129.68 ms 129.69 ms]
       thrpt:  [7.5302 KiB/s 7.5304 KiB/s 7.5306 KiB/s]
change:
       time:   [-0.0034% +0.0002% +0.0039%] (p = 0.93 > 0.05)
       thrpt:  [-0.0039% -0.0002% +0.0034%]
       No change in performance detected.
Found 3 outliers among 100 measurements (3.00%)
1 (1.00%) low mild
2 (2.00%) high mild
streams/simulated/1000-streams/each-1-bytes: Change within noise threshold.
       time:   [2.5359 s 2.5362 s 2.5365 s]
       thrpt:  [394.25   B/s 394.30   B/s 394.34   B/s]
change:
       time:   [-0.0378% -0.0208% -0.0044%] (p = 0.02 < 0.05)
       thrpt:  [+0.0044% +0.0208% +0.0378%]
       Change within noise threshold.
Found 4 outliers among 100 measurements (4.00%)
4 (4.00%) high mild
streams/simulated/1000-streams/each-1000-bytes: No change in performance detected.
       time:   [6.5837 s 6.5978 s 6.6146 s]
       thrpt:  [147.64 KiB/s 148.01 KiB/s 148.33 KiB/s]
change:
       time:   [-0.2751% -0.0122% +0.2643%] (p = 0.93 > 0.05)
       thrpt:  [-0.2636% +0.0122% +0.2759%]
       No change in performance detected.
Found 13 outliers among 100 measurements (13.00%)
8 (8.00%) high mild
5 (5.00%) high severe
streams/walltime/1-streams/each-1000-bytes: Change within noise threshold.
       time:   [589.69 µs 591.57 µs 593.85 µs]
       change: [+0.6674% +1.0541% +1.4718%] (p = 0.00 < 0.05)
       Change within noise threshold.
Found 5 outliers among 100 measurements (5.00%)
5 (5.00%) high severe
streams/walltime/1000-streams/each-1-bytes: 💔 Performance has regressed by +1.8355%.
       time:   [12.585 ms 12.605 ms 12.625 ms]
       change: [+1.6404% +1.8355% +2.0333%] (p = 0.00 < 0.05)
       Performance has regressed.
streams/walltime/1000-streams/each-1000-bytes: 💔 Performance has regressed by +7.2506%.
       time:   [48.466 ms 48.633 ms 48.898 ms]
       change: [+6.7283% +7.2506% +7.9155%] (p = 0.00 < 0.05)
       Performance has regressed.
Found 1 outlier among 100 measurements (1.00%)
1 (1.00%) high severe
coalesce_acked_from_zero 1+1 entries: No change in performance detected.
       time:   [92.183 ns 92.560 ns 92.936 ns]
       change: [-0.8805% -0.3517% +0.1421%] (p = 0.18 > 0.05)
       No change in performance detected.
Found 12 outliers among 100 measurements (12.00%)
10 (10.00%) high mild
2 (2.00%) high severe
coalesce_acked_from_zero 3+1 entries: No change in performance detected.
       time:   [109.71 ns 110.01 ns 110.33 ns]
       change: [-0.3644% -0.0200% +0.2849%] (p = 0.91 > 0.05)
       No change in performance detected.
Found 11 outliers among 100 measurements (11.00%)
4 (4.00%) high mild
7 (7.00%) high severe
coalesce_acked_from_zero 10+1 entries: No change in performance detected.
       time:   [109.10 ns 109.47 ns 109.92 ns]
       change: [-0.6781% +0.8239% +3.2824%] (p = 0.62 > 0.05)
       No change in performance detected.
Found 13 outliers among 100 measurements (13.00%)
2 (2.00%) low severe
3 (3.00%) low mild
3 (3.00%) high mild
5 (5.00%) high severe
coalesce_acked_from_zero 1000+1 entries: No change in performance detected.
       time:   [94.976 ns 95.197 ns 95.507 ns]
       change: [-0.6242% -0.0317% +0.5680%] (p = 0.92 > 0.05)
       No change in performance detected.
Found 7 outliers among 100 measurements (7.00%)
3 (3.00%) high mild
4 (4.00%) high severe
RxStreamOrderer::inbound_frame(): Change within noise threshold.
       time:   [109.72 ms 109.81 ms 109.90 ms]
       change: [+0.1794% +0.3875% +0.5494%] (p = 0.00 < 0.05)
       Change within noise threshold.
Found 15 outliers among 100 measurements (15.00%)
1 (1.00%) low severe
10 (10.00%) low mild
3 (3.00%) high mild
1 (1.00%) high severe
sent::Packets::take_ranges: No change in performance detected.
       time:   [4.5025 µs 4.6283 µs 4.7557 µs]
       change: [-2.6969% +0.7360% +4.5150%] (p = 0.68 > 0.05)
       No change in performance detected.
Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) high mild
1 (1.00%) high severe
transfer/simulated/pacing-false/varying-seeds: No change in performance detected.
       time:   [23.941 s 23.941 s 23.941 s]
       thrpt:  [171.09 KiB/s 171.09 KiB/s 171.09 KiB/s]
change:
       time:   [+0.0000% +0.0000% +0.0000] (p = NaN > 0.05)
       thrpt:  [+0.0000% +0.0000% +0.0000]
       No change in performance detected.
transfer/simulated/pacing-true/varying-seeds: No change in performance detected.
       time:   [23.676 s 23.676 s 23.676 s]
       thrpt:  [173.01 KiB/s 173.01 KiB/s 173.01 KiB/s]
change:
       time:   [+0.0000% +0.0000% +0.0000] (p = NaN > 0.05)
       thrpt:  [+0.0000% +0.0000% +0.0000]
       No change in performance detected.
transfer/simulated/pacing-false/same-seed: No change in performance detected.
       time:   [23.941 s 23.941 s 23.941 s]
       thrpt:  [171.09 KiB/s 171.09 KiB/s 171.09 KiB/s]
change:
       time:   [+0.0000% +0.0000% +0.0000] (p = NaN > 0.05)
       thrpt:  [+0.0000% +0.0000% +0.0000]
       No change in performance detected.
transfer/simulated/pacing-true/same-seed: No change in performance detected.
       time:   [23.676 s 23.676 s 23.676 s]
       thrpt:  [173.01 KiB/s 173.01 KiB/s 173.01 KiB/s]
change:
       time:   [+0.0000% +0.0000% +0.0000] (p = NaN > 0.05)
       thrpt:  [+0.0000% +0.0000% +0.0000]
       No change in performance detected.
transfer/walltime/pacing-false/varying-seeds: Change within noise threshold.
       time:   [23.458 ms 23.488 ms 23.532 ms]
       change: [+1.6206% +1.8364% +2.0688%] (p = 0.00 < 0.05)
       Change within noise threshold.
Found 3 outliers among 100 measurements (3.00%)
2 (2.00%) high mild
1 (1.00%) high severe
transfer/walltime/pacing-true/varying-seeds: Change within noise threshold.
       time:   [23.805 ms 23.837 ms 23.888 ms]
       change: [+0.7782% +0.9483% +1.1716%] (p = 0.00 < 0.05)
       Change within noise threshold.
Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) high mild
1 (1.00%) high severe
transfer/walltime/pacing-false/same-seed: Change within noise threshold.
       time:   [23.579 ms 23.600 ms 23.622 ms]
       change: [+0.8255% +0.9556% +1.0751%] (p = 0.00 < 0.05)
       Change within noise threshold.
Found 2 outliers among 100 measurements (2.00%)
2 (2.00%) high mild
transfer/walltime/pacing-true/same-seed: Change within noise threshold.
       time:   [23.795 ms 23.827 ms 23.875 ms]
       change: [+0.5357% +0.7851% +1.0010%] (p = 0.00 < 0.05)
       Change within noise threshold.
Found 1 outlier among 100 measurements (1.00%)
1 (1.00%) high severe

Download data for profiler.firefox.com or download performance comparison data.

@martinthomson (Member) commented

Two things:

  1. Can you split out the change?
  2. Can you produce a test case that would have failed prior to the fix? (Ideally, this operates at the connection level, but I understand that this might need to just work on internal objects.)

@larseggert larseggert linked an issue Feb 4, 2026 that may be closed by this pull request
@larseggert (Collaborator) commented

There is a ton of copy&paste repetition in the test code. Can we make this more DRY?

@larseggert (Collaborator) commented

@jesup is the key export stuff here addressing https://bugzilla.mozilla.org/show_bug.cgi?id=2007201?



Development

Successfully merging this pull request may close these issues.

Small improvement to set_max_stream_data()

4 participants