
feat(transport): make max_udp_payload_size tparam configurable#3207

Draft
mxinden wants to merge 2 commits intomozilla:mainfrom
mxinden:param-max-udp

Conversation

@mxinden
Member

@mxinden mxinden commented Dec 5, 2025

Some QUIC servers use an initial MTU larger than 1280 bytes. This is problematic when connecting to such servers via a proxy (e.g. MASQUE connect-udp). While most unproxied paths can handle >1280 bytes, some proxied paths cannot.

Firefox will need to restrict the max_udp_payload_size to 1232 bytes (i.e. 1280 - 40 (IPv6 header) - 8 (UDP header)) on proxied connections to support such restrictive paths.

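The arithmetic behind the 1232-byte figure can be sketched as follows. This is an illustration only, not neqo code; the constant and function names are made up for the example.

```rust
// Illustration of the arithmetic in the PR description (not neqo API):
// a 1280-byte outer datagram leaves 1232 bytes of UDP payload once the
// IPv6 (40-byte) and UDP (8-byte) headers are subtracted.
const IPV6_HEADER: usize = 40;
const UDP_HEADER: usize = 8;
const MIN_IPV6_MTU: usize = 1280; // minimum IPv6 link MTU (RFC 8200)

fn max_udp_payload(mtu: usize) -> usize {
    mtu - IPV6_HEADER - UDP_HEADER
}

fn main() {
    // The value Firefox would set on proxied connections.
    assert_eq!(max_udp_payload(MIN_IPV6_MTU), 1232);
    println!("{}", max_udp_payload(MIN_IPV6_MTU));
}
```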
@codecov

codecov bot commented Dec 5, 2025

Codecov Report

❌ Patch coverage is 9.09091% with 10 lines in your changes missing coverage. Please review.
✅ Project coverage is 93.93%. Comparing base (1913e3d) to head (f4e1b14).

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3207      +/-   ##
==========================================
- Coverage   93.99%   93.93%   -0.06%     
==========================================
  Files         124      124              
  Lines       37597    37608      +11     
  Branches    37597    37608      +11     
==========================================
- Hits        35340    35328      -12     
- Misses       1392     1412      +20     
- Partials      865      868       +3     
Components Coverage Δ
neqo-common 98.54% <ø> (ø)
neqo-crypto 84.39% <ø> (-0.47%) ⬇️
neqo-http3 93.86% <ø> (ø)
neqo-qpack 94.67% <ø> (ø)
neqo-transport 94.76% <9.09%> (-0.08%) ⬇️
neqo-udp 82.84% <ø> (+0.41%) ⬆️
mtu 88.94% <ø> (ø)

mxinden added a commit to mxinden/firefox that referenced this pull request Dec 5, 2025
@github-actions
Contributor

github-actions bot commented Dec 5, 2025

🐰 Bencher Report

Branch: param-max-udp
Testbed: On-prem

All benchmark results (latency in nanoseconds):

| Benchmark | Result (ns) | Δ% | Baseline (ns) | Upper boundary (ns) | Limit % |
|---|---|---|---|---|---|
| 1-conn/1-100mb-req/mtu-1504 (aka. Upload)/client | 203,180,000.00 | -2.28% | 207,916,450.62 | 216,896,744.57 | 93.68% |
| 1-conn/1-100mb-resp/mtu-1504 (aka. Download)/client | 200,330,000.00 | -1.04% | 202,440,169.75 | 211,911,657.44 | 94.53% |
| 1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client | 38,627,000.00 | +10.55% | 34,939,854.94 | 46,509,149.03 | 83.05% |
| 1-conn/10_000-parallel-1b-resp/mtu-1504 (aka. RPS)/client | 281,820,000.00 | -2.17% | 288,069,691.36 | 301,060,092.17 | 93.61% |
| 1-streams/each-1000-bytes/simulated-time | 118,950,000.00 | +0.08% | 118,859,675.93 | 120,423,236.27 | 98.78% |
| 1-streams/each-1000-bytes/wallclock-time | 583,870.00 | -0.88% | 589,041.28 | 608,661.92 | 95.93% |
| 1000-streams/each-1-bytes/simulated-time | 2,333,100,000.00 | -71.80% | 8,273,568,827.16 | 23,011,211,211.58 | 10.14% |
| 1000-streams/each-1-bytes/wallclock-time | 12,434,000.00 | -5.78% | 13,196,964.51 | 15,049,423.60 | 82.62% |
| 1000-streams/each-1000-bytes/simulated-time | 16,323,000,000.00 | -7.15% | 17,580,388,888.89 | 20,627,790,640.28 | 79.13% |
| 1000-streams/each-1000-bytes/wallclock-time | 50,291,000.00 | -0.36% | 50,471,824.07 | 55,233,588.97 | 91.05% |
| RxStreamOrderer::inbound_frame() | 108,900,000.00 | -0.65% | 109,613,734.57 | 111,420,930.89 | 97.74% |
| coalesce_acked_from_zero 1+1 entries | 89.65 | +0.47% | 89.23 | 90.60 | 98.96% |
| coalesce_acked_from_zero 10+1 entries | 105.70 | -0.29% | 106.01 | 107.07 | 98.72% |
| coalesce_acked_from_zero 1000+1 entries | 92.84 | +2.07% | 90.96 | 95.21 | 97.51% |
| coalesce_acked_from_zero 3+1 entries | 106.27 | -0.23% | 106.51 | 107.49 | 98.87% |
| decode 1048576 bytes, mask 3f | 1,421,600.00 | -16.86% | 1,709,973.15 | 2,491,081.15 | 57.07% |
| decode 1048576 bytes, mask 7f | 1,475,200.00 | -66.05% | 4,344,901.23 | 7,557,291.31 | 19.52% |
| decode 1048576 bytes, mask ff | 1,163,200.00 | -56.20% | 2,655,450.00 | 4,308,129.84 | 27.00% |
| decode 4096 bytes, mask 3f | 5,548.10 | -21.09% | 7,031.15 | 10,837.53 | 51.19% |
| decode 4096 bytes, mask 7f | 5,806.40 | -65.77% | 16,960.47 | 29,503.43 | 19.68% |
| decode 4096 bytes, mask ff | 4,520.10 | -54.92% | 10,027.78 | 16,139.90 | 28.01% |
| sent::Packets::take_ranges | 4,502.00 | -3.65% | 4,672.62 | 4,910.19 | 91.69% |
| transfer/pacing-false/same-seed/simulated-time/run | 23,941,000,000.00 | -4.42% | 25,047,339,009.29 | 26,487,231,770.17 | 90.39% |
| transfer/pacing-false/same-seed/wallclock-time/run | 23,385,000.00 | -6.50% | 25,009,817.34 | 27,289,661.85 | 85.69% |
| transfer/pacing-false/varying-seeds/simulated-time/run | 23,941,000,000.00 | -3.90% | 24,912,263,157.89 | 26,104,613,022.61 | 91.71% |
| transfer/pacing-false/varying-seeds/wallclock-time/run | 23,142,000.00 | -7.85% | 25,113,065.02 | 27,592,088.20 | 83.87% |
| transfer/pacing-true/same-seed/simulated-time/run | 23,676,000,000.00 | -5.47% | 25,045,541,795.67 | 26,803,225,262.35 | 88.33% |
| transfer/pacing-true/same-seed/wallclock-time/run | 23,808,000.00 | -8.58% | 26,043,438.08 | 29,160,393.40 | 81.64% |
| transfer/pacing-true/varying-seeds/simulated-time/run | 23,676,000,000.00 | -4.18% | 24,708,346,749.23 | 25,976,009,560.60 | 91.15% |
| transfer/pacing-true/varying-seeds/wallclock-time/run | 23,710,000.00 | -7.36% | 25,593,750.77 | 28,197,640.79 | 84.09% |
🐰 View full continuous benchmarking report in Bencher

@mxinden mxinden marked this pull request as ready for review December 5, 2025 12:27
Copilot AI review requested due to automatic review settings December 5, 2025 12:27
@mxinden mxinden requested a review from larseggert as a code owner December 5, 2025 12:27
Contributor

Copilot AI left a comment


Pull request overview

This PR adds support for configuring the max_udp_payload_size transport parameter to enable Firefox to restrict UDP payload sizes on proxied QUIC connections (e.g., through MASQUE connect-udp). The change allows setting a custom MTU limit (e.g., 1232 bytes) when the standard 1280-byte minimum is too large for certain proxied network paths.

Key Changes:

  • Added optional max_udp_payload_size field to ConnectionParameters struct
  • Implemented builder method to configure the transport parameter
  • Integrated parameter into transport parameters encoding when set
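A minimal sketch of the shape these changes describe. The field and method names follow the summary above, but the types and signatures here are illustrative stand-ins, not neqo's actual API.

```rust
// Hypothetical, simplified stand-in for neqo's `ConnectionParameters`
// showing the pattern the PR summary describes: an optional field, a
// builder-style setter, and conditional transport-parameter encoding.
#[derive(Default)]
struct ConnectionParameters {
    // `None` means the transport parameter is omitted, so the peer
    // falls back to the RFC 9000 default for max_udp_payload_size.
    max_udp_payload_size: Option<u64>,
}

impl ConnectionParameters {
    // Builder-style setter, consuming and returning `self`.
    fn max_udp_payload_size(mut self, size: u64) -> Self {
        self.max_udp_payload_size = Some(size);
        self
    }

    // Only encode the transport parameter when it was explicitly set.
    fn encoded_tparams(&self) -> Vec<(&'static str, u64)> {
        let mut tps = Vec::new();
        if let Some(size) = self.max_udp_payload_size {
            tps.push(("max_udp_payload_size", size));
        }
        tps
    }
}

fn main() {
    let params = ConnectionParameters::default().max_udp_payload_size(1232);
    assert_eq!(params.encoded_tparams(), [("max_udp_payload_size", 1232)]);
}
```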

@github-actions
Contributor

github-actions bot commented Dec 5, 2025

Failed Interop Tests

QUIC Interop Runner, client vs. server, differences relative to d070393.

neqo-latest as client:
neqo-latest vs. aioquic: A L1 ⚠️C1
neqo-latest vs. go-x-net: A BP BA
neqo-latest vs. haproxy: A ⚠️L1 BP BA
neqo-latest vs. kwik: BP BA
neqo-latest vs. linuxquic: A 🚀L1
neqo-latest vs. lsquic: run cancelled after 20 min
neqo-latest vs. msquic: ⚠️Z A L1 C1 🚀C2
neqo-latest vs. mvfst: A L1 ⚠️BA
neqo-latest vs. nginx: A 🚀L1 C1 BP BA
neqo-latest vs. ngtcp2: A C1 CM
neqo-latest vs. picoquic: Z A L1 C1
neqo-latest vs. quic-go: A ⚠️C1
neqo-latest vs. quiche: A 🚀C1 BP BA
neqo-latest vs. quinn: A ⚠️L1
neqo-latest vs. s2n-quic: A BA CM
neqo-latest vs. tquic: S A 🚀C1 BP BA
neqo-latest vs. xquic: A L1 🚀C1

neqo-latest as server:
aioquic vs. neqo-latest: ⚠️C1 CM
chrome vs. neqo-latest: 3
go-x-net vs. neqo-latest: CM
kwik vs. neqo-latest: BP BA CM
msquic vs. neqo-latest: ⚠️BP CM
mvfst vs. neqo-latest: 🚀M Z A L1 C1 CM
neqo vs. neqo-latest: ⚠️BP
openssl vs. neqo-latest: LR M A CM
quic-go vs. neqo-latest: CM
quiche vs. neqo-latest: 🚀C1 CM
quinn vs. neqo-latest: ⚠️L1 C1 V2 CM
s2n-quic vs. neqo-latest: CM
tquic vs. neqo-latest: CM
xquic vs. neqo-latest: M CM
All results

Succeeded Interop Tests

QUIC Interop Runner, client vs. server

neqo-latest as client

neqo-latest as server

Unsupported Interop Tests

QUIC Interop Runner, client vs. server

neqo-latest as client

neqo-latest as server

@martinthomson
Member

Thinking about this more, how does this really help with connect-udp? If the proxy has limits on the size of things it can handle in the server-to-client direction, that might be cause for us to let the server know of those limits, but that's not a limit we are made aware of, is it? Any limit we set should be based on our own limits, which we only have by virtue of knowing the local interface MTU (maybe), because we don't have any real limits on receiving ourselves.

In the client-to-server direction, we might benefit from knowing about both server and proxy limits, but those are just things we can use to restrict our MTU; we don't need configuration for that.

mxinden added a commit to mxinden/firefox that referenced this pull request Dec 8, 2025
mxinden added a commit to mxinden/firefox that referenced this pull request Dec 8, 2025
mxinden added a commit to mxinden/firefox that referenced this pull request Dec 8, 2025
mxinden added a commit to mxinden/firefox that referenced this pull request Dec 10, 2025
@larseggert
Collaborator

@mxinden see @martinthomson's comments – anything you want to address in response, or should I click the merge button?

mxinden added a commit to mxinden/firefox that referenced this pull request Dec 10, 2025
@mxinden
Member Author

mxinden commented Dec 10, 2025

If the proxy has limits on the size of things it can handle in the server-to-client direction, that might be cause for us to let the server know of those limits, but that's not a limit we are made aware of, is it?

Correct. Not a limit we are aware of. Thus far, we have simply applied this conservative limit to all proxied client<->server connections.

This is a trade-off. We can:

  • either not set max_udp_payload_size, and thus use a >1280-byte MTU on (server -> client) paths that support it,
  • or set max_udp_payload_size, and thus support (server -> client) paths limited to a 1280-byte MTU even against servers that would otherwise start with a >1280-byte MTU (since such servers honor the parameter).

Alternatively, we could fall back to HTTP CONNECT on paths limited to a 1280-byte MTU when the server starts with a >1280-byte MTU.

@martinthomson can you think of any other approaches?

Would it make sense to advocate major players (e.g. Fastly) to start with an MTU of 1280 and using PMTUD thereafter, instead of starting with a larger initial MTU?

@mxinden
Member Author

mxinden commented Dec 15, 2025

Friendly ping @martinthomson. Do you have thoughts on the above?

@Propheticus

Propheticus commented Dec 15, 2025

Would it make sense to advocate major players (e.g. Fastly) to start with an MTU of 1280

(First off, please let me know if this is annoying and I'll remove it and refrain from commenting.)

Reading the RFC9000 spec section on datagram size, this looks like the right approach.

Specifically section 14.2 states

An endpoint SHOULD use DPLPMTUD (Section 14.3) or PMTUD (Section 14.2.1) to determine whether the path to a destination will support a desired maximum datagram size without fragmentation. In the absence of these mechanisms, QUIC endpoints SHOULD NOT send datagrams larger than the smallest allowed maximum datagram size.

and the section 14.3 explaining DPLPMTUD states

Endpoints SHOULD set the initial value of BASE_PLPMTU (Section 5.1 of [DPLPMTUD]) to be consistent with QUIC's smallest allowed maximum datagram size. The MIN_PLPMTU is the same as the BASE_PLPMTU.

The smallest allowed maximum datagram size is 1200 bytes (which is also the minimum, so anything smaller needs padding to this size).

But then again 14.1 on initial size contains

Datagrams containing Initial packets MAY exceed 1200 bytes if the sender believes that the network path and peer both support the size that it chooses.

To me it's unclear how or why the sender would assume / believe anything without negotiating or discovering first though.
All you can safely assume is

QUIC assumes a minimum IP packet size of at least 1280 bytes


So yes, it would make sense to advocate that senders like Fastly start at an MTU of 1280.

Using max_udp_payload_size, which is meant (and documented) to convey limits in the capabilities of the endpoint, to work around limitations encountered on the path feels hacky. It could work, but it introduces deviations that in 5 years' time will be a nice headache for an engineer to troubleshoot, because it's unexpected behaviour.
Fixing someone else's problems like this is friendly, but it also adds debt that can keep growing as other workarounds are needed to fix side-effects of the out-of-spec usage.
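The RFC 9000 rules quoted above can be sketched as a small sizing helper. This is an illustration of the spec text, not neqo code; the function name and the shape of the path-MTU input are invented for the example, and it assumes the payload fits in 1200 bytes when the path is unvalidated.

```rust
// Sketch of the RFC 9000 Section 14 sizing rules quoted above:
// - datagrams carrying Initial packets are padded to at least 1200 bytes;
// - without PMTUD/DPLPMTUD evidence for the path, the sender should not
//   exceed the smallest allowed maximum datagram size (1200 bytes).
const SMALLEST_ALLOWED_MAX_DATAGRAM: usize = 1200; // RFC 9000, Section 14

fn initial_datagram_size(payload_len: usize, validated_path_mtu: Option<usize>) -> usize {
    // Pad Initial datagrams up to at least 1200 bytes.
    let padded = payload_len.max(SMALLEST_ALLOWED_MAX_DATAGRAM);
    match validated_path_mtu {
        // With a validated path MTU, larger datagrams are permitted.
        Some(mtu) => padded.min(mtu.max(SMALLEST_ALLOWED_MAX_DATAGRAM)),
        // Otherwise, stick to the smallest allowed maximum
        // (assumes the payload already fits in 1200 bytes).
        None => SMALLEST_ALLOWED_MAX_DATAGRAM,
    }
}

fn main() {
    // Unvalidated path: a 300-byte Initial is padded to 1200, never more.
    assert_eq!(initial_datagram_size(300, None), 1200);
    // Validated 1500-byte path: a 1400-byte Initial may be sent as-is.
    assert_eq!(initial_datagram_size(1400, Some(1500)), 1400);
}
```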

@mxinden
Member Author

mxinden commented Dec 15, 2025

(First off, please let me know if this is annoying and I'll remove it and refrain from commenting.)

High quality contributions, e.g. yours above, are always very welcome. Thanks @Propheticus!

@martinthomson
Member

The real challenge here is that servers that start with a too-large MTU will simply fail to connect in many cases. Transport parameters are almost deliberately not available to servers in the first round trip. This is because we take steps to shuffle the TLS ClientHello in ways that make it likely that transport parameters don't appear in the first packet. In that case, anything we say in transport parameters isn't going to help much.

In the case where the proxy has symmetric limits (same size limit on up- and down-stream -- a very reasonable assumption, even if not 100% guaranteed), there might be some value in us changing our actions, even if signaling doesn't work. But that is limited to helping servers that echo the client MTU size.

Those servers will already work best, so there is some value in encouraging servers to mirror the client packet size in their responses. That would allow us to use information we have about the proxy MTU size limits to affect connection success rates. It also lets us potentially probe for the handshake (first transmission with a high MTU, subsequent/retransmission packets with lower; ACKs of the first confirm that the higher MTU works).

I guess that my conclusion is that the transport parameter isn't much help, except in communications with the proxy. It's analogous to knowing the MTU of the associated network interface. It's not everything, but if you make some assumptions you can do a little better than the naive 1200.

@codspeed-hq
Copy link

codspeed-hq bot commented Jan 12, 2026

Merging this PR will degrade performance by 8.66%

⚡ 1 improved benchmark
❌ 6 regressed benchmarks
✅ 16 untouched benchmarks

⚠️ Please fix the performance issues or acknowledge them on CodSpeed.

Performance Changes

| Mode | Benchmark | BASE | HEAD | Efficiency |
|---|---|---|---|---|
| Simulation | client | 816.2 ms | 767.5 ms | +6.34% |
| Simulation | wallclock-time | 1 ms | 1.1 ms | -3.15% |
| Simulation | wallclock-time | 32.3 ms | 35.3 ms | -8.66% |
| Simulation | coalesce_acked_from_zero 1+1 entries | 2.9 µs | 3 µs | -3.92% |
| Simulation | coalesce_acked_from_zero 10+1 entries | 3 µs | 3.1 µs | -3.71% |
| Simulation | coalesce_acked_from_zero 3+1 entries | 3 µs | 3.1 µs | -3.71% |
| Simulation | coalesce_acked_from_zero 1000+1 entries | 2.6 µs | 2.8 µs | -7.32% |

Comparing mxinden:param-max-udp (f4e1b14) with main (e74f986) [1]

Open in CodSpeed

Footnotes

  1. No successful run was found on main (1913e3d) during the generation of this report, so e74f986 was used instead as the comparison base. There might be some changes unrelated to this pull request in this report.

@github-actions
Contributor

Client/server transfer results

Performance differences relative to 1913e3d.

Transfer of 33554432 bytes over loopback, min. 100 runs. All unit-less numbers are in milliseconds.

| Client vs. server (params) | Mean ± σ | Min | Max | MiB/s ± σ | Δ main (ms) | Δ main (%) |
|---|---|---|---|---|---|---|
| google-neqo-cubic | 271.7 ± 3.8 | 261.6 | 280.2 | 117.8 ± 8.4 | 💔 1.9 | 0.7% |
| neqo-msquic-cubic | 161.3 ± 4.7 | 154.6 | 175.3 | 198.3 ± 6.8 | 💔 1.3 | 0.8% |
| neqo-quiche-cubic | 193.3 ± 3.7 | 187.8 | 201.8 | 165.5 ± 8.6 | 💚 -1.2 | -0.6% |
| neqo-s2n-cubic | 224.4 ± 4.3 | 214.3 | 234.8 | 142.6 ± 7.4 | 💔 3.8 | 1.7% |

Table above only shows statistically significant changes. See all results below.

All results

Transfer of 33554432 bytes over loopback, min. 100 runs. All unit-less numbers are in milliseconds.

| Client vs. server (params) | Mean ± σ | Min | Max | MiB/s ± σ | Δ main (ms) | Δ main (%) |
|---|---|---|---|---|---|---|
| google-google-nopacing | 459.2 ± 4.0 | 453.1 | 470.2 | 69.7 ± 8.0 | | |
| google-neqo-cubic | 271.7 ± 3.8 | 261.6 | 280.2 | 117.8 ± 8.4 | 💔 1.9 | 0.7% |
| msquic-msquic-nopacing | 200.5 ± 73.7 | 140.5 | 432.2 | 159.6 ± 0.4 | | |
| msquic-neqo-cubic | 214.0 ± 60.9 | 159.0 | 489.7 | 149.5 ± 0.5 | -1.9 | -0.9% |
| neqo-google-cubic | 764.8 ± 4.2 | 757.2 | 776.8 | 41.8 ± 7.6 | 0.1 | 0.0% |
| neqo-msquic-cubic | 161.3 ± 4.7 | 154.6 | 175.3 | 198.3 ± 6.8 | 💔 1.3 | 0.8% |
| neqo-neqo-cubic | 97.2 ± 4.8 | 88.5 | 114.1 | 329.1 ± 6.7 | -0.6 | -0.7% |
| neqo-neqo-cubic-nopacing | 96.0 ± 4.2 | 88.0 | 105.4 | 333.2 ± 7.6 | 0.3 | 0.3% |
| neqo-neqo-newreno | 96.4 ± 4.7 | 88.4 | 107.3 | 331.9 ± 6.8 | 0.5 | 0.5% |
| neqo-neqo-newreno-nopacing | 95.8 ± 4.2 | 89.5 | 105.1 | 334.0 ± 7.6 | 0.1 | 0.1% |
| neqo-quiche-cubic | 193.3 ± 3.7 | 187.8 | 201.8 | 165.5 ± 8.6 | 💚 -1.2 | -0.6% |
| neqo-s2n-cubic | 224.4 ± 4.3 | 214.3 | 234.8 | 142.6 ± 7.4 | 💔 3.8 | 1.7% |
| quiche-neqo-cubic | 155.4 ± 6.4 | 143.2 | 183.2 | 205.9 ± 5.0 | 1.3 | 0.8% |
| quiche-quiche-nopacing | 145.5 ± 4.8 | 137.9 | 157.2 | 220.0 ± 6.7 | | |
| s2n-neqo-cubic | 174.5 ± 4.6 | 165.6 | 184.3 | 183.3 ± 7.0 | 0.5 | 0.3% |
| s2n-s2n-nopacing | 248.2 ± 20.3 | 232.4 | 352.9 | 128.9 ± 1.6 | | |

Download data for profiler.firefox.com or download performance comparison data.

@github-actions
Contributor

🐰 Bencher Report

Branch: param-max-udp
Testbed: On-prem

🚨 1 Alert: iteration 9, neqo-s2n-cubic latency of 224.40 ms (+1.49%, baseline 221.11 ms) exceeded its upper boundary of 224.32 ms (100.03% of limit).

All benchmark results (latency in milliseconds):

| Benchmark | Result (ms) | Δ% | Baseline (ms) | Upper boundary (ms) | Limit % |
|---|---|---|---|---|---|
| google-neqo-cubic | 271.69 | -1.03% | 274.51 | 284.07 | 95.64% |
| msquic-neqo-cubic | 213.98 | +3.43% | 206.87 | 240.95 | 88.81% |
| neqo-google-cubic | 764.82 | +0.59% | 760.34 | 787.66 | 97.10% |
| neqo-msquic-cubic | 161.34 | +1.46% | 159.02 | 162.46 | 99.31% |
| neqo-neqo-cubic-nopacing | 96.04 | -0.30% | 96.33 | 98.49 | 97.52% |
| neqo-neqo-cubic | 97.23 | -0.13% | 97.35 | 99.54 | 97.67% |
| neqo-neqo-newreno-nopacing | 95.82 | +0.21% | 95.62 | 97.66 | 98.12% |
| neqo-neqo-newreno | 96.41 | +0.29% | 96.13 | 98.03 | 98.34% |
| neqo-quiche-cubic | 193.31 | +0.46% | 192.43 | 195.48 | 98.89% |
| neqo-s2n-cubic 🚨 | 224.40 | +1.49% | 221.11 | 224.32 | 100.03% |
| quiche-neqo-cubic | 155.39 | +1.16% | 153.60 | 157.01 | 98.97% |
| s2n-neqo-cubic | 174.54 | +0.34% | 173.96 | 176.59 | 98.84% |
🐰 View full continuous benchmarking report in Bencher

@github-actions
Contributor

Benchmark results

Significant performance differences relative to 1913e3d.

1-conn/1-100mb-req/mtu-1504 (aka. Upload)/client: 💚 Performance has improved by -2.2668%.
       time:   [202.80 ms 203.18 ms 203.64 ms]
       thrpt:  [491.06 MiB/s 492.18 MiB/s 493.11 MiB/s]
change:
       time:   [-2.5999% -2.2668% -1.9562%] (p = 0.00 < 0.05)
       thrpt:  [+1.9953% +2.3194% +2.6693%]
       Performance has improved.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high severe
All results
1-conn/1-100mb-resp/mtu-1504 (aka. Download)/client: Change within noise threshold.
       time:   [199.94 ms 200.33 ms 200.78 ms]
       thrpt:  [498.06 MiB/s 499.17 MiB/s 500.16 MiB/s]
change:
       time:   [-1.2584% -1.0046% -0.7320%] (p = 0.00 < 0.05)
       thrpt:  [+0.7374% +1.0148% +1.2744%]
       Change within noise threshold.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high severe
1-conn/10_000-parallel-1b-resp/mtu-1504 (aka. RPS)/client: No change in performance detected.
       time:   [279.65 ms 281.82 ms 284.06 ms]
       thrpt:  [35.204 Kelem/s 35.483 Kelem/s 35.760 Kelem/s]
change:
       time:   [-0.8460% +0.1706% +1.2046%] (p = 0.75 > 0.05)
       thrpt:  [-1.1903% -0.1703% +0.8533%]
       No change in performance detected.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client: No change in performance detected.
       time:   [38.465 ms 38.627 ms 38.804 ms]
       thrpt:  [25.770   B/s 25.889   B/s 25.998   B/s]
change:
       time:   [-0.7363% -0.1382% +0.4706%] (p = 0.65 > 0.05)
       thrpt:  [-0.4684% +0.1384% +0.7417%]
       No change in performance detected.
Found 10 outliers among 100 measurements (10.00%)
3 (3.00%) high mild
7 (7.00%) high severe
decode 4096 bytes, mask ff: No change in performance detected.
       time:   [4.5129 µs 4.5201 µs 4.5273 µs]
       change: [-0.4425% -0.1491% +0.1729] (p = 0.33 > 0.05)
       No change in performance detected.
Found 5 outliers among 100 measurements (5.00%)
3 (3.00%) high mild
2 (2.00%) high severe
decode 1048576 bytes, mask ff: No change in performance detected.
       time:   [1.1611 ms 1.1632 ms 1.1656 ms]
       change: [-0.7592% +0.1965% +1.1590] (p = 0.69 > 0.05)
       No change in performance detected.
Found 11 outliers among 100 measurements (11.00%)
11 (11.00%) high severe
decode 4096 bytes, mask 7f: No change in performance detected.
       time:   [5.7984 µs 5.8064 µs 5.8143 µs]
       change: [-0.2455% +0.1464% +0.5713] (p = 0.53 > 0.05)
       No change in performance detected.
Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) high mild
1 (1.00%) high severe
decode 1048576 bytes, mask 7f: Change within noise threshold.
       time:   [1.4731 ms 1.4752 ms 1.4773 ms]
       change: [-0.8464% -0.6462% -0.4381] (p = 0.00 < 0.05)
       Change within noise threshold.
decode 4096 bytes, mask 3f: No change in performance detected.
       time:   [5.5395 µs 5.5481 µs 5.5569 µs]
       change: [-0.5777% -0.1358% +0.2166] (p = 0.56 > 0.05)
       No change in performance detected.
Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) high mild
1 (1.00%) high severe
decode 1048576 bytes, mask 3f: No change in performance detected.
       time:   [1.4158 ms 1.4216 ms 1.4313 ms]
       change: [-0.1166% +0.3327% +1.0179] (p = 0.34 > 0.05)
       No change in performance detected.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high severe
1-streams/each-1000-bytes/wallclock-time: No change in performance detected.
       time:   [581.87 µs 583.87 µs 586.18 µs]
       change: [-1.0893% -0.5137% +0.0920] (p = 0.08 > 0.05)
       No change in performance detected.
Found 7 outliers among 100 measurements (7.00%)
7 (7.00%) high severe
1-streams/each-1000-bytes/simulated-time: Change within noise threshold.
       time:   [118.74 ms 118.95 ms 119.16 ms]
       thrpt:  [8.1953 KiB/s 8.2097 KiB/s 8.2241 KiB/s]
change:
       time:   [-0.5379% -0.2873% -0.0190] (p = 0.03 < 0.05)
       thrpt:  [+0.0190% +0.2881% +0.5408]
       Change within noise threshold.
1000-streams/each-1-bytes/wallclock-time: No change in performance detected.
       time:   [12.395 ms 12.434 ms 12.474 ms]
       change: [-0.3719% +0.1170% +0.6106] (p = 0.65 > 0.05)
       No change in performance detected.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
1000-streams/each-1-bytes/simulated-time: No change in performance detected.
       time:   [2.3294 s 2.3331 s 2.3368 s]
       thrpt:  [427.94   B/s 428.61   B/s 429.29   B/s]
change:
       time:   [-0.1989% +0.0525% +0.2862] (p = 0.67 > 0.05)
       thrpt:  [-0.2854% -0.0524% +0.1993]
       No change in performance detected.
1000-streams/each-1000-bytes/wallclock-time: Change within noise threshold.
       time:   [50.181 ms 50.291 ms 50.402 ms]
       change: [+0.9579% +1.3048% +1.6117] (p = 0.00 < 0.05)
       Change within noise threshold.
1000-streams/each-1000-bytes/simulated-time: No change in performance detected.
       time:   [16.101 s 16.323 s 16.546 s]
       thrpt:  [59.023 KiB/s 59.829 KiB/s 60.654 KiB/s]
change:
       time:   [-2.1973% -0.1439% +1.9979] (p = 0.90 > 0.05)
       thrpt:  [-1.9588% +0.1441% +2.2466]
       No change in performance detected.
coalesce_acked_from_zero 1+1 entries: No change in performance detected.
       time:   [89.329 ns 89.654 ns 89.975 ns]
       change: [-0.3508% +0.0866% +0.5174] (p = 0.70 > 0.05)
       No change in performance detected.
Found 11 outliers among 100 measurements (11.00%)
10 (10.00%) high mild
1 (1.00%) high severe
coalesce_acked_from_zero 3+1 entries: No change in performance detected.
       time:   [105.88 ns 106.27 ns 106.74 ns]
       change: [-0.3098% +0.2661% +0.8816] (p = 0.41 > 0.05)
       No change in performance detected.
Found 12 outliers among 100 measurements (12.00%)
5 (5.00%) high mild
7 (7.00%) high severe
coalesce_acked_from_zero 10+1 entries: No change in performance detected.
       time:   [105.27 ns 105.70 ns 106.22 ns]
       change: [-0.5029% +0.0838% +0.6371] (p = 0.78 > 0.05)
       No change in performance detected.
Found 12 outliers among 100 measurements (12.00%)
5 (5.00%) low mild
7 (7.00%) high severe
coalesce_acked_from_zero 1000+1 entries: No change in performance detected.
       time:   [90.393 ns 92.839 ns 98.475 ns]
       change: [-0.2439% +4.7438% +14.033] (p = 0.31 > 0.05)
       No change in performance detected.
Found 11 outliers among 100 measurements (11.00%)
3 (3.00%) high mild
8 (8.00%) high severe
RxStreamOrderer::inbound_frame(): Change within noise threshold.
       time:   [108.73 ms 108.90 ms 109.19 ms]
       change: [-1.2799% -1.0968% -0.8395] (p = 0.00 < 0.05)
       Change within noise threshold.
Found 6 outliers among 100 measurements (6.00%)
2 (2.00%) low mild
2 (2.00%) high mild
2 (2.00%) high severe
sent::Packets::take_ranges: No change in performance detected.
       time:   [4.4157 µs 4.5020 µs 4.5740 µs]
       change: [-8.6026% -4.1867% -0.0737] (p = 0.06 > 0.05)
       No change in performance detected.
Found 2 outliers among 100 measurements (2.00%)
2 (2.00%) high mild
transfer/pacing-false/varying-seeds/wallclock-time/run: Change within noise threshold.
       time:   [23.127 ms 23.142 ms 23.157 ms]
       change: [+0.7716% +0.9488% +1.0888] (p = 0.00 < 0.05)
       Change within noise threshold.
Found 3 outliers among 100 measurements (3.00%)
3 (3.00%) high mild
transfer/pacing-false/varying-seeds/simulated-time/run: No change in performance detected.
       time:   [23.941 s 23.941 s 23.941 s]
       thrpt:  [171.09 KiB/s 171.09 KiB/s 171.09 KiB/s]
change:
       time:   [+0.0000% +0.0000% +0.0000] (p = NaN > 0.05)
       thrpt:  [+0.0000% +0.0000% +0.0000]
       No change in performance detected.
transfer/pacing-true/varying-seeds/wallclock-time/run: Change within noise threshold.
       time:   [23.693 ms 23.710 ms 23.728 ms]
       change: [+1.4881% +1.7092% +1.8617] (p = 0.00 < 0.05)
       Change within noise threshold.
transfer/pacing-true/varying-seeds/simulated-time/run: No change in performance detected.
       time:   [23.676 s 23.676 s 23.676 s]
       thrpt:  [173.01 KiB/s 173.01 KiB/s 173.01 KiB/s]
change:
       time:   [+0.0000% +0.0000% +0.0000] (p = NaN > 0.05)
       thrpt:  [+0.0000% +0.0000% +0.0000]
       No change in performance detected.
transfer/pacing-false/same-seed/wallclock-time/run: Change within noise threshold.
       time:   [23.369 ms 23.385 ms 23.401 ms]
       change: [+0.1718% +0.3273% +0.4633] (p = 0.00 < 0.05)
       Change within noise threshold.
Found 3 outliers among 100 measurements (3.00%)
3 (3.00%) high mild
transfer/pacing-false/same-seed/simulated-time/run: No change in performance detected.
       time:   [23.941 s 23.941 s 23.941 s]
       thrpt:  [171.09 KiB/s 171.09 KiB/s 171.09 KiB/s]
change:
       time:   [+0.0000% +0.0000% +0.0000] (p = NaN > 0.05)
       thrpt:  [+0.0000% +0.0000% +0.0000]
       No change in performance detected.
transfer/pacing-true/same-seed/wallclock-time/run: No change in performance detected.
       time:   [23.788 ms 23.808 ms 23.831 ms]
       change: [-0.2069% +0.0137% +0.1779] (p = 0.90 > 0.05)
       No change in performance detected.
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) low mild
1 (1.00%) high severe
transfer/pacing-true/same-seed/simulated-time/run: No change in performance detected.
       time:   [23.676 s 23.676 s 23.676 s]
       thrpt:  [173.01 KiB/s 173.01 KiB/s 173.01 KiB/s]
change:
       time:   [+0.0000% +0.0000% +0.0000] (p = NaN > 0.05)
       thrpt:  [+0.0000% +0.0000% +0.0000]
       No change in performance detected.

Download data for profiler.firefox.com or download performance comparison data.

@larseggert
Collaborator

@mxinden @martinthomson anything left to discuss here? Can this be merged?

@martinthomson
Member

I'm of the view that this isn't going to help anything. We don't have constraints on our own ability to handle different MTUs, and that is all this signaling can communicate. I'm interested in where @mxinden stands.

@mxinden
Member Author

mxinden commented Feb 28, 2026

This is a (hacky) solution to a problem we are facing with websites behind Fastly's CDN through the Fastly proxy, namely that the CDN starts with a too-high MTU.

Agreed with @Propheticus and @martinthomson that this is a hack.

Once we resume the MASQUE project I will give this more thought. For now, marking as draft. Thanks for the input everyone.

@mxinden mxinden marked this pull request as draft February 28, 2026 14:06