feat(transport): make max_udp_payload_size tparam configurable #3207
mxinden wants to merge 2 commits into mozilla:main from param-max-udp
Conversation
Some QUIC servers use an initial MTU larger than 1280 bytes. This is problematic when connecting to such servers via a proxy (e.g. MASQUE connect-udp). While most unproxied paths can handle >1280 bytes, some proxied paths cannot. Firefox will need to restrict the max_udp_payload_size to 1232 bytes (i.e. 1232 + 40 (IPv6 header) + 8 (UDP header) = 1280) on proxied connections to support such restrictive paths.
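As a rough illustration of the arithmetic above (a minimal sketch; the constants and function names are illustrative, not neqo API):

```rust
// Illustrative only: derive the UDP payload limit for a path whose MTU is
// 1280 bytes when using IPv6. 1280 - 40 (IPv6 header) - 8 (UDP header) = 1232.
const IPV6_HEADER: usize = 40;
const UDP_HEADER: usize = 8;

fn max_udp_payload_for(path_mtu: usize) -> usize {
    path_mtu - IPV6_HEADER - UDP_HEADER
}

fn main() {
    // A 1280-byte path MTU leaves 1232 bytes for the QUIC packet.
    assert_eq!(max_udp_payload_for(1280), 1232);
}
```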
Codecov Report
❌ Patch coverage is
Additional details and impacted files

@@           Coverage Diff            @@
##             main    #3207      +/- ##
==========================================
- Coverage   93.99%   93.93%   -0.06%
==========================================
  Files         124      124
  Lines       37597    37608      +11
  Branches    37597    37608      +11
==========================================
- Hits        35340    35328      -12
- Misses       1392     1412      +20
- Partials      865      868       +3
Contains:
- mozilla/neqo#3176
- mozilla/neqo#3171
- mozilla/neqo#3207
| Branch | param-max-udp |
| Testbed | On-prem |
Click to view all benchmark results
| Benchmark | Latency | Benchmark Result nanoseconds (ns) (Result Δ%) | Upper Boundary nanoseconds (ns) (Limit %) |
|---|---|---|---|
| 1-conn/1-100mb-req/mtu-1504 (aka. Upload)/client | 📈 view plot 🚷 view threshold | 203,180,000.00 ns(-2.28%)Baseline: 207,916,450.62 ns | 216,896,744.57 ns (93.68%) |
| 1-conn/1-100mb-resp/mtu-1504 (aka. Download)/client | 📈 view plot 🚷 view threshold | 200,330,000.00 ns(-1.04%)Baseline: 202,440,169.75 ns | 211,911,657.44 ns (94.53%) |
| 1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client | 📈 view plot 🚷 view threshold | 38,627,000.00 ns(+10.55%)Baseline: 34,939,854.94 ns | 46,509,149.03 ns (83.05%) |
| 1-conn/10_000-parallel-1b-resp/mtu-1504 (aka. RPS)/client | 📈 view plot 🚷 view threshold | 281,820,000.00 ns(-2.17%)Baseline: 288,069,691.36 ns | 301,060,092.17 ns (93.61%) |
| 1-streams/each-1000-bytes/simulated-time | 📈 view plot 🚷 view threshold | 118,950,000.00 ns(+0.08%)Baseline: 118,859,675.93 ns | 120,423,236.27 ns (98.78%) |
| 1-streams/each-1000-bytes/wallclock-time | 📈 view plot 🚷 view threshold | 583,870.00 ns(-0.88%)Baseline: 589,041.28 ns | 608,661.92 ns (95.93%) |
| 1000-streams/each-1-bytes/simulated-time | 📈 view plot 🚷 view threshold | 2,333,100,000.00 ns(-71.80%)Baseline: 8,273,568,827.16 ns | 23,011,211,211.58 ns (10.14%) |
| 1000-streams/each-1-bytes/wallclock-time | 📈 view plot 🚷 view threshold | 12,434,000.00 ns(-5.78%)Baseline: 13,196,964.51 ns | 15,049,423.60 ns (82.62%) |
| 1000-streams/each-1000-bytes/simulated-time | 📈 view plot 🚷 view threshold | 16,323,000,000.00 ns(-7.15%)Baseline: 17,580,388,888.89 ns | 20,627,790,640.28 ns (79.13%) |
| 1000-streams/each-1000-bytes/wallclock-time | 📈 view plot 🚷 view threshold | 50,291,000.00 ns(-0.36%)Baseline: 50,471,824.07 ns | 55,233,588.97 ns (91.05%) |
| RxStreamOrderer::inbound_frame() | 📈 view plot 🚷 view threshold | 108,900,000.00 ns(-0.65%)Baseline: 109,613,734.57 ns | 111,420,930.89 ns (97.74%) |
| coalesce_acked_from_zero 1+1 entries | 📈 view plot 🚷 view threshold | 89.65 ns(+0.47%)Baseline: 89.23 ns | 90.60 ns (98.96%) |
| coalesce_acked_from_zero 10+1 entries | 📈 view plot 🚷 view threshold | 105.70 ns(-0.29%)Baseline: 106.01 ns | 107.07 ns (98.72%) |
| coalesce_acked_from_zero 1000+1 entries | 📈 view plot 🚷 view threshold | 92.84 ns(+2.07%)Baseline: 90.96 ns | 95.21 ns (97.51%) |
| coalesce_acked_from_zero 3+1 entries | 📈 view plot 🚷 view threshold | 106.27 ns(-0.23%)Baseline: 106.51 ns | 107.49 ns (98.87%) |
| decode 1048576 bytes, mask 3f | 📈 view plot 🚷 view threshold | 1,421,600.00 ns(-16.86%)Baseline: 1,709,973.15 ns | 2,491,081.15 ns (57.07%) |
| decode 1048576 bytes, mask 7f | 📈 view plot 🚷 view threshold | 1,475,200.00 ns(-66.05%)Baseline: 4,344,901.23 ns | 7,557,291.31 ns (19.52%) |
| decode 1048576 bytes, mask ff | 📈 view plot 🚷 view threshold | 1,163,200.00 ns(-56.20%)Baseline: 2,655,450.00 ns | 4,308,129.84 ns (27.00%) |
| decode 4096 bytes, mask 3f | 📈 view plot 🚷 view threshold | 5,548.10 ns(-21.09%)Baseline: 7,031.15 ns | 10,837.53 ns (51.19%) |
| decode 4096 bytes, mask 7f | 📈 view plot 🚷 view threshold | 5,806.40 ns(-65.77%)Baseline: 16,960.47 ns | 29,503.43 ns (19.68%) |
| decode 4096 bytes, mask ff | 📈 view plot 🚷 view threshold | 4,520.10 ns(-54.92%)Baseline: 10,027.78 ns | 16,139.90 ns (28.01%) |
| sent::Packets::take_ranges | 📈 view plot 🚷 view threshold | 4,502.00 ns(-3.65%)Baseline: 4,672.62 ns | 4,910.19 ns (91.69%) |
| transfer/pacing-false/same-seed/simulated-time/run | 📈 view plot 🚷 view threshold | 23,941,000,000.00 ns(-4.42%)Baseline: 25,047,339,009.29 ns | 26,487,231,770.17 ns (90.39%) |
| transfer/pacing-false/same-seed/wallclock-time/run | 📈 view plot 🚷 view threshold | 23,385,000.00 ns(-6.50%)Baseline: 25,009,817.34 ns | 27,289,661.85 ns (85.69%) |
| transfer/pacing-false/varying-seeds/simulated-time/run | 📈 view plot 🚷 view threshold | 23,941,000,000.00 ns(-3.90%)Baseline: 24,912,263,157.89 ns | 26,104,613,022.61 ns (91.71%) |
| transfer/pacing-false/varying-seeds/wallclock-time/run | 📈 view plot 🚷 view threshold | 23,142,000.00 ns(-7.85%)Baseline: 25,113,065.02 ns | 27,592,088.20 ns (83.87%) |
| transfer/pacing-true/same-seed/simulated-time/run | 📈 view plot 🚷 view threshold | 23,676,000,000.00 ns(-5.47%)Baseline: 25,045,541,795.67 ns | 26,803,225,262.35 ns (88.33%) |
| transfer/pacing-true/same-seed/wallclock-time/run | 📈 view plot 🚷 view threshold | 23,808,000.00 ns(-8.58%)Baseline: 26,043,438.08 ns | 29,160,393.40 ns (81.64%) |
| transfer/pacing-true/varying-seeds/simulated-time/run | 📈 view plot 🚷 view threshold | 23,676,000,000.00 ns(-4.18%)Baseline: 24,708,346,749.23 ns | 25,976,009,560.60 ns (91.15%) |
| transfer/pacing-true/varying-seeds/wallclock-time/run | 📈 view plot 🚷 view threshold | 23,710,000.00 ns(-7.36%)Baseline: 25,593,750.77 ns | 28,197,640.79 ns (84.09%) |
Pull request overview
This PR adds support for configuring the max_udp_payload_size transport parameter to enable Firefox to restrict UDP payload sizes on proxied QUIC connections (e.g., through MASQUE connect-udp). The change allows setting a custom MTU limit (e.g., 1232 bytes) when the standard 1280-byte minimum is too large for certain proxied network paths.
Key Changes:
- Added optional max_udp_payload_size field to ConnectionParameters struct
- Implemented builder method to configure the transport parameter
- Integrated parameter into transport parameters encoding when set
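As a rough sketch of what such a configuration surface could look like (hypothetical field and method names, patterned after a builder-style ConnectionParameters; not the exact neqo code):

```rust
// Hypothetical sketch, not the actual neqo implementation.
#[derive(Default)]
pub struct ConnectionParameters {
    /// When set, advertised to the peer as the max_udp_payload_size
    /// transport parameter; when `None`, the parameter is omitted and the
    /// protocol default applies.
    max_udp_payload_size: Option<u64>,
    // ... other parameters elided ...
}

impl ConnectionParameters {
    /// Builder-style setter for the max_udp_payload_size transport parameter.
    #[must_use]
    pub fn max_udp_payload_size(mut self, size: u64) -> Self {
        self.max_udp_payload_size = Some(size);
        self
    }
}

fn main() {
    // e.g. a proxied Firefox connection restricted to 1232-byte payloads.
    let params = ConnectionParameters::default().max_udp_payload_size(1232);
    assert_eq!(params.max_udp_payload_size, Some(1232));
}
```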
Failed Interop Tests
QUIC Interop Runner, client vs. server, differences relative to d070393. All results

Succeeded Interop Tests
QUIC Interop Runner, client vs. server
neqo-latest as client
neqo-latest as server

Unsupported Interop Tests
QUIC Interop Runner, client vs. server
neqo-latest as client
neqo-latest as server
Thinking about this more, how does this really help with connect-udp? If the proxy has limits on the size of things it can handle in the server-to-client direction, that might be cause for us to let the server know of those limits, but that's not a limit we are made aware of, is it?

Any limit we set should be based on our own limits, which we only have by virtue of knowing the local interface MTU (maybe), because we don't have any real limits on receiving ourselves. In the client-to-server direction, we might benefit from knowing about both server and proxy limits, but those are just things we can use to restrict our MTU; we don't need configuration for that.
Contains:
- mozilla/neqo#3176
- mozilla/neqo#3171
- mozilla/neqo#3207
- mozilla/neqo#3234
@mxinden see @martinthomson's comments – anything you want to address in response, or should I click the merge button?
Correct. Not a limit we are aware of. Thus far, we have simply applied this conservative limit to all proxied client<->server connections. This is a trade-off. We can:
Alternatively, we could fall back to HTTP CONNECT on paths with a 1280-byte MTU AND servers that start with an MTU larger than 1280 bytes. @martinthomson, can you think of any other approaches? Would it make sense to advocate that major players (e.g. Fastly) start with an MTU of 1280 and use PMTUD thereafter, instead of starting with a larger initial MTU?
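A minimal sketch of that fallback idea, just to make the trade-off concrete (hypothetical types and logic, not neqo or Firefox code): on a proxied path limited to 1280-byte datagrams, tunnel via HTTP CONNECT whenever the server is known to start above that limit, and keep connect-udp otherwise.

```rust
// Hypothetical decision logic, not neqo or Firefox code.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Tunnel {
    ConnectUdp,
    HttpConnect,
}

fn choose_tunnel(proxied_path_limit: usize, server_initial_mtu: usize) -> Tunnel {
    if server_initial_mtu > proxied_path_limit {
        // The server's first flight would not fit through the proxy.
        Tunnel::HttpConnect
    } else {
        Tunnel::ConnectUdp
    }
}

fn main() {
    // A server starting at 1350 bytes on a path limited to 1280 bytes.
    assert_eq!(choose_tunnel(1280, 1350), Tunnel::HttpConnect);
    assert_eq!(choose_tunnel(1280, 1280), Tunnel::ConnectUdp);
}
```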
Friendly ping @martinthomson. Do you have thoughts on the above?
(First off, please let me know if this is annoying and I'll remove it and refrain from commenting.) Reading the RFC 9000 spec section on datagram size, this looks like the right approach. Specifically, section 14.2 states
and section 14.3, explaining DPLPMTUD, states
The smallest allowed maximum is 1200 (which is also the minimum, so anything smaller needs padding to this size). But then again, 14.1 on initial size contains
To me, it's unclear how or why the sender would assume or believe anything without negotiating or discovering first, though.
So yes, it would make sense to advocate that senders, like Fastly, start at an MTU of 1280. Using
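For reference, RFC 9000 also makes values of the max_udp_payload_size transport parameter below 1200 invalid (Section 18.2), so any configured limit has to stay at or above that floor. A minimal sketch of that check (illustrative names, not neqo code):

```rust
// Illustrative validation, not neqo code. RFC 9000, Section 18.2: values of
// max_udp_payload_size below 1200 are invalid.
const MIN_MAX_UDP_PAYLOAD_SIZE: u64 = 1200;

fn validate_max_udp_payload_size(value: u64) -> Result<u64, &'static str> {
    if value < MIN_MAX_UDP_PAYLOAD_SIZE {
        Err("max_udp_payload_size values below 1200 are invalid")
    } else {
        Ok(value)
    }
}

fn main() {
    assert!(validate_max_udp_payload_size(1232).is_ok()); // proposed Firefox limit
    assert!(validate_max_udp_payload_size(1199).is_err());
}
```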
High-quality contributions, e.g. yours above, are always very welcome. Thanks @Propheticus!
The real challenge here is that servers that start with a too-large MTU will simply fail to connect in many cases. Transport parameters are almost deliberately not available to servers in the first round trip. This is because we take steps to shuffle the TLS ClientHello in ways that make it likely that transport parameters don't appear in the first packet. In that case, anything we say in transport parameters isn't going to help much.

In the case where the proxy has symmetric limits (same size limit on up- and down-stream -- a very reasonable assumption, even if not 100% guaranteed), there might be some value in us changing our actions, even if signaling doesn't work. But that is limited to helping servers that echo the client MTU size. Those servers will already work best, so there is some value in encouraging servers to mirror the client packet size in their responses. That would allow us to use information we have about the proxy MTU size limits to affect connection success rates. It also lets us potentially probe for the handshake (first transmission with a high MTU, subsequent/retransmission packets with lower; ACKs of the first confirm that the higher MTU works).

I guess that my conclusion is that the transport parameter isn't much help, except in communications with the proxy. It's analogous to knowing the MTU of the associated network interface. It's not everything, but if you make some assumptions you can do a little better than the naive 1200.
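To make the probing idea concrete, a toy sketch (hypothetical, not neqo code): the first transmission of a handshake flight goes out at the optimistic size, any retransmission falls back to a conservative size, and an ACK of the large packet confirms the larger MTU works on this path.

```rust
// Toy illustration of the probing strategy described above; sizes are examples.
const CONSERVATIVE_SIZE: usize = 1200;

fn handshake_datagram_size(optimistic_size: usize, is_retransmission: bool) -> usize {
    if is_retransmission {
        // The optimistic packet may have been dropped by a restrictive path;
        // retry at the conservative size.
        CONSERVATIVE_SIZE
    } else {
        optimistic_size
    }
}

fn main() {
    assert_eq!(handshake_datagram_size(1350, false), 1350);
    assert_eq!(handshake_datagram_size(1350, true), 1200);
}
```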
Merging this PR will degrade performance by 8.66%
Performance Changes
Client/server transfer results
Performance differences relative to 1913e3d. Transfer of 33554432 bytes over loopback, min. 100 runs. All unit-less numbers are in milliseconds.
Table above only shows statistically significant changes. See all results below.
All results
Transfer of 33554432 bytes over loopback, min. 100 runs. All unit-less numbers are in milliseconds.
| Branch | param-max-udp |
| Testbed | On-prem |
🚨 1 Alert
| Iteration | Benchmark | Measure Units | View | Benchmark Result (Result Δ%) | Upper Boundary (Limit %) |
|---|---|---|---|---|---|
| 9 | neqo-s2n-cubic | Latency milliseconds (ms) | 📈 plot 🚷 threshold 🚨 alert (🔔) | 224.40 ms (+1.49%), Baseline: 221.11 ms | 224.32 ms (100.03%) |
Click to view all benchmark results
| Benchmark | Latency | Benchmark Result milliseconds (ms) (Result Δ%) | Upper Boundary milliseconds (ms) (Limit %) |
|---|---|---|---|
| google-neqo-cubic | 📈 view plot 🚷 view threshold | 271.69 ms (-1.03%), Baseline: 274.51 ms | 284.07 ms (95.64%) |
| msquic-neqo-cubic | 📈 view plot 🚷 view threshold | 213.98 ms (+3.43%), Baseline: 206.87 ms | 240.95 ms (88.81%) |
| neqo-google-cubic | 📈 view plot 🚷 view threshold | 764.82 ms (+0.59%), Baseline: 760.34 ms | 787.66 ms (97.10%) |
| neqo-msquic-cubic | 📈 view plot 🚷 view threshold | 161.34 ms (+1.46%), Baseline: 159.02 ms | 162.46 ms (99.31%) |
| neqo-neqo-cubic-nopacing | 📈 view plot 🚷 view threshold | 96.04 ms (-0.30%), Baseline: 96.33 ms | 98.49 ms (97.52%) |
| neqo-neqo-cubic | 📈 view plot 🚷 view threshold | 97.23 ms (-0.13%), Baseline: 97.35 ms | 99.54 ms (97.67%) |
| neqo-neqo-newreno-nopacing | 📈 view plot 🚷 view threshold | 95.82 ms (+0.21%), Baseline: 95.62 ms | 97.66 ms (98.12%) |
| neqo-neqo-newreno | 📈 view plot 🚷 view threshold | 96.41 ms (+0.29%), Baseline: 96.13 ms | 98.03 ms (98.34%) |
| neqo-quiche-cubic | 📈 view plot 🚷 view threshold | 193.31 ms (+0.46%), Baseline: 192.43 ms | 195.48 ms (98.89%) |
| neqo-s2n-cubic | 📈 view plot 🚷 view threshold 🚨 view alert (🔔) | 224.40 ms (+1.49%), Baseline: 221.11 ms | 224.32 ms (100.03%) |
| quiche-neqo-cubic | 📈 view plot 🚷 view threshold | 155.39 ms (+1.16%), Baseline: 153.60 ms | 157.01 ms (98.97%) |
| s2n-neqo-cubic | 📈 view plot 🚷 view threshold | 174.54 ms (+0.34%), Baseline: 173.96 ms | 176.59 ms (98.84%) |
Benchmark results
Significant performance differences relative to 1913e3d.
1-conn/1-100mb-req/mtu-1504 (aka. Upload)/client: 💚 Performance has improved by -2.2668%. time: [202.80 ms 203.18 ms 203.64 ms]
thrpt: [491.06 MiB/s 492.18 MiB/s 493.11 MiB/s]
change:
time: [-2.5999% -2.2668% -1.9562] (p = 0.00 < 0.05)
thrpt: [+1.9953% +2.3194% +2.6693]
Performance has improved.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high severe
All results
1-conn/1-100mb-resp/mtu-1504 (aka. Download)/client: Change within noise threshold. time: [199.94 ms 200.33 ms 200.78 ms]
thrpt: [498.06 MiB/s 499.17 MiB/s 500.16 MiB/s]
change:
time: [-1.2584% -1.0046% -0.7320] (p = 0.00 < 0.05)
thrpt: [+0.7374% +1.0148% +1.2744]
Change within noise threshold.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high severe
1-conn/10_000-parallel-1b-resp/mtu-1504 (aka. RPS)/client: No change in performance detected. time: [279.65 ms 281.82 ms 284.06 ms]
thrpt: [35.204 Kelem/s 35.483 Kelem/s 35.760 Kelem/s]
change:
time: [-0.8460% +0.1706% +1.2046] (p = 0.75 > 0.05)
thrpt: [-1.1903% -0.1703% +0.8533]
No change in performance detected.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client: No change in performance detected. time: [38.465 ms 38.627 ms 38.804 ms]
thrpt: [25.770 B/s 25.889 B/s 25.998 B/s]
change:
time: [-0.7363% -0.1382% +0.4706] (p = 0.65 > 0.05)
thrpt: [-0.4684% +0.1384% +0.7417]
No change in performance detected.
Found 10 outliers among 100 measurements (10.00%)
3 (3.00%) high mild
7 (7.00%) high severe
1-conn/1-100mb-req/mtu-1504 (aka. Upload)/client: 💚 Performance has improved by -2.2668%. time: [202.80 ms 203.18 ms 203.64 ms]
thrpt: [491.06 MiB/s 492.18 MiB/s 493.11 MiB/s]
change:
time: [-2.5999% -2.2668% -1.9562] (p = 0.00 < 0.05)
thrpt: [+1.9953% +2.3194% +2.6693]
Performance has improved.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high severe
decode 4096 bytes, mask ff: No change in performance detected. time: [4.5129 µs 4.5201 µs 4.5273 µs]
change: [-0.4425% -0.1491% +0.1729] (p = 0.33 > 0.05)
No change in performance detected.
Found 5 outliers among 100 measurements (5.00%)
3 (3.00%) high mild
2 (2.00%) high severe
decode 1048576 bytes, mask ff: No change in performance detected. time: [1.1611 ms 1.1632 ms 1.1656 ms]
change: [-0.7592% +0.1965% +1.1590] (p = 0.69 > 0.05)
No change in performance detected.
Found 11 outliers among 100 measurements (11.00%)
11 (11.00%) high severe
decode 4096 bytes, mask 7f: No change in performance detected. time: [5.7984 µs 5.8064 µs 5.8143 µs]
change: [-0.2455% +0.1464% +0.5713] (p = 0.53 > 0.05)
No change in performance detected.
Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) high mild
1 (1.00%) high severe
decode 1048576 bytes, mask 7f: Change within noise threshold. time: [1.4731 ms 1.4752 ms 1.4773 ms]
change: [-0.8464% -0.6462% -0.4381] (p = 0.00 < 0.05)
Change within noise threshold.
decode 4096 bytes, mask 3f: No change in performance detected. time: [5.5395 µs 5.5481 µs 5.5569 µs]
change: [-0.5777% -0.1358% +0.2166] (p = 0.56 > 0.05)
No change in performance detected.
Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) high mild
1 (1.00%) high severe
decode 1048576 bytes, mask 3f: No change in performance detected. time: [1.4158 ms 1.4216 ms 1.4313 ms]
change: [-0.1166% +0.3327% +1.0179] (p = 0.34 > 0.05)
No change in performance detected.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high severe
1-streams/each-1000-bytes/wallclock-time: No change in performance detected. time: [581.87 µs 583.87 µs 586.18 µs]
change: [-1.0893% -0.5137% +0.0920] (p = 0.08 > 0.05)
No change in performance detected.
Found 7 outliers among 100 measurements (7.00%)
7 (7.00%) high severe
1-streams/each-1000-bytes/simulated-time: Change within noise threshold. time: [118.74 ms 118.95 ms 119.16 ms]
thrpt: [8.1953 KiB/s 8.2097 KiB/s 8.2241 KiB/s]
change:
time: [-0.5379% -0.2873% -0.0190] (p = 0.03 < 0.05)
thrpt: [+0.0190% +0.2881% +0.5408]
Change within noise threshold.
1000-streams/each-1-bytes/wallclock-time: No change in performance detected. time: [12.395 ms 12.434 ms 12.474 ms]
change: [-0.3719% +0.1170% +0.6106] (p = 0.65 > 0.05)
No change in performance detected.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
1000-streams/each-1-bytes/simulated-time: No change in performance detected. time: [2.3294 s 2.3331 s 2.3368 s]
thrpt: [427.94 B/s 428.61 B/s 429.29 B/s]
change:
time: [-0.1989% +0.0525% +0.2862] (p = 0.67 > 0.05)
thrpt: [-0.2854% -0.0524% +0.1993]
No change in performance detected.
1000-streams/each-1000-bytes/wallclock-time: Change within noise threshold. time: [50.181 ms 50.291 ms 50.402 ms]
change: [+0.9579% +1.3048% +1.6117] (p = 0.00 < 0.05)
Change within noise threshold.
1000-streams/each-1000-bytes/simulated-time: No change in performance detected. time: [16.101 s 16.323 s 16.546 s]
thrpt: [59.023 KiB/s 59.829 KiB/s 60.654 KiB/s]
change:
time: [-2.1973% -0.1439% +1.9979] (p = 0.90 > 0.05)
thrpt: [-1.9588% +0.1441% +2.2466]
No change in performance detected.
coalesce_acked_from_zero 1+1 entries: No change in performance detected. time: [89.329 ns 89.654 ns 89.975 ns]
change: [-0.3508% +0.0866% +0.5174] (p = 0.70 > 0.05)
No change in performance detected.
Found 11 outliers among 100 measurements (11.00%)
10 (10.00%) high mild
1 (1.00%) high severe
coalesce_acked_from_zero 3+1 entries: No change in performance detected. time: [105.88 ns 106.27 ns 106.74 ns]
change: [-0.3098% +0.2661% +0.8816] (p = 0.41 > 0.05)
No change in performance detected.
Found 12 outliers among 100 measurements (12.00%)
5 (5.00%) high mild
7 (7.00%) high severe
coalesce_acked_from_zero 10+1 entries: No change in performance detected. time: [105.27 ns 105.70 ns 106.22 ns]
change: [-0.5029% +0.0838% +0.6371] (p = 0.78 > 0.05)
No change in performance detected.
Found 12 outliers among 100 measurements (12.00%)
5 (5.00%) low mild
7 (7.00%) high severe
coalesce_acked_from_zero 1000+1 entries: No change in performance detected. time: [90.393 ns 92.839 ns 98.475 ns]
change: [-0.2439% +4.7438% +14.033] (p = 0.31 > 0.05)
No change in performance detected.
Found 11 outliers among 100 measurements (11.00%)
3 (3.00%) high mild
8 (8.00%) high severe
RxStreamOrderer::inbound_frame(): Change within noise threshold. time: [108.73 ms 108.90 ms 109.19 ms]
change: [-1.2799% -1.0968% -0.8395] (p = 0.00 < 0.05)
Change within noise threshold.
Found 6 outliers among 100 measurements (6.00%)
2 (2.00%) low mild
2 (2.00%) high mild
2 (2.00%) high severe
sent::Packets::take_ranges: No change in performance detected. time: [4.4157 µs 4.5020 µs 4.5740 µs]
change: [-8.6026% -4.1867% -0.0737] (p = 0.06 > 0.05)
No change in performance detected.
Found 2 outliers among 100 measurements (2.00%)
2 (2.00%) high mild
transfer/pacing-false/varying-seeds/wallclock-time/run: Change within noise threshold. time: [23.127 ms 23.142 ms 23.157 ms]
change: [+0.7716% +0.9488% +1.0888] (p = 0.00 < 0.05)
Change within noise threshold.
Found 3 outliers among 100 measurements (3.00%)
3 (3.00%) high mild
transfer/pacing-false/varying-seeds/simulated-time/run: No change in performance detected. time: [23.941 s 23.941 s 23.941 s]
thrpt: [171.09 KiB/s 171.09 KiB/s 171.09 KiB/s]
change:
time: [+0.0000% +0.0000% +0.0000] (p = NaN > 0.05)
thrpt: [+0.0000% +0.0000% +0.0000]
No change in performance detected.
transfer/pacing-true/varying-seeds/wallclock-time/run: Change within noise threshold. time: [23.693 ms 23.710 ms 23.728 ms]
change: [+1.4881% +1.7092% +1.8617] (p = 0.00 < 0.05)
Change within noise threshold.
transfer/pacing-true/varying-seeds/simulated-time/run: No change in performance detected. time: [23.676 s 23.676 s 23.676 s]
thrpt: [173.01 KiB/s 173.01 KiB/s 173.01 KiB/s]
change:
time: [+0.0000% +0.0000% +0.0000] (p = NaN > 0.05)
thrpt: [+0.0000% +0.0000% +0.0000]
No change in performance detected.
transfer/pacing-false/same-seed/wallclock-time/run: Change within noise threshold. time: [23.369 ms 23.385 ms 23.401 ms]
change: [+0.1718% +0.3273% +0.4633] (p = 0.00 < 0.05)
Change within noise threshold.
Found 3 outliers among 100 measurements (3.00%)
3 (3.00%) high mild
transfer/pacing-false/same-seed/simulated-time/run: No change in performance detected. time: [23.941 s 23.941 s 23.941 s]
thrpt: [171.09 KiB/s 171.09 KiB/s 171.09 KiB/s]
change:
time: [+0.0000% +0.0000% +0.0000] (p = NaN > 0.05)
thrpt: [+0.0000% +0.0000% +0.0000]
No change in performance detected.
transfer/pacing-true/same-seed/wallclock-time/run: No change in performance detected. time: [23.788 ms 23.808 ms 23.831 ms]
change: [-0.2069% +0.0137% +0.1779] (p = 0.90 > 0.05)
No change in performance detected.
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) low mild
1 (1.00%) high severe
transfer/pacing-true/same-seed/simulated-time/run: No change in performance detected. time: [23.676 s 23.676 s 23.676 s]
thrpt: [173.01 KiB/s 173.01 KiB/s 173.01 KiB/s]
change:
time: [+0.0000% +0.0000% +0.0000] (p = NaN > 0.05)
thrpt: [+0.0000% +0.0000% +0.0000]
No change in performance detected.
@mxinden @martinthomson anything left to discuss here? Can this be merged?
I'm of the view that this isn't going to help anything. We don't have constraints on our own ability to handle different MTUs, and that is all this signaling can communicate. I'm interested in where @mxinden stands.
This is a (hacky) solution to a problem we are facing with websites behind Fastly's CDN through the Fastly proxy, namely the CDN starting with a too-high MTU. Agreed with @Propheticus and @martinthomson that this is a hack. Once we resume the MASQUE project, I will give this more thought. For now, marking as draft. Thanks for the input, everyone.