
Conversation

@jolavillette
Contributor

Improving maximum file transfer speed

This WIP PR focuses on improving the maximum file transfer speed in RS.
If possible, please test the PR and provide comments.
To test, you will need to run the PR on both sides: sender and receiver.

@jolavillette
Contributor Author

This commit just disables packet slicing in pqistreamer.cc; that should save some CPU and bandwidth.

@jolavillette
Contributor Author

Reduce DEFAULT_STREAMER_SLEEP to 10 ms in pqithreadstreamer.
DEFAULT_STREAMER_SLEEP is the time the pqi threads spend resting and doing nothing after a round of receiving data, distributing it to the relevant service queues, and sending data.

Increase MAX_FT_CHUNK to 64 KB in ftserver.cc.
MAX_FT_CHUNK is the size of the piece of a file that RS requests from disk; increasing it could improve performance by reducing disk access.

Increase MAX_FTCHUNKS_PER_PEER to 40 in ftfilecreator.
MAX_FTCHUNKS_PER_PEER is the maximum number of 1 MB chunks that RS can send simultaneously to a peer; increasing it from 20 to 40 should remove the 20 MB/s barrier. A sketch of the changed constants follows.
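The sketch below simply restates the three tuned values as they might appear in the code. The constant names come from this comment, but the exact types, units, and file locations are assumptions, not the actual RetroShare declarations.

```cpp
// Illustrative only: names taken from the comment above, everything else is assumed.
#include <cstdint>

// pqi/pqithreadstreamer.cc (assumed location)
static const uint32_t DEFAULT_STREAMER_SLEEP = 10;       // 10 ms idle time after each receive/distribute/send round

// ft/ftserver.cc (assumed location)
static const uint32_t MAX_FT_CHUNK = 64 * 1024;          // 64 KB read from disk per request; larger reads mean fewer disk accesses

// ft/ftfilecreator.cc (assumed location)
static const uint32_t MAX_FTCHUNKS_PER_PEER = 40;        // 1 MB chunks in flight per peer; 20 capped transfers near 20 MB/s
```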

@csoler
Contributor

csoler commented Dec 27, 2025

I don't get why you want to disable packet slicing: it was originally implemented to avoid congestion caused by large packets sent by one friend (e.g. channel sync or images through chat), while other friends would have to wait a long time to get their own packets sent/received. This caused large RTT discrepancies, bad estimates of file transfer rates, etc. It does cost a few extra bytes per packet, but it's definitely worth it.

@jolavillette jolavillette force-pushed the ImproveFileTransferSpeed branch from 7abd185 to 0d1a374 Compare December 27, 2025 20:25
@jolavillette
Contributor Author

I am only testing :)
Indeed, packet slicing was introduced for bandwidth considerations. Bigger packets, up to 256 kB (pqistreamer rejects items larger than that), would result in unacceptable ping values on a slow internet connection (such as 1 Mb/s ADSL upstream). I will revert to the optimal packet size of 512 bytes as per pqistreamer. Later, when the other parameters are optimized, I will test with 1 kB or more to see whether it has any effect on the maximum transfer rate.

@jolavillette
Contributor Author

Increase MAX_FT_CHUNK to 128 kB in ftserver.
This decreases disk access a little more while remaining compatible with pqistreamer's maximum packet size of 256 kB.

Increase PQISTREAM_OPTIMAL_PACKET_SIZE to 1400 bytes in pqistreamer.
This reduces CPU usage and overhead without blocking the connection and without exceeding the standard MTU of 1500 bytes.

@csoler csoler changed the title Improving maximum file transfer speed [WIP] Improving maximum file transfer speed Dec 29, 2025
@jolavillette
Contributor Author

Windows only: in pqissllistener, remove the manual TCP buffer overrides to enable OS auto-tuning.
Only on Windows were the TCP buffers forced to 512 kB in pqissllistener. Apparently this was a good idea in the XP era, but it is counterproductive on recent Windows.
We remove this; after this change Windows will automatically adapt the TCP buffer size according to TCP activity.
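For reference, this is roughly the kind of override being dropped. It is only a sketch of the setsockopt() pattern, not the actual pqissllistener code; the WINDOWS_SYS guard and the helper name are assumptions.

```cpp
// Sketch of the kind of manual buffer override removed here (not the real RS code).
#ifdef WINDOWS_SYS
#include <winsock2.h>
#else
#include <sys/socket.h>
#endif

static void force_tcp_buffers(int sockfd) // hypothetical helper for illustration
{
#ifdef WINDOWS_SYS
    // Hard-coded 512 KB send/receive buffers, which defeats Windows TCP auto-tuning.
    int bufsize = 512 * 1024;
    setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF,
               reinterpret_cast<const char *>(&bufsize), sizeof(bufsize));
    setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF,
               reinterpret_cast<const char *>(&bufsize), sizeof(bufsize));
#else
    (void) sockfd; // no override on other platforms in this sketch
#endif
}
// Removing calls like this lets Windows grow the buffers itself based on traffic.
```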

@jolavillette
Contributor Author

Despite extensive testing and tracing, I am still unable to understand why Windows cannot match the transfer rates achieved in the reverse direction.

For the moment I recommend that we use:

  • in pqistreamer: slicing ON with an optimal packet size of 1400 bytes, just under the MTU of 1500 bytes
  • in pqithreadstreamer: 10 ms timeout and 10 ms sleep, for faster polling (this should also improve RTT)
  • in ftserver: a 240 kB chunk size, to reduce disk activity
  • in ftfilecreator: 40 simultaneous active chunks, required to break the 20 MB/s barrier
  • in pqissllistener (Windows only): enable auto-tuning of the TCP buffers

@jolavillette
Contributor Author

Achieved 20-25 MB/s for both upload and download (Linux/Win10) on fiber using adaptive timeout and sleep in pqithreadstreamer. Reception timeout now scales between 0 and 10 ms; cycle sleep scales between 1 and 30 ms. This ensures high throughput during activity while significantly saving CPU when idle.
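A minimal sketch of how such adaptive scaling could look, assuming the bounds quoted above (0–10 ms receive timeout, 1–30 ms cycle sleep). The scaling rule, its direction, and all names are my assumptions, not the PR's actual implementation.

```cpp
// Minimal sketch of adaptive timeout/sleep, NOT the actual PR code.
#include <algorithm>
#include <cstdint>

struct AdaptiveTiming
{
    uint32_t recv_timeout_ms = 10;  // 0..10 ms: how long to block waiting for incoming data
    uint32_t cycle_sleep_ms  = 30;  // 1..30 ms: rest between streamer rounds

    // Call once per round with the number of bytes moved during that round.
    void update(uint32_t bytes_this_round)
    {
        if (bytes_this_round > 0)
        {
            // Active transfer: poll as fast as possible for maximum throughput.
            recv_timeout_ms = 0;
            cycle_sleep_ms  = 1;
        }
        else
        {
            // Idle: back off gradually to save CPU.
            recv_timeout_ms = std::min<uint32_t>(recv_timeout_ms + 1, 10);
            cycle_sleep_ms  = std::min<uint32_t>(cycle_sleep_ms + 2, 30);
        }
    }
};
```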

@jolavillette
Contributor Author

jolavillette commented Jan 4, 2026

Trying to improve the transfer rate, I learned a few things:

  • Linux auto-adjusts the TCP buffers according to the traffic
  • Windows is supposed to do this as well, which is why I initially removed the call to setsockopt in pqissllistener.cc that was increasing the buffer size from the default value of 64 KB to 512 KB
  • actually I did not see the subsequent identical call to setsockopt in pqissl.cc that was doing the exact same thing
  • I then removed both calls, expecting that Windows auto-tuning would grow the buffer from its default value of 64 KB
  • but Windows auto-tuning does not seem to work, and I noticed a severe decrease in the maximum transfer rate because of the small buffer (see below for the explanation)
  • finally, in today's updated PR I reverted the changes, and the TCP buffers are 512 KB again
  • now the reason: there is a direct link between the latency (ping, RTT), the TCP buffer size, and the maximum transfer rate that can be achieved: max speed = buffer size / latency
  • so if the latency is 10 ms (typical for peers that are not geographically too far apart on a fiber connection) and the buffer size is 512 KB, the speed can reach up to 512 KB / 10 ms = 50 MB/s, which is good
  • but if your friend is in Australia with a latency of 100 ms, even on a fiber connection you can't expect more than 512 KB / 100 ms = 5 MB/s, which is not good; it would be a good idea to increase the buffer size someday, though not now (a quick numeric check of this rule follows below)
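A quick numeric check of the rule of thumb above (the classic bandwidth-delay product). Nothing here is RetroShare code; the figures are just the ones from this comment.

```cpp
// max throughput ~= TCP buffer size / round-trip time
#include <cstdio>

int main()
{
    const double buffer_bytes = 512.0 * 1024.0;   // 512 KB TCP buffer

    const double rtt_near_s = 0.010;              // ~10 ms RTT, nearby fiber peer
    const double rtt_far_s  = 0.100;              // ~100 ms RTT, e.g. a peer in Australia

    std::printf("near peer: %.0f MB/s max\n", buffer_bytes / rtt_near_s / (1024.0 * 1024.0)); // 50 MB/s
    std::printf("far peer:  %.0f MB/s max\n", buffer_bytes / rtt_far_s  / (1024.0 * 1024.0)); // 5 MB/s
    return 0;
}
```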

@jolavillette jolavillette changed the title [WIP] Improving maximum file transfer speed Improving maximum file transfer speed Jan 16, 2026
@jolavillette
Contributor Author

IMO this PR is ready to merge

@jolavillette jolavillette force-pushed the ImproveFileTransferSpeed branch from 1b63088 to 3c73878 Compare January 20, 2026 06:03