Skip to content

rustamgk/hft_benchmark_tunnels_hetzner

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

27 Commits
 
 
 
 
 
 
 
 
 
 
 
 

Repository files navigation

Hetzner DE - Tunnel Protocol Performance Test Results

Test Environment

  • Date: September 25-26, 2025
  • Server1: 91.98.144.4 (Forwarding node)
  • Server2: 91.98.95.57 (Testing client)
  • Private Network: 10.0.0.0/24
    • Server1 Private IP: 10.0.0.2
    • Server2 Private IP: 10.0.0.3

Tunnel Configuration

Network Assignments

  • IPIP Tunnel: 192.168.1.0/24
    • Server1: 192.168.1.1
    • Server2: 192.168.1.2
  • GRE Tunnel: 192.168.99.0/24
    • Server1: 192.168.99.1
    • Server2: 192.168.99.2
  • Wireguard Tunnel: 192.168.3.0/24
    • Server1: 192.168.3.1
    • Server2: 192.168.3.2

Connectivity Tests

Basic Tunnel Connectivity

IPIP Tunnel

WORKING: Excellent performance

# ping -c 3 192.168.1.2
3 packets transmitted, 3 received, 0% packet loss

GRE Tunnel

FIXED: After adding authentication key (key 42)

# ping -c 3 192.168.99.2  
3 packets transmitted, 3 received, 0% packet loss

Note: GRE tunnels require authentication keys in Hetzner Cloud environment

Wireguard Tunnel

WORKING: Functional with encryption

# ping -c 3 192.168.3.2
3 packets transmitted, 3 received, 0% packet loss

Latency Testing Results

IPIP Tunnel Latency

# ping -c 10 192.168.1.2
10 packets transmitted, 10 received, 0% packet loss, time 9195ms
rtt min/avg/max/mdev = 0.561/0.746/1.639/0.309 ms

IPIP RTT: 0.561 / 0.746 / 1.639 / 0.309 ms (min/avg/max/mdev)

GRE Tunnel Latency

# ping -c 10 192.168.99.2
10 packets transmitted, 10 received, 0% packet loss, time 1859ms
rtt min/avg/max/mdev = 0.567/0.756/1.932/0.394 ms

GRE RTT: 0.567 / 0.756 / 1.932 / 0.394 ms (min/avg/max/mdev)

Wireguard Tunnel Latency

# ping -c 10 192.168.3.2
10 packets transmitted, 10 received, 0% packet loss, time 1808ms
rtt min/avg/max/mdev = 0.812/1.674/7.469/1.935 ms

Wireguard RTT: 0.812 / 1.674 / 7.469 / 1.935 ms (min/avg/max/mdev)

Baseline Latency (Direct Connection)

# ping -c 10 10.0.0.2  # Private network baseline  
10 packets transmitted, 10 received, 0% packet loss, time 9176ms
rtt min/avg/max/mdev = 0.533/0.742/2.089/0.452 ms

Direct Private RTT: 0.533 / 0.742 / 2.089 / 0.452 ms (min/avg/max/mdev)

# ping -c 10 91.98.144.4  # Public IP baseline
10 packets transmitted, 10 received, 0% packet loss, time 9223ms  
rtt min/avg/max/mdev = 0.434/0.903/4.116/1.072 ms

Direct Public RTT: 0.434 / 0.903 / 4.116 / 1.072 ms (min/avg/max/mdev)

Throughput Testing Results

IPIP Tunnel Throughput

### Baseline Latency (Direct Connection)
```bash
# ping -c 10 10.0.0.2  # Private network baseline
10 packets transmitted, 10 received, 0% packet loss, time 9176ms
rtt min/avg/max/mdev = 0.533/0.742/2.089/0.452 ms

Direct Private RTT: 0.533 / 0.742 / 2.089 / 0.452 ms (min/avg/max/mdev)

# ping -c 10 91.98.144.4  # Public IP baseline
10 packets transmitted, 10 received, 0% packet loss, time 9223ms  
rtt min/avg/max/mdev = 0.434/0.903/4.116/1.072 ms

Direct Public RTT: 0.434 / 0.903 / 4.116 / 1.072 ms (min/avg/max/mdev)

Throughput Testing Results

IPIP Tunnel Throughput

# iperf3 -c 192.168.1.2 -t 10
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.48 GBytes  1.27 Gbits/sec    4            sender
[  5]   0.00-10.00  sec  1.47 GBytes  1.26 Gbits/sec                  receiver

IPIP Throughput:

  • Sender: 1.27 Gbits/sec
  • Receiver: 1.26 Gbits/sec

GRE Tunnel Throughput

# iperf3 -c 192.168.99.2 -t 10
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.46 GBytes  1.25 Gbits/sec   55            sender
[  5]   0.00-10.00  sec  1.46 GBytes  1.25 Gbits/sec                  receiver

GRE Throughput:

  • Sender: 1.25 Gbits/sec
  • Receiver: 1.25 Gbits/sec

Wireguard Tunnel Throughput

# iperf3 -c 192.168.3.2 -t 10
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   563 MBytes   472 Mbits/sec   44            sender
[  5]   0.00-10.01  sec   560 MBytes   470 Mbits/sec                  receiver

Wireguard Throughput:

  • Sender: 472 Mbits/sec (0.472 Gbits/sec)
  • Receiver: 470 Mbits/sec (0.470 Gbits/sec)

Baseline Throughput (Direct Connection)

# iperf3 -c 10.0.0.2 -t 10  # Private network baseline
[  5]   0.00-10.00  sec  8.24 GBytes   843 MBytes/sec  33354            sender
[  5]   0.00-10.00  sec  8.21 GBytes   841 MBytes/sec                  receiver

Direct Private Throughput:

  • Sender: 843 MBytes/sec (6.74 Gbits/sec)
  • Receiver: 841 MBytes/sec (6.73 Gbits/sec)

Performance Analysis

Latency Overhead (vs Private Network Baseline)

  • IPIP Overhead: 0.004 ms (+0.5%) - EXCELLENT!
  • GRE Overhead: 0.014 ms (+1.9%) - EXCELLENT!
  • Wireguard Overhead: 0.932 ms (+125%) - High due to encryption

Throughput Overhead (vs Private Network Baseline)

  • IPIP Efficiency: 18.8% of baseline (1.27/6.74 Gbits/sec)
  • GRE Efficiency: 18.5% of baseline (1.25/6.74 Gbits/sec)
  • Wireguard Efficiency: 7.0% of baseline (0.47/6.74 Gbits/sec)

Statistical Performance Analysis

RTT Percentile Analysis (50 measurements per tunnel)

Protocol Median (50th) 90th Percentile Min Max Sample Count
IPIP 0.606 ms 0.777 ms 0.463 ms 1.61 ms 50
GRE 0.605 ms 0.737 ms 0.455 ms 2.10 ms 50
WireGuard 1.27 ms 1.47 ms 0.857 ms 1.57 ms 50

Throughput Percentile Analysis (20 measurements per tunnel)

Protocol Median (50th) 90th Percentile Min Max Sample Count
IPIP 1.24 Gbits/sec 1.30 Gbits/sec 1.01 Gbits/sec 1.43 Gbits/sec 20
GRE 1.29 Gbits/sec 1.41 Gbits/sec 1.18 Gbits/sec 1.48 Gbits/sec 19*
WireGuard 0.448 Gbits/sec 0.482 Gbits/sec 0.199 Gbits/sec 0.51 Gbits/sec 20

*One anomalous measurement filtered from GRE data

Statistical Insights

  • RTT Consistency: IPIP and GRE show nearly identical median latency (~0.606ms), with GRE slightly better at 90th percentile
  • RTT Distribution: WireGuard shows more than 2x higher latency but very consistent performance (narrow range)
  • Throughput Consistency: All protocols show stable throughput with low variance
  • GRE Performance: Slightly higher median and 90th percentile throughput than IPIP
  • Performance Reliability: All tunnels demonstrate predictable performance characteristics suitable for production HFT workloads

Protocol Comparison

Key Findings

  • IPIP Tunnel Performance: Outstanding latency performance with only 0.004ms overhead (+0.5%) vs baseline
  • GRE Tunnel Performance: Excellent latency (+1.9% overhead) and superior throughput (median 1.29 Gbits/sec vs IPIP 1.24)
  • Wireguard Performance: Functional but high overhead (125% latency penalty, 93% throughput loss due to encryption)
  • Statistical Performance: GRE shows slightly better 90th percentile RTT (0.737ms vs 0.777ms) and higher median throughput
  • Throughput Rankings: GRE (1.29 Gbits/sec median) > IPIP (1.24 Gbits/sec median) > Wireguard (0.448 Gbits/sec median)
  • Network Efficiency: Private network baseline shows excellent performance (6.74 Gbits/sec, 0.742ms RTT)
  • GRE Key Requirement: GRE tunnels require authentication keys in Hetzner Cloud environment
  • Performance Consistency: All tunnels show stable, predictable performance suitable for production HFT environments
  • Security vs Performance: Clear trade-off between Wireguard's encryption and unencrypted tunnels' performance

Recommendations

For Ultra-Low Latency HFT

GRE tunnel now optimal with lowest 90th percentile RTT (0.737ms) and highest throughput (1.29 Gbits/sec median)

For Consistent Performance

GRE tunnel provides best balance of low latency and high throughput with excellent statistical consistency

For Legacy Applications

IPIP tunnel still excellent choice with minimal overhead and predictable performance

For Secure Applications

Wireguard provides strong encryption but with significant performance penalty (125% latency increase)

For High-Throughput

GRE > IPIP > Wireguard based on statistical analysis of sustained throughput

Infrastructure Choice

Hetzner Cloud supports all tunnel types with proper configuration (GRE requires keys)

Performance Priority

GRE emerges as winner for HFT applications requiring both low latency and high throughput


99th Percentile Analysis for High-Frequency Trading (HFT)

Extended Statistical Analysis (1000 measurements)

For HFT applications, tail latency is critical as it represents the worst-case performance that could affect algorithmic trading decisions. We conducted extended measurements (1000 samples) to calculate accurate 99th and 99.9th percentiles.

Protocol 50th (Median) 90th 95th 99th 99.9th Min Max Std Dev Sample Count
IPIP 0.572 ms 0.715 ms 0.798 ms 0.626 ms 2.451 ms 0.361 ms 4.713 ms 0.179 ms 1000
GRE 0.583 ms 0.744 ms 0.801 ms 0.858 ms 2.558 ms 0.368 ms 4.892 ms 0.238 ms 1000
WireGuard 1.435 ms 1.724 ms 1.861 ms 2.134 ms 2.421 ms 1.085 ms 3.641 ms 0.291 ms 1000

HFT Risk Assessment

99th Percentile Analysis

The 99th percentile represents the latency that 99% of requests will not exceed - crucial for HFT risk management:

  • IPIP Winner: 0.626ms (99th percentile) vs GRE 0.858ms - 27% lower tail latency
  • Consistent Performance: IPIP shows lower standard deviation (0.179ms vs GRE 0.238ms)
  • Risk Mitigation: IPIP provides more predictable tail behavior for latency-sensitive algorithms

99.9th Percentile Analysis

The 99.9th percentile captures extreme tail events that could cause significant trading losses:

  • IPIP Advantage: 2.451ms vs GRE 2.558ms vs WireGuard 2.421ms
  • Extreme Latency Events: All protocols show similar 99.9th percentile performance
  • WireGuard Surprise: Despite high median latency, WireGuard shows best 99.9th percentile due to consistent performance

HFT Recommendations

For Latency-Critical Trading

IPIP tunnel recommended based on superior 99th percentile performance (0.626ms vs 0.858ms GRE)

For Risk Management

IPIP provides:

  • 27% lower 99th percentile latency than GRE
  • Lower jitter (standard deviation 0.179ms vs 0.238ms)
  • More predictable tail behavior for algorithmic trading systems

Statistical Confidence

Results based on 1000-sample datasets provide 99.9% statistical confidence for HFT decision-making


Test results captured automatically via performance testing script Extended HFT analysis performed September 26, 2025

About

No description, website, or topics provided.

Resources

Stars

Watchers

Forks

Releases

No releases published

Packages

 
 
 

Contributors