
Add HAProxy PROXY Protocol v2 support #399

Merged
SChernykh merged 3 commits into SChernykh:master from KingJulien1st:feature/haproxy-protocol-v2
Feb 22, 2026

Conversation

@KingJulien1st
Contributor

I run p2pool behind an HAProxy cluster for load balancing and failover. With plain TCP proxying, two problems appeared: all incoming connections looked like they came from a single IP address, which caused the P2P layer to block legitimate peers, and the TCP handshake behavior was broken in a way that disrupted peer connectivity. Without digging too deep into the root cause, I went straight to the source and added PROXY Protocol v2 support.

What this PR does
It adds a new --proxy-protocol flag. When enabled, p2pool reads the PROXY Protocol v2 header sent by HAProxy (send-proxy-v2) before processing any connection. The real client IP — both IPv4 and IPv6 — is extracted and used for all further processing, including ban checks and logging. Connections using the LOCAL command (e.g. HAProxy health checks) are handled correctly by keeping the original address.

Current testing status:
I am running this in production on v4.13 and so far shares are submitting without issues, P2P peers are connecting correctly, and miners display their real IP addresses. That said, I have not yet tested at log levels higher than 3 and no automated tests have been written. I want to be upfront about that.

Please let me know if you think this is useful enough to be included upstream. If there is interest, I am happy to rebase onto master, add more testing, or adjust the implementation based on your feedback.

@SChernykh
Owner

P2Pool and XMRig do support failover (both P2Pool -> several Monero nodes and XMRig -> several P2Pool nodes), but it's not as flexible (once connected, P2Pool and XMRig stick to nodes they connected to until the connection fails). So yes, it can be useful for some large-scale setups.

Yes, please rebase, and it's probably better to find another more descriptive name for the new parameter, not just "proxy protocol".

@SChernykh
Owner

Also, it should be possible to configure Stratum and P2P proxy settings separately. P2P connections usually don't need load balancing, it's very low traffic.

@KingJulien1st force-pushed the feature/haproxy-protocol-v2 branch from 03b7c2f to 6d2c7f8 on February 20, 2026 at 23:08
@KingJulien1st
Contributor Author

Thanks for the feedback! I've rebased onto current master and split the flag into separate --stratum-proxy-protocol and --p2p-proxy-protocol options as you suggested — makes much more sense to configure them independently.

Tested in production with HAProxy, both IPv4 and IPv6 are working correctly. Let me know if there's anything else you'd like adjusted!

@KingJulien1st
Contributor Author

Regarding the parameter name: I thought about this quite a bit, but couldn't come up with something that's both descriptive and practical. Looking at other applications, flags like this are usually either kept short with good documentation, or the protocol is detected automatically with an --allow-proxy-ip=<ADDRESS> option in the configuration.

I'd personally prefer to avoid auto-detection here — even with an IP whitelist approach, it adds complexity: you'd need to manage and maintain the trusted proxy IPs, and in dynamic environments (e.g. cloud load balancers with changing IPs) that becomes a maintenance burden. An explicit opt-in flag is simpler, harder to misconfigure, and makes it immediately obvious in the process arguments that PROXY Protocol is active.

Anything more descriptive than --stratum-proxy-protocol / --p2p-proxy-protocol quickly becomes unwieldy — something like --incoming-tcp-via-proxy-protocol is technically accurate but too verbose to be user-friendly. The current names also deliberately stay vendor-neutral: PROXY Protocol is an open standard supported by HAProxy, nginx, Traefik, AWS ELB and others, so tying the flag name to HAProxy specifically didn't feel right.

I'm open to suggestions if you have something in mind, but I think the current names strike a reasonable balance between clarity and usability.

Review comment threads: src/tcp_server.cpp (4, one outdated), src/main.cpp (1)
@SChernykh
Owner

> I'm open to suggestions if you have something in mind, but I think the current names strike a reasonable balance between clarity and usability.

The current names are fine, I can't think of anything better right now.

   - Validate addr_len <= 36 to prevent buffer overflows from user-controlled input
   - Check exact address data lengths (12 for IPv4, 36 for IPv6) instead of minimums
   - Add HeaderReceived state instead of resetting to None after parsing
   - Document --stratum-proxy-protocol and --p2p-proxy-protocol in COMMAND_LINE.MD
@SChernykh
Owner

SChernykh commented Feb 22, 2026

I'm not very familiar with the proxy protocol fine details, so I gave AI a go at reviewing it, and here's what it found. All issues seem to be legit ones.

Bug 1 — Spec violation: exact addr_len checks reject TLV-extended headers (High, introduced by Patch 2)

PROXY Protocol v2 spec §2.2 explicitly states that senders may append TLV (Type-Length-Value) extension vectors after the address block, and their size is counted in addr_len. Real-world HAProxy configurations that trigger this include:
- send-proxy-v2-ssl — adds a TLS TLV block (type 0x20) with sub-TLVs for version, CN, etc.
- send-proxy-v2-ssl-cn — same but with the CN populated
- unique-id-format + send-proxy-v2 — adds a unique-ID TLV (type 0x05)
- send-proxy-v2-options crc32c — adds a CRC32c TLV (type 0x03)
Any of these would produce e.g. addr_len = 28 for IPv4+SSL (12 + 16 TLV bytes), causing the connection to be dropped with a warning log. The author claims to run this in production with HAProxy but apparently without TLS offloading at the HAProxy level — which is a narrow test case. The Patch 1 minimum checks (addr_len < 12, addr_len < 36) were spec-correct; SChernykh's suggestion to make them exact was wrong, and the author should have pushed back.
The addr_len > 36 upper-bound check also suffers from the same issue — it blocks all TLV usage on IPv6 headers entirely. The correct guard is not against a fixed ceiling but against the actual read buffer size:

const uint32_t total_header_len = 16 + static_cast<uint32_t>(addr_len);
if (total_header_len > m_readBufSize) {
...
}

Bug 2 — Missing m_finished guard before deferred on_connect() call (Medium)

In the non-proxy path of on_new_client, the code guards against server shutdown:

TCPServer* owner = client->m_owner;
if (owner->m_finished.load()) {
    client->close();
    return;
}
if (!client->on_connect()) { ... }

In the proxy path, this check is skipped entirely: on_connect() is only called later, deep inside on_proxy_protocol, without ever checking m_finished.

Bug 3 — Transport protocol nibble of p[13] never validated (Low)

p[13] encodes address family (high nibble) and transport protocol (low nibble): 0x0 = UNSPEC, 0x1 = STREAM/TCP, 0x2 = DGRAM/UDP. For p2pool, only STREAM connections make sense. A PROXY header claiming AF_INET/DGRAM (byte value 0x12) would have family 1 extracted, pass all checks, and silently treat a UDP-origin address as a valid TCP miner connection. The transport nibble should be validated: for command == 1, reject if (family_proto & 0x0F) != 1.

Bug 4 — HeaderReceived state is set but never guarded in the UV read callback (Minor/fragile)

After the header is parsed, m_proxyProtocolState is set to HeaderReceived. Subsequent incoming data is routed as follows in the UV callback:

if (m_proxyProtocolState == ProxyProtocolState::ExpectingHeader) {
    on_proxy_protocol(...);  // false → no
}
else if (m_socks5ProxyState == Socks5ProxyState::Default) {
    on_read(...);  // true → falls through here
}

This works correctly only because incoming connections never enter the SOCKS5 state machine (SOCKS5 is only for outgoing connections). HeaderReceived is thus handled by falling through to the SOCKS5-default branch rather than by being explicitly checked.

Bug 5 — No trusted-proxy IP whitelist (Security / Design)

Any peer that can connect to the port can send an arbitrary PROXY header and claim any source IP, defeating the ban system. The author explicitly chose this design (no whitelisting), which is a reasonable trade-off for dynamic cloud environments. But it is worth documenting clearly, because the security model of ban checks is now entirely dependent on network-level access control (firewall rules restricting the Stratum/P2P ports to HAProxy IPs only). If the ports are publicly reachable, the ban system is trivially bypassable.


Bugs 1-4 should be fixed, bug 5 is not a bug, but it needs to be documented.

Owner

@SChernykh SChernykh left a comment


See my other comment

@KingJulien1st
Contributor Author

Thanks for the thorough review! Here's my take on each point:

Bug 1 — Spec violation: exact addr_len checks reject TLV-extended headers (High, introduced by Patch 2)

Yes, that's exactly why I originally used addr_len < 12 instead of != 12: since the memmove at the end skips the total header either way, it doesn't really matter what's after the address block. I changed it to exact checks following your suggestion, because p2pool has its own TCP protocol and I wouldn't expect anyone to use send-proxy-v2-ssl. And if they do, I'd expect them to realize that P2P connections (client side) don't support TLS/SSL anyway (you'd only use -ssl/-ssl-cn for SSL offloading or mTLS via the proxy). But to be spec-correct, we should revert to < 12 / < 36 and replace the > 36 upper bound with a buffer size guard.

Bug 2 — Missing m_finished guard before deferred on_connect() call (Medium)

Legitimate, but highly unlikely to actually occur: as you mentioned, P2P "is very low traffic", and the time window between header receive and on_connect() is on the order of microseconds. But since it's just ~two lines of code...

Bug 3 — Transport protocol nibble of p[13] never validated (Low)

P2Pool uses its own TCP protocol, and we don't validate whether it's actually a Stratum/P2P connection or something else; we're only extracting the source IP. Whether the proxy header says UDP or TCP is irrelevant because the actual connection has to be TCP (otherwise it wouldn't have ended up here). Manipulating this field also wouldn't give attackers any advantage. I'd classify this as over-engineering.

Bug 4 — HeaderReceived state is set but never guarded in the UV read callback (Minor/fragile)

HeaderReceived and None behave identically in the read dispatch, both fall into the m_socks5ProxyState == Default branch, which is always true for incoming connections. This is not a coincidence, it's by design: incoming connections never use SOCKS5. An explicit check for HeaderReceived would duplicate identical code. The state exists only to document that this connection came via proxy protocol, not to control the read dispatch. Happy to add an explicit check with a comment explaining the invariant, if that improves readability.

Bug 5 — No trusted-proxy IP whitelist (Security / Design)

As mentioned before, I personally prefer to avoid auto-detection. It's not a bug, and in my opinion no fix is necessary. The feature must be explicitly enabled; anyone who activates --stratum-proxy-protocol is aware that only their proxy should be in front. Whether the upstream proxy allows incoming proxy protocol or not is an infrastructure configuration, not a task for the application. We would only need a whitelist if we auto-detected the proxy protocol, which we don't.
That said, we could add an optional --stratum-proxy-protocol-whitelist parameter to make it foolproof: if someone accidentally exposes the direct port publicly without proper firewall rules while also using the proxy protocol, the ban system could be silently bypassed, and this would put a second line of defense in place at the application level.

@SChernykh
Owner

> But to be spec-correct, we should revert to < 12 / < 36 and replace the > 36 upper bound with a buffer size guard.

Yes

> Legitimate, but highly unlikely to actually occur, as you mentioned, P2P "is very low traffic"

Those checks appeared there because it did occur in the CI runs and made them fail sometimes, so this check is important.

Bug 3 - agree, no change needed.
Bug 4 - yes, it works. Not intuitive, but it works, so no change needed right now.
Bug 5 - no need for the whitelist argument, documentation is enough.

@SChernykh SChernykh merged commit 23d3a24 into SChernykh:master Feb 22, 2026
39 of 40 checks passed