drpc: add stream multiplexing for concurrent RPCs over a single connection #32
Closed
shubhamdhama wants to merge 2 commits into cockroachdb:main from
Conversation
Author: This comment tracks the checklist of what needs to be done before this PR is merged. This list is still being updated.
With stream multiplexing, multiple streams write concurrently to a shared buffer. The stream's rawWriteLocked used to split large messages into multiple frames (via SplitData) and call WriteFrame for each chunk. Each WriteFrame acquires the shared mutex independently, so frames from different streams can interleave in the buffer. The reader on the other side doesn't handle interleaved frames from different streams mid-packet: when it sees a frame from a different stream, it resets the partial packet, silently corrupting data for messages larger than SplitSize.

The fix changes the StreamWriter interface from WriteFrame(Frame) to WritePacket(Packet). The stream hands off the full message data in a single call, and the writer serializes it atomically under one mutex hold. rawWriteLocked no longer splits messages into frames, so there is nothing to interleave.

Splitting may have been useful before multiplexing, but the manageWriter goroutine already batches all pending data from the shared buffer into a single transport write, so splitting at the stream level adds no value. If we ever need to limit per-write size, that belongs in the writer implementation, not in the stream's rawWrite path.
d37921f to 12d9d05
Author: I reworked this PR as #34. It's deliberately created as a separate PR and I'll close this one eventually.
Author: Closing in favor of #34.
The DRPC manager previously allowed only one active stream at a time,
enforced by a semaphore. Clients had to wait for one RPC to finish
before starting another. This commit replaces that with multiplexing,
where multiple streams share a single transport concurrently.
To remove the single-stream restriction, the semaphore and its
associated waitForPreviousStream logic are removed. In its place, a
streamRegistry tracks all active streams by ID so the reader can route
incoming packets to the right stream.
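A registry of this shape can be sketched as a mutex-guarded map from stream ID to a per-stream inbox. Only the `streamRegistry` name comes from the commit; the channel-based inbox and the method names are illustrative assumptions.

```go
package main

import (
	"fmt"
	"sync"
)

// streamRegistry maps stream IDs to per-stream inboxes so the single
// reader goroutine can route each incoming packet to the right stream.
type streamRegistry struct {
	mu      sync.Mutex
	streams map[uint64]chan []byte
}

func newStreamRegistry() *streamRegistry {
	return &streamRegistry{streams: make(map[uint64]chan []byte)}
}

func (r *streamRegistry) register(id uint64, inbox chan []byte) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.streams[id] = inbox
}

func (r *streamRegistry) unregister(id uint64) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.streams, id)
}

// route delivers a packet to the stream it belongs to, reporting whether
// that stream is still active.
func (r *streamRegistry) route(id uint64, pkt []byte) bool {
	r.mu.Lock()
	inbox, ok := r.streams[id]
	r.mu.Unlock()
	if ok {
		inbox <- pkt
	}
	return ok
}

func main() {
	reg := newStreamRegistry()
	inbox := make(chan []byte, 1)
	reg.register(7, inbox)
	reg.route(7, []byte("hello"))
	fmt.Println(string(<-inbox)) // hello
	reg.unregister(7)
	fmt.Println(reg.route(7, nil)) // false: stream 7 is gone
}
```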
With multiple streams writing concurrently, the old per-stream
drpcwire.Writer is no longer safe. A sharedWriteBuf collects serialized
frame bytes from all streams under a short-held mutex, and a dedicated
manageWriter goroutine drains the buffer and writes to the transport.
This also naturally batches frames that accumulate during writes.
The packetBuffer previously used a single-slot design where the producer
blocked until the consumer finished. With multiplexing, the reader must
deliver packets to any stream without waiting, so packetBuffer is
reworked into an unbounded queue with sync.Pool-based buffer recycling.
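An unbounded queue with pooled buffers can be sketched as follows. Only the `packetBuffer` name and the use of `sync.Pool` come from the commit; the slice-backed FIFO, condition variable, and `recycle` helper are illustrative assumptions. The key property is that `put` returns immediately, so the reader never stalls behind a slow stream consumer.

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool recycles packet buffers once a consumer is done with them.
var bufPool = sync.Pool{
	New: func() any { return make([]byte, 0, 4096) },
}

// packetBuffer is an unbounded FIFO: the producer (the reader goroutine)
// never blocks waiting for the consumer (the stream).
type packetBuffer struct {
	mu    sync.Mutex
	cond  *sync.Cond
	queue [][]byte
}

func newPacketBuffer() *packetBuffer {
	pb := &packetBuffer{}
	pb.cond = sync.NewCond(&pb.mu)
	return pb
}

// put copies the packet into a pooled buffer, appends it, and returns
// immediately so the reader can move on to the next stream's frames.
func (pb *packetBuffer) put(data []byte) {
	buf := bufPool.Get().([]byte)[:0]
	buf = append(buf, data...)
	pb.mu.Lock()
	pb.queue = append(pb.queue, buf)
	pb.mu.Unlock()
	pb.cond.Signal()
}

// get blocks until a packet is available; the consumer calls recycle
// once it has finished with the returned buffer.
func (pb *packetBuffer) get() []byte {
	pb.mu.Lock()
	defer pb.mu.Unlock()
	for len(pb.queue) == 0 {
		pb.cond.Wait()
	}
	pkt := pb.queue[0]
	pb.queue = pb.queue[1:]
	return pkt
}

func recycle(buf []byte) { bufPool.Put(buf[:0]) }

func main() {
	pb := newPacketBuffer()
	pb.put([]byte("p1")) // producer never blocks,
	pb.put([]byte("p2")) // even with no consumer running yet
	a, b := pb.get(), pb.get()
	fmt.Println(string(a), string(b)) // p1 p2
	recycle(a)
	recycle(b)
}
```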
The wire reader's monotonicity check previously rejected any frame with
a lower stream ID than the last seen. This is relaxed to per-stream
scope so interleaved packets from different streams are accepted.
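The relaxed check amounts to remembering the last ID seen per stream rather than one value for the whole connection. The sketch below is illustrative (the type and method names are invented, and it checks message IDs within a stream); it shows why interleaved frames from different streams now pass while a frame going backwards within its own stream is still rejected.

```go
package main

import "fmt"

// monotonicityChecker tracks the last message ID seen for each stream.
type monotonicityChecker struct {
	lastMessageID map[uint64]uint64 // stream ID -> last message ID seen
}

func newMonotonicityChecker() *monotonicityChecker {
	return &monotonicityChecker{lastMessageID: make(map[uint64]uint64)}
}

// accept rejects a frame only if it goes backwards within its own stream,
// so frames from different streams may arrive in any interleaving.
func (c *monotonicityChecker) accept(streamID, messageID uint64) bool {
	if last, ok := c.lastMessageID[streamID]; ok && messageID < last {
		return false
	}
	c.lastMessageID[streamID] = messageID
	return true
}

func main() {
	c := newMonotonicityChecker()
	// Interleaved frames from streams 1 and 2: all accepted, even though
	// stream IDs do not arrive in monotonically increasing order.
	fmt.Println(c.accept(1, 1), c.accept(2, 1), c.accept(1, 2), c.accept(2, 2))
	// A frame going backwards within stream 1 is still rejected.
	fmt.Println(c.accept(1, 1))
}
```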
On the server side, ServeOne now dispatches RPCs concurrently via
goroutines instead of handling them sequentially. Invoke metadata is
tracked per stream ID so interleaved metadata/invoke sequences from
different streams are correctly associated.
Also adds a `--generate-adapters` flag to protoc-gen-go-drpc for
optionally skipping gRPC/DRPC adapter codegen, and integration
benchmarks comparing multiplexed vs one-connection-per-stream
performance.