drpc: add stream multiplexing for concurrent RPCs over a single connection#32

Closed
shubhamdhama wants to merge 2 commits into cockroachdb:main from
shubhamdhama:stream-multiplexing-core-implementation
Conversation

shubhamdhama commented Mar 1, 2026

⚠️ Read this #32 (comment)


The DRPC manager previously allowed only one active stream at a time,
enforced by a semaphore. Clients had to wait for one RPC to finish
before starting another. This commit replaces that with multiplexing,
where multiple streams share a single transport concurrently.

To remove the single-stream restriction, the semaphore and its
associated waitForPreviousStream logic are removed. In its place, a
streamRegistry tracks all active streams by ID so the reader can route
incoming packets to the right stream.

With multiple streams writing concurrently, the old per-stream
drpcwire.Writer is no longer safe. A sharedWriteBuf collects serialized
frame bytes from all streams under a short-held mutex, and a dedicated
manageWriter goroutine drains the buffer and writes to the transport.
This also naturally batches frames that accumulate during writes.

The packetBuffer previously used a single-slot design where the producer
blocked until the consumer finished. With multiplexing, the reader must
deliver packets to any stream without waiting, so packetBuffer is
reworked into an unbounded queue with sync.Pool-based buffer recycling.

The wire reader's monotonicity check previously rejected any frame with
a lower stream ID than the last seen. This is relaxed to per-stream
scope so interleaved packets from different streams are accepted.

On the server side, ServeOne now dispatches RPCs concurrently via
goroutines instead of handling them sequentially. Invoke metadata is
tracked per stream ID so interleaved metadata/invoke sequences from
different streams are correctly associated.

Also adds a `--generate-adapters` flag to protoc-gen-go-drpc for
optionally skipping gRPC/DRPC adapter codegen, and integration
benchmarks comparing multiplexed vs one-connection-per-stream
performance.


shubhamdhama commented Mar 1, 2026

This comment tracks the checklist of what needs to be done before this PR is merged:

  • Review ready checklist
  • Merge ready checklist
    • Remove the alert comment from the PR description
    • Ensure the PR description is up to date

This list is still being updated.

With stream multiplexing, multiple streams write concurrently to a shared
buffer. The stream's rawWriteLocked used to split large messages into
multiple frames (via SplitData) and call WriteFrame for each chunk. Each
WriteFrame acquires the shared mutex independently, so frames from
different streams can interleave in the buffer. The reader on the other
side doesn't handle interleaved frames from different streams mid-packet.
When it sees a frame from a different stream, it resets the partial
packet, silently corrupting data for messages larger than SplitSize.

The fix changes the StreamWriter interface from WriteFrame(Frame) to
WritePacket(Packet). The stream hands off the full message data in a
single call, and the writer serializes it atomically under one mutex
hold. rawWriteLocked no longer splits messages into frames, so there is
nothing to interleave.

Splitting may have been useful before multiplexing. The manageWriter
goroutine already batches all pending data from the shared buffer into a
single transport write, so splitting at the stream level adds no value.
If we ever need to limit per-write size, that belongs in the writer
implementation, not in the stream's rawWrite path.

shubhamdhama force-pushed the stream-multiplexing-core-implementation branch from d37921f to 12d9d05 on March 3, 2026 at 11:24
@shubhamdhama

I reworked this PR as #34. It was deliberately created as a separate PR, and I'll close this one eventually.

@shubhamdhama

Closing in favor of #34

