## Bug Description

`McpAgent` crashes with "Already connected to a transport" when the Durable Object wakes up from hibernation. This happens because `onStart()` calls `server.connect(transport)` but the MCP `Server` instance still holds a reference to the previous transport from before hibernation.
## Steps to Reproduce

- Deploy any worker using `McpAgent` (e.g., the cloudflare/playwright-mcp example; a minimal setup sketch follows this list)
- Connect an MCP client via SSE or streamable-http
- Wait for the Durable Object to hibernate (idle timeout)
- Send another request; the DO wakes up, `onStart()` runs, and crashes
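For reference, a minimal worker in the shape of the Cloudflare MCP examples is enough to hit this. The sketch below is illustrative only: the class name `MyMCP`, the `add` tool, the `MCP_OBJECT` binding, and the route paths are placeholders, not taken from playwright-mcp.

```ts
import { McpAgent } from "agents/mcp";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

// Bindings come from the wrangler config; the Durable Object binding name is illustrative.
interface Env {
  MCP_OBJECT: DurableObjectNamespace;
}

// Any McpAgent subclass will do; the crash depends only on the Durable Object
// hibernating between requests, not on which tools it exposes.
export class MyMCP extends McpAgent {
  server = new McpServer({ name: "demo", version: "1.0.0" });

  async init() {
    this.server.tool(
      "add",
      { a: z.number(), b: z.number() },
      async ({ a, b }) => ({ content: [{ type: "text", text: String(a + b) }] })
    );
  }
}

// Route both transports to the agent's Durable Object.
export default {
  fetch(request: Request, env: Env, ctx: ExecutionContext) {
    const { pathname } = new URL(request.url);
    if (pathname.startsWith("/sse")) {
      return MyMCP.serveSSE("/sse").fetch(request, env, ctx);
    }
    return MyMCP.serve("/mcp").fetch(request, env, ctx);
  },
};
```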
## Error

```
Error: Already connected to a transport
    at McpAgent.onStart (agents/dist/mcp/index.js)
```
## Root Cause

In `packages/agents/src/mcp/index.ts`, the `onStart()` method and the SSE/streamable-http handlers call `server.connect(transport)` without first disconnecting the previous transport. After hibernation, the `Server` instance is restored but still thinks it is connected.

Three locations need the fix (a simplified view of the affected flow follows this list):

- `onStart()` (~line 183)
- SSE handler in `handleMcpMessage()` (~line 229)
- Streamable-HTTP handler in `handleMcpMessage()` (~line 240)
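For context, the flow visible in the compiled `dist/mcp/index.js` (the same lines that appear as context in the patch under Workaround) boils down to the excerpt below. This is a simplified reconstruction for illustration, not a verbatim copy of the source.

```ts
// Inside onStart(), agents@0.0.109 (simplified): the connect() call is
// unconditional, so after a hibernation wake-up it hits a Server that still
// holds the previous transport and throws.
await this._init(this.props);
const server = await this.server;
if (this._transportType === "sse") {
  this._transport = new McpSSETransport(() => this.getWebSocket());
  await server.connect(this._transport); // "Already connected to a transport"
}
// ... the streamable-http branch connects the same way
```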
## Fix

Add `server.close()` before each `server.connect()` call:

```ts
const server = await this.server;
try { await server.close(); } catch (_e) { /* not connected yet on first run */ }
await server.connect(this._transport);
```

The `try/catch` is needed because on the very first connection, `server.close()` may throw since no transport was ever connected.
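An alternative to the blanket try/catch is to close only when a stale transport is actually attached. This is a sketch under the assumption that the transport is reachable from the server object: the SDK's low-level `Server` inherits a `transport` getter from `Protocol`, but whether `this.server` here is that class or the `McpServer` wrapper depends on the agent, so verify against the bundled SDK before relying on it.

```ts
const server = await this.server;
// Assumption: `transport` is exposed on this server object (true for the
// low-level Server via its Protocol base; for the McpServer wrapper the stale
// transport would live on its inner `server` property instead).
if (server.transport !== undefined) {
  await server.close();
}
await server.connect(this._transport);
```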
## Fix Branch

A complete fix is available at: `ebrainte/agents:fix/mcp-agent-hibernation-reconnect`
I was unable to create a PR because this repository restricts pull requests to collaborators only.
## Affected Version

`agents@0.0.109` (and current `main` as of March 2026)
## Workaround

Use `patch-package` to apply the fix locally. Patch file for `agents@0.0.109`:
```diff
--- a/node_modules/agents/dist/mcp/index.js
+++ b/node_modules/agents/dist/mcp/index.js
@@ -170,6 +170,7 @@ var McpAgent = class _McpAgent extends DurableObject {
);
await this._init(this.props);
const server = await this.server;
+ try { await server.close(); } catch (_e) { /* not connected yet */ }
if (this._transportType === "sse") {
this._transport = new McpSSETransport(() => this.getWebSocket());
await server.connect(this._transport);
@@ -228,6 +229,7 @@ var McpAgent = class _McpAgent extends DurableObject {
this._transportType = "sse";
if (!this._transport) {
this._transport = new McpSSETransport(() => this.getWebSocket());
+ try { await server.close(); } catch (_e) {}
await server.connect(this._transport);
}
return this._agent.fetch(request);
@@ -238,6 +240,7 @@ var McpAgent = class _McpAgent extends DurableObject {
(id) => this.getWebSocketForResponseID(id),
(id) => this._requestIdToConnectionId.delete(id)
);
+ try { await server.close(); } catch (_e) {}
await server.connect(this._transport);
}
await this.ctx.storage.put("transportType", "streamable-http");
```
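With the default patch-package conventions, the file above would be saved as `patches/agents+0.0.109.patch` and `patch-package` run from a `postinstall` script so the fix is reapplied on every install (file name and script placement follow patch-package's documented defaults; adjust to your setup).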