🧪 Testing
To try out this version of the SDK, run:
Expires at: Mon, 09 Mar 2026 15:34:14 GMT
    *,
    cast_to: Type[ResponseT],
    body: Body | None = None,
    content: BinaryTypes | None = None,
Iterator content silently lost on request retry
Medium Severity
When content is an Iterable[bytes] (like a generator), the iterator is consumed on the first request attempt. If a retryable error occurs, the retry logic uses model_copy which performs a shallow copy, so options.content still references the same exhausted iterator. Subsequent retry attempts send an empty request body. This is exacerbated by DEFAULT_MAX_RETRIES being increased from 0 to 2 in this same PR. Users passing generator-based content could experience silent data loss on transient failures.
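A minimal, standalone sketch of the failure mode, using a plain dict and `copy.copy` to stand in for the SDK's options object and pydantic's `model_copy` (names here are illustrative, not the SDK's API):

```python
import copy

def make_body():
    # A generator-based request body can be consumed only once
    yield b"chunk-1"
    yield b"chunk-2"

options = {"content": make_body()}

# The first request attempt drains the iterator
first_attempt = b"".join(options["content"])

# A shallow copy (like model_copy's default) still references the same
# exhausted generator, so a retry would send an empty request body
retry_options = copy.copy(options)
retry_attempt = b"".join(retry_options["content"])

print(first_attempt)  # b'chunk-1chunk-2'
print(retry_attempt)  # b''
```

A fix would need to either buffer iterable content before the first attempt or refuse to retry requests whose body cannot be replayed.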
Additional Locations (1)
PR Summary (Medium Risk)
Overview: Adds a new … Refactors …
Written by Cursor Bugbot for commit a3df0b4. This will update automatically on new commits.
    [],
    steps,
    verbose=exec_config.verbose,
)
Missing assistant message before tool execution in streaming paths
High Severity
Both streaming paths (_execute_streaming_async and _execute_streaming_sync) call the scheduler's execute_local_tools_async/execute_local_tools_sync without first appending an assistant message with tool_calls to the messages list. The scheduler's own docstring explicitly states the caller is responsible for this. The non-streaming paths (_execute_tool_calls at line 1204, _execute_tool_calls_sync at line 1229) correctly append {"role": "assistant", "tool_calls": ...} before calling the scheduler. The old streaming code also did this but the line was removed during the refactoring. This produces a malformed conversation (tool messages without a preceding assistant message), which will cause the API to reject subsequent requests.
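A standalone sketch of the message ordering the API expects (`local_only` and the tool-call shape below are placeholders modeled on the non-streaming paths, not the SDK's exact objects):

```python
# A hypothetical batch of locally executable tool calls
local_only = [
    {
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_time", "arguments": "{}"},
    }
]

messages = [{"role": "user", "content": "What time is it?"}]

# Required step the streaming paths skip: the assistant message carrying
# tool_calls must precede the tool result messages, otherwise the next
# API request is rejected as malformed
messages.append({"role": "assistant", "tool_calls": local_only})

# The scheduler then appends one tool message per call
for call in local_only:
    messages.append(
        {"role": "tool", "tool_call_id": call["id"], "content": "12:00"}
    )

roles = [m["role"] for m in messages]
print(roles)  # ['user', 'assistant', 'tool']
```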
Additional Locations (1)
# Collect MCP tool results emitted by the server
chunk_extra = getattr(chunk, "__pydantic_extra__", None) or {}
if isinstance(chunk_extra, dict) and "mcp_tool_results" in chunk_extra:
    mcp_tool_results_from_server = chunk_extra["mcp_tool_results"]
Collected MCP tool results variable is never used
Low Severity
mcp_tool_results_from_server is assigned in both _execute_streaming_async and _execute_streaming_sync but is never read after assignment. The variable is dead code — the collected MCP tool results are silently discarded. Given the PR includes a fix for "inject server tool results into conversation for mixed tool calls," this may represent an incomplete implementation for the streaming paths.
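For illustration, a simplified stand-in for a pydantic model with `extra="allow"`, where unknown response fields land in `__pydantic_extra__` (the `Chunk` class here is hypothetical; no pydantic dependency):

```python
class Chunk:
    # Mimics pydantic's behavior of stashing unknown fields
    def __init__(self, extra):
        self.__pydantic_extra__ = extra

chunk = Chunk({"mcp_tool_results": [{"tool": "search", "output": "ok"}]})

mcp_tool_results_from_server = []
chunk_extra = getattr(chunk, "__pydantic_extra__", None) or {}
if isinstance(chunk_extra, dict) and "mcp_tool_results" in chunk_extra:
    mcp_tool_results_from_server = chunk_extra["mcp_tool_results"]

# The bug: nothing reads mcp_tool_results_from_server after this point.
# A complete fix would append these results to the conversation (e.g. as
# tool messages) instead of discarding them.
print(mcp_tool_results_from_server)
```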
Additional Locations (1)
Bugbot Autofix prepared fixes for 2 of the 2 bugs found in the latest run. Preview (9e8ecee943):

diff --git a/src/dedalus_labs/lib/runner/core.py b/src/dedalus_labs/lib/runner/core.py
--- a/src/dedalus_labs/lib/runner/core.py
+++ b/src/dedalus_labs/lib/runner/core.py
@@ -686,7 +686,6 @@
content_chunks = 0
tool_call_chunks = 0
finish_reason = None
- mcp_tool_results_from_server: list = []
async for chunk in stream:
chunk_count += 1
if exec_config.verbose:
@@ -697,11 +696,6 @@
if isinstance(meta, dict) and meta.get("type") == "agent_updated":
print(f" [EVENT] agent_updated: agent={meta.get('agent')} model={meta.get('model')}")
- # Collect MCP tool results emitted by the server
- chunk_extra = getattr(chunk, "__pydantic_extra__", None) or {}
- if isinstance(chunk_extra, dict) and "mcp_tool_results" in chunk_extra:
- mcp_tool_results_from_server = chunk_extra["mcp_tool_results"]
-
if hasattr(chunk, "choices") and chunk.choices:
choice = chunk.choices[0]
delta = choice.delta
@@ -776,6 +770,9 @@
from ._scheduler import execute_local_tools_async
+ # Record assistant message with tool calls (OpenAI format requires this before tool messages)
+ messages.append({"role": "assistant", "tool_calls": local_only})
+
await execute_local_tools_async(
local_only,
tool_handler,
@@ -972,16 +969,10 @@
tool_call_chunks = 0
finish_reason = None
accumulated_content = ""
- mcp_tool_results_from_server: list = []
for chunk in stream:
chunk_count += 1
- # Collect MCP tool results emitted by the server
- chunk_extra = getattr(chunk, "__pydantic_extra__", None) or {}
- if isinstance(chunk_extra, dict) and "mcp_tool_results" in chunk_extra:
- mcp_tool_results_from_server = chunk_extra["mcp_tool_results"]
-
if hasattr(chunk, "choices") and chunk.choices:
choice = chunk.choices[0]
delta = choice.delta
@@ -1065,6 +1056,9 @@
from ._scheduler import execute_local_tools_sync
+ # Record assistant message with tool calls (OpenAI format requires this before tool messages)
+ messages.append({"role": "assistant", "tool_calls": local_only})
+
execute_local_tools_sync(
local_only,
tool_handler,



Automated Release PR
0.3.0 (2026-02-07)
Full Changelog: v0.2.0...v0.3.0
Features
Bug Fixes
Chores
actions/github-script (cf53a9e)
actions/checkout version (c72dfca)

This pull request is managed by Stainless's GitHub App.
The semver version number is based on included commit messages. Alternatively, you can manually set the version number in the title of this pull request.
For a better experience, it is recommended to use either rebase-merge or squash-merge when merging this pull request.
🔗 Stainless website
📚 Read the docs
🙋 Reach out for help or questions