
release: 0.3.0 #42

Open
stainless-app[bot] wants to merge 18 commits into main from release-please--branches--main--changes--next

Conversation


stainless-app[bot] commented Jan 13, 2026

Automated Release PR

0.3.0 (2026-02-07)

Full Changelog: v0.2.0...v0.3.0

Features

  • client: add custom JSON encoder for extended type support (30a7195)
  • client: add support for binary request streaming (48f4cca)
  • runner: dependency-aware parallel tool execution (7e6716f)
  • runner: dependency-aware parallel tool execution (#44) (a72f70f)

Bug Fixes

  • api: add byok provider model (bf52572)
  • api: default auth server (38c637a)
  • docs: fix mcp installation instructions for remote servers (e4e3619)
  • runner: allow local tool execution in mixed MCP+local scenarios (5d0ce6d)
  • runner: inject server tool results into conversation for mixed tool calls (288b70e)
  • runner: preserve thought_signature in tool call accumulation and extraction (77e5958)
  • runner: server tool results, mixed-tool execution, thought_signature passthrough (#45) (637d9b8)
  • runner: skip early break when local tools need execution alongside MCP (ad7379b)

Chores

  • ci: add missing environment (0ec49ed)
  • ci: upgrade actions/github-script (cf53a9e)
  • internal: update actions/checkout version (c72dfca)
  • runner: strip commented-out production version and banner comments from core.py (59350e3)

This pull request is managed by Stainless's GitHub App.

The semver version number is based on included commit messages. Alternatively, you can manually set the version number in the title of this pull request.

For a better experience, it is recommended to use either rebase-merge or squash-merge when merging this pull request.

🔗 Stainless website
📚 Read the docs
🙋 Reach out for help or questions


stainless-app bot commented Jan 13, 2026

🧪 Testing

To try out this version of the SDK, run:

pip install 'https://pkg.stainless.com/s/dedalus-sdk-python/bf525727fbbb537225239ebcdf88c85c4e58d05d/dedalus_labs-0.2.0-py3-none-any.whl'

Expires at: Mon, 09 Mar 2026 15:34:14 GMT
Updated at: Sat, 07 Feb 2026 15:34:14 GMT

stainless-app[bot] force-pushed the release-please--branches--main--changes--next branch from ac5c5d9 to c72a9be on January 16, 2026 at 18:31
stainless-app[bot] force-pushed the release-please--branches--main--changes--next branch from c72a9be to 00246c5 on January 22, 2026 at 03:58
*,
cast_to: Type[ResponseT],
body: Body | None = None,
content: BinaryTypes | None = None,

Iterator content silently lost on request retry

Medium Severity

When content is an Iterable[bytes] (like a generator), the iterator is consumed on the first request attempt. If a retryable error occurs, the retry logic uses model_copy which performs a shallow copy, so options.content still references the same exhausted iterator. Subsequent retry attempts send an empty request body. This is exacerbated by DEFAULT_MAX_RETRIES being increased from 0 to 2 in this same PR. Users passing generator-based content could experience silent data loss on transient failures.
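The failure mode can be reproduced in miniature. The sketch below uses a plain dataclass and `dataclasses.replace` as a stand-in for the SDK's options object and pydantic's `model_copy`; `RequestOptions`, `send`, and `body_chunks` are illustrative names, not the SDK's actual API.

```python
from dataclasses import dataclass, replace
from typing import Iterable, Optional


@dataclass
class RequestOptions:  # hypothetical stand-in for the SDK's options object
    content: Optional[Iterable[bytes]] = None


def send(options: RequestOptions) -> bytes:
    # Joining the chunks simulates writing the body to the wire, which
    # permanently consumes a generator-based `content`.
    return b"".join(options.content or [])


def body_chunks():
    yield b"part-1"
    yield b"part-2"


options = RequestOptions(content=body_chunks())
first = send(options)  # first attempt consumes the generator

# dataclasses.replace, like a shallow model_copy, copies field *references*,
# so the "fresh" options still point at the exhausted generator.
retry = send(replace(options))

print(first)  # b'part-1part-2'
print(retry)  # b'' -- the retry would send an empty body
```

A deep copy would not help either, since an exhausted generator cannot be rewound; the robust fix is to buffer iterable content or require a re-openable source before enabling retries.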

Additional Locations (1)

stainless-app[bot] force-pushed the release-please--branches--main--changes--next branch from 00246c5 to f4bfcf4 on January 23, 2026 at 18:15
stainless-app[bot] force-pushed the release-please--branches--main--changes--next branch from f4bfcf4 to 120f4f7 on January 28, 2026 at 16:37
stainless-app[bot] force-pushed the release-please--branches--main--changes--next branch from 120f4f7 to b02a9df on February 7, 2026 at 10:42

cursor bot commented Feb 7, 2026

PR Summary

Medium Risk
Touches core request/serialization and retry behavior, plus runner tool execution scheduling; regressions could affect request bodies, retries, and tool-call ordering/concurrency.

Overview
Bumps the SDK to 0.3.0 (manifest, pyproject.toml, _version.py, changelog/spec stats) and updates CI workflows (newer actions/checkout/github-script, PyPI publish job now targets production).

Adds a new OCR API surface (client.ocr.process with types/resources and docs/tests) and extends the client to send binary request bodies via a new content parameter on the sync/async post/put/patch/delete methods, deprecating the raw-bytes body. JSON serialization also switches to a custom openapi_dumps encoder that supports datetime values and pydantic models.
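An encoder of that shape can be sketched with the standard library alone. This is an assumption about the approach, not the SDK's actual openapi_dumps implementation; `ExtendedEncoder` is an illustrative name.

```python
import json
from datetime import date, datetime, timezone


class ExtendedEncoder(json.JSONEncoder):
    """Hypothetical encoder: serializes datetimes as ISO 8601 strings and
    pydantic-style models via their model_dump() method."""

    def default(self, o):
        if isinstance(o, (datetime, date)):
            return o.isoformat()
        if hasattr(o, "model_dump"):  # duck-typed check for pydantic v2 models
            return o.model_dump()
        return super().default(o)


payload = {"ts": datetime(2026, 2, 7, tzinfo=timezone.utc)}
print(json.dumps(payload, cls=ExtendedEncoder))
# {"ts": "2026-02-07T00:00:00+00:00"}
```

Subclassing `json.JSONEncoder` and overriding `default` is the standard extension point: it is only invoked for objects the base encoder cannot serialize, so plain dicts, lists, and strings take the fast path unchanged.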

Refactors DedalusRunner to forward a broad set of chat-completions parameters via api_kwargs, preserves thought_signature, and introduces a dependency-aware local tool scheduler that topologically orders/parallelizes local tool execution (with sequential fallback on cycles). Defaults are adjusted to retry requests 2 times with updated backoff timings, and the client now defaults as_base_url and includes X-Provider-Model header when set.
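Dependency-aware scheduling of this kind is typically Kahn's algorithm with the ready set grouped into parallelizable batches. The sketch below shows that shape under assumed inputs (a dict mapping tool name to its dependencies); `schedule_batches` is illustrative, not the SDK's scheduler.

```python
from collections import defaultdict, deque
from typing import Dict, List, Optional


def schedule_batches(deps: Dict[str, List[str]]) -> Optional[List[List[str]]]:
    """Group tool names into batches: every tool in a batch has all of its
    dependencies satisfied by earlier batches, so each batch can run in
    parallel. Returns None on a cycle so the caller can fall back to
    sequential execution, as the changelog describes."""
    indegree = {tool: len(d) for tool, d in deps.items()}
    dependents = defaultdict(list)
    for tool, ds in deps.items():
        for dep in ds:
            dependents[dep].append(tool)

    ready = deque(t for t, n in indegree.items() if n == 0)
    batches: List[List[str]] = []
    scheduled = 0
    while ready:
        batch = sorted(ready)  # deterministic ordering within a batch
        ready.clear()
        batches.append(batch)
        scheduled += len(batch)
        for tool in batch:
            for child in dependents[tool]:
                indegree[child] -= 1
                if indegree[child] == 0:
                    ready.append(child)
    # If anything was left unscheduled, the dependency graph has a cycle.
    return batches if scheduled == len(deps) else None


print(schedule_batches({"fetch_a": [], "fetch_b": [], "merge": ["fetch_a", "fetch_b"]}))
# [['fetch_a', 'fetch_b'], ['merge']]
print(schedule_batches({"x": ["y"], "y": ["x"]}))
# None (cycle)
```

In an async runner, each batch would be dispatched with something like `asyncio.gather`, which is where the parallelism comes from; the topological grouping only decides what is safe to run together.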

Written by Cursor Bugbot for commit a3df0b4. This will update automatically on new commits. Configure here.

stainless-app[bot] force-pushed the release-please--branches--main--changes--next branch from b02a9df to 3c98764 on February 7, 2026 at 11:13

cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 2 potential issues.

Bugbot Autofix is ON. A Cloud Agent has been kicked off to fix the reported issues.

[],
steps,
verbose=exec_config.verbose,
)

Missing assistant message before tool execution in streaming paths

High Severity

Both streaming paths (_execute_streaming_async and _execute_streaming_sync) call the scheduler's execute_local_tools_async/execute_local_tools_sync without first appending an assistant message with tool_calls to the messages list. The scheduler's own docstring explicitly states the caller is responsible for this. The non-streaming paths (_execute_tool_calls at line 1204, _execute_tool_calls_sync at line 1229) correctly append {"role": "assistant", "tool_calls": ...} before calling the scheduler. The old streaming code also did this but the line was removed during the refactoring. This produces a malformed conversation (tool messages without a preceding assistant message), which will cause the API to reject subsequent requests.
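The required conversation shape can be shown with a minimal, hand-built message list (illustrative values, not the runner's code): every tool-role message must reference a tool_calls entry on the assistant message immediately before it.

```python
# Hypothetical tool call, in the OpenAI chat-completions wire format.
tool_calls = [
    {
        "id": "call_1",
        "type": "function",
        "function": {"name": "lookup", "arguments": "{}"},
    }
]

messages = [
    {"role": "user", "content": "Look something up."},
    # This assistant message is what the streaming paths were missing;
    # without it the tool message below is orphaned and the API rejects
    # the next request.
    {"role": "assistant", "tool_calls": tool_calls},
    {"role": "tool", "tool_call_id": "call_1", "content": "result"},
]

# The tool result is tied back to its call through the matching id.
assert messages[1]["tool_calls"][0]["id"] == messages[2]["tool_call_id"]
```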

Additional Locations (1)


# Collect MCP tool results emitted by the server
chunk_extra = getattr(chunk, "__pydantic_extra__", None) or {}
if isinstance(chunk_extra, dict) and "mcp_tool_results" in chunk_extra:
mcp_tool_results_from_server = chunk_extra["mcp_tool_results"]

Collected MCP tool results variable is never used

Low Severity

mcp_tool_results_from_server is assigned in both _execute_streaming_async and _execute_streaming_sync but is never read after assignment. The variable is dead code — the collected MCP tool results are silently discarded. Given the PR includes a fix for "inject server tool results into conversation for mixed tool calls," this may represent an incomplete implementation for the streaming paths.

Additional Locations (1)



cursor bot commented Feb 7, 2026

Bugbot Autofix prepared fixes for 2 of the 2 bugs found in the latest run.

  • ✅ Fixed: Missing assistant message before tool execution in streaming paths
    • Added messages.append({"role": "assistant", "tool_calls": local_only}) before calling the scheduler in both _execute_streaming_async and _execute_streaming_sync to match the non-streaming behavior.
  • ✅ Fixed: Collected MCP tool results variable is never used
    • Removed the dead mcp_tool_results_from_server variable and its collection logic from both streaming paths since it was assigned but never read.

Create PR

Or push these changes by commenting:

@cursor push 9e8ecee943
Preview (9e8ecee943)
diff --git a/src/dedalus_labs/lib/runner/core.py b/src/dedalus_labs/lib/runner/core.py
--- a/src/dedalus_labs/lib/runner/core.py
+++ b/src/dedalus_labs/lib/runner/core.py
@@ -686,7 +686,6 @@
             content_chunks = 0
             tool_call_chunks = 0
             finish_reason = None
-            mcp_tool_results_from_server: list = []
             async for chunk in stream:
                 chunk_count += 1
                 if exec_config.verbose:
@@ -697,11 +696,6 @@
                         if isinstance(meta, dict) and meta.get("type") == "agent_updated":
                             print(f" [EVENT] agent_updated: agent={meta.get('agent')} model={meta.get('model')}")
 
-                # Collect MCP tool results emitted by the server
-                chunk_extra = getattr(chunk, "__pydantic_extra__", None) or {}
-                if isinstance(chunk_extra, dict) and "mcp_tool_results" in chunk_extra:
-                    mcp_tool_results_from_server = chunk_extra["mcp_tool_results"]
-
                 if hasattr(chunk, "choices") and chunk.choices:
                     choice = chunk.choices[0]
                     delta = choice.delta
@@ -776,6 +770,9 @@
 
                     from ._scheduler import execute_local_tools_async
 
+                    # Record assistant message with tool calls (OpenAI format requires this before tool messages)
+                    messages.append({"role": "assistant", "tool_calls": local_only})
+
                     await execute_local_tools_async(
                         local_only,
                         tool_handler,
@@ -972,16 +969,10 @@
             tool_call_chunks = 0
             finish_reason = None
             accumulated_content = ""
-            mcp_tool_results_from_server: list = []
 
             for chunk in stream:
                 chunk_count += 1
 
-                # Collect MCP tool results emitted by the server
-                chunk_extra = getattr(chunk, "__pydantic_extra__", None) or {}
-                if isinstance(chunk_extra, dict) and "mcp_tool_results" in chunk_extra:
-                    mcp_tool_results_from_server = chunk_extra["mcp_tool_results"]
-
                 if hasattr(chunk, "choices") and chunk.choices:
                     choice = chunk.choices[0]
                     delta = choice.delta
@@ -1065,6 +1056,9 @@
 
                     from ._scheduler import execute_local_tools_sync
 
+                    # Record assistant message with tool calls (OpenAI format requires this before tool messages)
+                    messages.append({"role": "assistant", "tool_calls": local_only})
+
                     execute_local_tools_sync(
                         local_only,
                         tool_handler,

stainless-app[bot] force-pushed the release-please--branches--main--changes--next branch from 3c98764 to b8e048e on February 7, 2026 at 15:33
stainless-app[bot] force-pushed the release-please--branches--main--changes--next branch from b8e048e to a3df0b4 on February 7, 2026 at 15:34