fix(core): Intercept .withResponse() to preserve OpenAI stream instrumentation #19122
RulaKhaled wants to merge 1 commit into develop
Conversation
Codecov Results 📊 (generated by Codecov Action)
size-limit report 📦
node-overhead report 🧳 (note: this is a synthetic benchmark with a minimal express app and does not necessarily reflect the real-world performance impact in an application)
```js
const result = client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Test withResponse' }],
});
```
Test doesn't cover streaming .withResponse() regression
Low Severity
The PR description states the fix is specifically for streaming calls, where .withResponse() returned the original uninstrumented stream. However, the test scenario covers only non-streaming calls (no stream: true parameter). The review rules for fix PRs specify that tests should verify the specific regression being fixed, so a streaming test case with .withResponse() would give confidence that the core bug is actually fixed; see the sketch below for what that could look like.
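For reference, a minimal sketch of such a streaming scenario (the client setup and environment variable here are placeholders, not the repo's actual test helpers):

```js
import OpenAI from 'openai';

// Placeholder setup; a real scenario would reuse the mock-server client
// configuration from scenario-with-response.mjs.
const client = new OpenAI({ apiKey: 'mock-api-key', baseURL: process.env.MOCK_SERVER_URL });

const { data: stream, response } = await client.chat.completions
  .create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Test streaming withResponse' }],
    stream: true,
  })
  .withResponse();

// Iterating must go through the Sentry-instrumented stream, not the SDK's
// original one, for the regression in this PR to be covered.
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}
console.log('\nstatus:', response.status);
```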
dev-packages/node-integration-tests/suites/tracing/openai/scenario-with-response.mjs (outdated, resolved)
dev-packages/node-integration-tests/suites/tracing/openai/scenario-with-response.mjs (outdated, resolved)
```js
// Special handling for .withResponse() to preserve instrumentation
// .withResponse() returns { data: T, response: Response, request_id: string }
if (prop === 'withResponse' && typeof value === 'function') {
```
Q: we only apply special handling here for withResponse, do we not have this issue with other custom OpenAI methods (e.g. asResponse)?
The proxy should correctly route the other exposed functions. .withResponse() is different because it wraps the parsed data in an object, so we need to replace that data field with our instrumented version while preserving the response. asResponse specifically returns a plain Promise, which should already be handled.
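To illustrate the shape difference between the two helpers (the client setup and params here are placeholders):

```js
import OpenAI from 'openai';

const client = new OpenAI({ apiKey: 'mock-api-key' });
const params = { model: 'gpt-4', messages: [{ role: 'user', content: 'hi' }] };

// .withResponse() wraps the parsed data in an object alongside the raw
// Response, so the instrumented data has to be spliced into `data`:
const { data, response, request_id } = await client.chat.completions.create(params).withResponse();

// .asResponse() resolves to the raw Response directly, which the existing
// promise routing already handles:
const raw = await client.chat.completions.create(params).asResponse();
```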
```js
const useInstrumentedPromise = prop in Promise.prototype || prop === Symbol.toStringTag;
const source = useInstrumentedPromise ? instrumentedPromise : target;
```
m: can we add a comment here to explain what's happening? Not super obvious at first glance, I think.
Sure thing. We check whether to use the instrumented promise that carries Sentry tracing, or the original object for custom methods like .withResponse().
Will add a comment
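A possible wording for that comment (a suggestion, not the committed code):

```js
// Route built-in Promise members (then/catch/finally, Symbol.toStringTag)
// to the instrumented promise so Sentry tracing stays attached; everything
// else (custom SDK methods like .withResponse()) falls through to the
// original promise-like object.
const useInstrumentedPromise = prop in Promise.prototype || prop === Symbol.toStringTag;
const source = useInstrumentedPromise ? instrumentedPromise : target;
```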
Cursor Bugbot has reviewed your changes and found 2 potential issues.
Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.
Force-pushed from c4734fd to 36cccca (compare)
Force-pushed from 36cccca to 129d794 (compare)


When Sentry instruments the OpenAI client, we return an instrumented promise. The OpenAI SDK returns a custom promise-like object with extra methods like .withResponse(), so returning a plain Promise breaks those methods.
We added wrapPromiseWithMethods() to proxy SDK methods back onto the instrumented promise. However, for streaming calls, .withResponse() returned the original uninstrumented stream, so we fixed it by intercepting .withResponse() in the proxy (sketched below).
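A rough sketch of the interception, using the names from the diff above; the exact body is an assumption based on the description, not the committed code:

```js
// Inside the Proxy `get` trap added by wrapPromiseWithMethods():
if (prop === 'withResponse' && typeof value === 'function') {
  return (...args) =>
    // Run the SDK's own .withResponse() to get { data, response, request_id },
    // then replace `data` with whatever the instrumented promise resolves to
    // (for streaming calls, the instrumented stream).
    value.apply(target, args).then(result =>
      instrumentedPromise.then(data => ({ ...result, data })),
    );
}
```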
Closes #19073