
Commit 0ecde36

fix(providers): add 60s timeout to all OpenAI provider fetch requests
Currently, fetch() calls to OpenAI endpoints in the Responses API provider have no timeout, so HTTP requests can hang indefinitely on network issues, server unresponsiveness, or slow model responses. This causes:

- Simulations stuck waiting forever for LLM responses
- Resource exhaustion when multiple sims run concurrently
- Poor user experience (no error feedback, hanging UI)
- Wasted compute resources on hung HTTP requests

This adds a 60-second timeout using AbortSignal.timeout() while preserving any existing abort signals via AbortSignal.any().

**PeakInfer Issue:** Missing timeout on LLM API HTTP requests
**Impact:** Prevents indefinite hangs and improves reliability
**Category:** Reliability + Latency

Changes:

- Added 60s timeout to the postResponses() fetch (lines 265-268)
- Added 60s timeout to the streaming fetch (lines 293-296)
- Added 60s timeout to the final streaming fetch after tool calls (lines 718-721)
- Preserves existing abortSignal functionality via AbortSignal.any()
- Applies to all OpenAI-compatible providers (OpenAI, Azure, etc.)

This follows PeakInfer best practices for production LLM systems:

- Prevents resource exhaustion from hung requests
- Enables faster error detection and recovery
- Improves system resilience under network issues
- A 60s timeout balances patience for long responses against system health

🤖 Generated with PeakInfer LLM inference optimization
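In isolation, the signal-combining logic this commit repeats at each call site can be sketched as a small helper. (`withTimeout` is a hypothetical name for illustration, not code from the repository; the 60 000 ms default mirrors the diff. Requires Node 20.3+ for `AbortSignal.any()`.)

```typescript
// Hypothetical helper mirroring the pattern added in this commit:
// race a 60s timeout against the caller-supplied abort signal.
function withTimeout(abortSignal?: AbortSignal, ms = 60_000): AbortSignal {
  const timeoutSignal = AbortSignal.timeout(ms) // aborts after `ms` milliseconds
  return abortSignal
    ? AbortSignal.any([timeoutSignal, abortSignal]) // whichever aborts first wins
    : timeoutSignal
}

// A caller-initiated abort still works exactly as before:
const controller = new AbortController()
const combined = withTimeout(controller.signal)
controller.abort() // user cancel propagates to the combined signal
console.log(combined.aborted) // true
```

Passing the combined signal to `fetch()` means the request is rejected either when the timeout fires or when the caller aborts, whichever happens first, without losing the pre-existing cancel path.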
1 parent 1de514b commit 0ecde36

File tree

1 file changed · +21 −3 lines changed


apps/sim/providers/openai/core.ts

Lines changed: 21 additions & 3 deletions
```diff
@@ -261,11 +261,17 @@ export async function executeResponsesProviderRequest(
   const postResponses = async (
     body: Record<string, unknown>
   ): Promise<OpenAI.Responses.Response> => {
+    // Create a 60s timeout signal and combine with any existing abort signal
+    const timeoutSignal = AbortSignal.timeout(60000) // 60 seconds
+    const combinedSignal = request.abortSignal
+      ? AbortSignal.any([timeoutSignal, request.abortSignal])
+      : timeoutSignal
+
     const response = await fetch(config.endpoint, {
       method: 'POST',
       headers: config.headers,
       body: JSON.stringify(body),
-      signal: request.abortSignal,
+      signal: combinedSignal,
     })

     if (!response.ok) {
@@ -283,11 +289,17 @@ export async function executeResponsesProviderRequest(
   if (request.stream && (!tools || tools.length === 0)) {
     logger.info(`Using streaming response for ${config.providerLabel} request`)

+    // Create a 60s timeout signal and combine with any existing abort signal
+    const timeoutSignal = AbortSignal.timeout(60000) // 60 seconds
+    const combinedSignal = request.abortSignal
+      ? AbortSignal.any([timeoutSignal, request.abortSignal])
+      : timeoutSignal
+
     const streamResponse = await fetch(config.endpoint, {
       method: 'POST',
       headers: config.headers,
       body: JSON.stringify(createRequestBody(initialInput, { stream: true })),
-      signal: request.abortSignal,
+      signal: combinedSignal,
     })

     if (!streamResponse.ok) {
@@ -702,11 +714,17 @@ export async function executeResponsesProviderRequest(
     }
   }

+  // Create a 60s timeout signal and combine with any existing abort signal
+  const timeoutSignal = AbortSignal.timeout(60000) // 60 seconds
+  const combinedSignal = request.abortSignal
+    ? AbortSignal.any([timeoutSignal, request.abortSignal])
+    : timeoutSignal
+
   const streamResponse = await fetch(config.endpoint, {
     method: 'POST',
     headers: config.headers,
     body: JSON.stringify(createRequestBody(currentInput, streamOverrides)),
-    signal: request.abortSignal,
+    signal: combinedSignal,
   })

   if (!streamResponse.ok) {
```
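One practical consequence of the combined signal is that callers can still tell a timeout apart from a manual cancel: `AbortSignal.any()` adopts the abort reason of whichever source signal fired first, so a timeout surfaces as a `DOMException` named `TimeoutError` rather than `AbortError`. A minimal standalone sketch (assumes Node 20.3+; the 10 ms delay is shortened purely for the demo):

```typescript
// Demonstrate reason propagation through AbortSignal.any() without a network call.
const timeoutSignal = AbortSignal.timeout(10) // 10 ms instead of 60 s for the demo
const userController = new AbortController()
const combined = AbortSignal.any([timeoutSignal, userController.signal])

// Keep the event loop alive long enough for the timeout timer to fire.
await new Promise((resolve) => setTimeout(resolve, 50))

console.log(combined.aborted, (combined.reason as DOMException).name)
// true TimeoutError
```

Error handlers around the provider's `fetch()` calls can branch on that reason name to report "request timed out after 60s" separately from "request cancelled by caller".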

0 commit comments
