fix(providers): support Azure OpenAI chat completions #404
Signed-off-by: Yong-yuan-X <2463436064@qq.com>
@Yong-yuan-X is attempting to deploy a commit to rohitg00's projects Team on Vercel. A member of the Team first needs to authorize it.
📝 Walkthrough

This PR adds OpenAI and Azure OpenAI as supported LLM providers. The OpenAIProvider class implements MemoryProvider with automatic Azure detection, appropriate request headers (Bearer or api-key auth), and query-parameter handling. Configuration detection and factory wiring enable OPENAI_API_KEY to trigger provider activation. Environment variables and documentation are updated to guide users.

Changes: OpenAI provider implementation
Sequence Diagram

```mermaid
sequenceDiagram
    participant Caller
    participant OpenAIProvider
    participant URLBuilder
    participant HeaderBuilder
    participant BodyBuilder
    participant OpenAI_API
    Caller->>OpenAIProvider: compress(systemPrompt, userPrompt)
    activate OpenAIProvider
    OpenAIProvider->>URLBuilder: buildRequestUrl()
    activate URLBuilder
    alt isAzure detected
        URLBuilder-->>OpenAIProvider: {deploymentURL}/chat/completions?api-version=2024-10-21
    else OpenAI compatible
        URLBuilder-->>OpenAIProvider: https://api.openai.com/v1/chat/completions
    end
    deactivate URLBuilder
    OpenAIProvider->>HeaderBuilder: buildHeaders()
    activate HeaderBuilder
    alt isAzure detected
        HeaderBuilder-->>OpenAIProvider: {api-key: apiKey}
    else OpenAI compatible
        HeaderBuilder-->>OpenAIProvider: {Authorization: Bearer apiKey}
    end
    deactivate HeaderBuilder
    OpenAIProvider->>BodyBuilder: buildRequestBody()
    activate BodyBuilder
    alt isAzure detected
        BodyBuilder-->>OpenAIProvider: {max_tokens, messages}
    else OpenAI compatible
        BodyBuilder-->>OpenAIProvider: {model, max_tokens, messages}
    end
    deactivate BodyBuilder
    OpenAIProvider->>OpenAI_API: POST with URL, headers, body
    activate OpenAI_API
    OpenAI_API-->>OpenAIProvider: {choices:[{message:{content:string}}]}
    deactivate OpenAI_API
    OpenAIProvider-->>Caller: extracted content string
    deactivate OpenAIProvider
```
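The branching the diagram describes can be sketched as standalone helpers. This is a minimal sketch: the real provider implements these as private methods on OpenAIProvider, and the `max_tokens` value of 1024 here is a placeholder assumption, not the PR's actual default.

```typescript
// Hypothetical standalone versions of the builders named in the diagram.
type Message = { role: "system" | "user"; content: string };

function buildRequestUrl(baseUrl: string, isAzure: boolean, apiVersion = "2024-10-21"): string {
  // Azure deployments require an api-version query parameter.
  return isAzure
    ? `${baseUrl}/chat/completions?api-version=${apiVersion}`
    : `${baseUrl}/chat/completions`;
}

function buildHeaders(apiKey: string, isAzure: boolean): Record<string, string> {
  // Azure key-based auth uses the api-key header; OpenAI uses a Bearer token.
  return isAzure
    ? { "api-key": apiKey, "Content-Type": "application/json" }
    : { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" };
}

function buildRequestBody(model: string, messages: Message[], isAzure: boolean): string {
  // Azure infers the model from the deployment URL, so it is omitted there.
  const body = isAzure
    ? { max_tokens: 1024, messages }
    : { model, max_tokens: 1024, messages };
  return JSON.stringify(body);
}
```

The three alt blocks in the diagram map onto the three ternaries above, all keyed off a single isAzure flag detected once from the base URL.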
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Warning: there were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 ESLint
ESLint skipped: no ESLint configuration detected in root package.json.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/providers/openai.ts`:
- Around line 97-101: The outbound fetch call that creates `response` (calling
this.buildRequestUrl(), this.buildHeaders(), this.buildBody()) needs an
AbortController-based timeout so stalled upstream/network requests don't block;
create an AbortController, pass controller.signal to fetch, start a setTimeout
that calls controller.abort() after a configured timeout, and ensure you clear
that timeout in a finally block so it doesn't leak; update the fetch invocation
to include the signal and handle the abort error path as appropriate in the
surrounding method.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d6f97af1-2ab4-4489-b7e5-076a9a2f3ac5
📒 Files selected for processing (8)

- .env.example
- README.md
- src/config.ts
- src/functions/summarize.ts
- src/providers/index.ts
- src/providers/openai.ts
- src/types.ts
- test/openai-provider.test.ts
```ts
const response = await fetch(this.buildRequestUrl(), {
  method: "POST",
  headers: this.buildHeaders(),
  body: this.buildBody(systemPrompt, userPrompt),
});
```
Add a timeout to outbound OpenAI requests.
fetch at Line 97 has no timeout, so stalled upstream/network connections can block summarize/compress indefinitely. Add an abort timeout and clear it in finally.
Suggested fix:

```diff
 private async call(
   systemPrompt: string,
   userPrompt: string,
 ): Promise<string> {
-  const response = await fetch(this.buildRequestUrl(), {
-    method: "POST",
-    headers: this.buildHeaders(),
-    body: this.buildBody(systemPrompt, userPrompt),
-  });
+  const controller = new AbortController();
+  const timeout = setTimeout(() => controller.abort(), 30_000);
+  let response: Response;
+  try {
+    response = await fetch(this.buildRequestUrl(), {
+      method: "POST",
+      headers: this.buildHeaders(),
+      body: this.buildBody(systemPrompt, userPrompt),
+      signal: controller.signal,
+    });
+  } finally {
+    clearTimeout(timeout);
+  }
```
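As an aside, on runtimes that ship `AbortSignal.timeout` (Node.js 17.3+ and modern browsers), the same guard can be written without a manual timer. The helper below is an illustrative sketch, not the patch the review proposes; its name and the JSON content-type header are assumptions.

```typescript
// AbortSignal.timeout() aborts the fetch after the given number of
// milliseconds and manages its own internal timer, so no try/finally
// cleanup of a setTimeout handle is needed.
async function postWithTimeout(url: string, body: string, timeoutMs = 30_000): Promise<Response> {
  return fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body,
    signal: AbortSignal.timeout(timeoutMs),
  });
}
```

On timeout the promise rejects with a TimeoutError DOMException rather than the AbortError produced by a manual controller.abort(), which is worth noting if the surrounding error handling inspects the error name.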
Fixes #374
Summary
- Support custom `OPENAI_BASE_URL` values and call `/chat/completions` without adding `/v1`.
- Append `api-version` for Azure OpenAI requests, defaulting to `2024-10-21`.
- Use `api-key` auth for Azure key-based requests; other requests use `Authorization: Bearer`.
- Do not duplicate `/v1` or `/chat/completions` when users provide a base URL that already includes those path segments.
- Document `AZURE_OPENAI_API_VERSION` in README and `.env.example`.
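The base-URL handling described in these bullets can be sketched as a pure helper. This is illustrative only; the function name and the exact trimming rules are assumptions, not the PR's code.

```typescript
// Build the final request URL without duplicating /chat/completions
// when the user-supplied base URL already contains it, and without
// forcing a /v1 segment onto custom base URLs.
function normalizeChatUrl(baseUrl: string, isAzure: boolean, apiVersion = "2024-10-21"): string {
  let url = baseUrl.replace(/\/+$/, ""); // drop trailing slashes
  if (!url.endsWith("/chat/completions")) {
    url += "/chat/completions"; // append only if missing
  }
  if (isAzure) {
    url += `?api-version=${apiVersion}`; // Azure requires api-version
  }
  return url;
}
```

Keeping this logic in one pure function makes the dedup behavior easy to unit-test against the base-URL shapes users actually pass in.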
Verification

- `npm.cmd test -- openai-provider.test.ts`
- `npx.cmd tsdown`

Summary by CodeRabbit
Release Notes
New Features
Documentation