
fix(providers): support Azure OpenAI chat completions #404

Open
Yong-yuan-X wants to merge 2 commits into rohitg00:main from Yong-yuan-X:fix/374-azure-openai

Conversation


Yong-yuan-X commented May 15, 2026

Fixes #374

Summary

  • Add an OpenAI-compatible LLM provider for compression and summarization.
  • Detect Azure-shaped OPENAI_BASE_URL values and call /chat/completions without adding /v1.
  • Add api-version for Azure OpenAI requests, defaulting to 2024-10-21.
  • Use api-key auth for Azure key-based requests.
  • Keep non-Azure OpenAI-compatible endpoints using Authorization: Bearer.
  • Avoid duplicating /v1 or /chat/completions when users provide a base URL that already includes those path segments.
  • Document AZURE_OPENAI_API_VERSION in README and .env.example.
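The URL-shaping rules above can be sketched roughly as follows. The helper names and the exact Azure-detection pattern here are illustrative assumptions, not the PR's actual code; the real implementation lives in src/providers/openai.ts.

```typescript
// Sketch of the base-URL handling described in the summary bullets.
// Assumed default when AZURE_OPENAI_API_VERSION is unset.
const DEFAULT_API_VERSION = "2024-10-21";

// Azure-shaped base URLs look like
// https://<resource>.openai.azure.com/openai/deployments/<deployment>
function isAzureBaseUrl(baseUrl: string): boolean {
  return /\.openai\.azure\.com/i.test(baseUrl) || /\/openai\/deployments\//i.test(baseUrl);
}

function buildRequestUrl(baseUrl: string, apiVersion?: string): string {
  const trimmed = baseUrl.replace(/\/+$/, "");
  if (isAzureBaseUrl(trimmed)) {
    // Azure: call /chat/completions directly (no /v1) and append api-version.
    const version = apiVersion ?? DEFAULT_API_VERSION;
    const path = trimmed.endsWith("/chat/completions")
      ? trimmed
      : `${trimmed}/chat/completions`;
    return `${path}?api-version=${version}`;
  }
  // Non-Azure: end up with exactly one /v1/chat/completions, without
  // duplicating segments the user already put in OPENAI_BASE_URL.
  let path = trimmed;
  if (!path.endsWith("/chat/completions")) {
    if (!path.endsWith("/v1")) path += "/v1";
    path += "/chat/completions";
  }
  return path;
}
```

With this sketch, `https://api.openai.com`, `https://api.openai.com/v1`, and `https://api.openai.com/v1/chat/completions` all resolve to the same request URL, which matches the "avoid duplicating" bullet above.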

Verification

  • npm.cmd test -- openai-provider.test.ts
  • npx.cmd tsdown

Summary by CodeRabbit

Release Notes

  • New Features

    • Added OpenAI provider support for LLM operations.
    • Added Azure OpenAI deployment support with automatic configuration.
    • Expanded compatibility to include OpenAI-compatible providers (DeepSeek, SiliconFlow, vLLM, LM Studio, Ollama-compatible proxies).
  • Documentation

    • Updated configuration documentation with OpenAI provider options.
    • Added clear examples for OpenAI API key and model setup.

Review Change Stack

Signed-off-by: Yong-yuan-X <2463436064@qq.com>
Signed-off-by: Yong-yuan-X <2463436064@qq.com>

vercel Bot commented May 15, 2026

@Yong-yuan-X is attempting to deploy a commit to rohitg00's projects team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai Bot commented May 15, 2026

📝 Walkthrough

This PR adds OpenAI and Azure OpenAI as supported LLM providers. The OpenAIProvider class implements MemoryProvider with automatic Azure detection, appropriate request headers (Bearer or api-key auth), and query parameter handling. Configuration detection and factory wiring enable OPENAI_API_KEY to trigger provider activation. Environment variables and documentation are updated to guide users.

Changes

OpenAI Provider Implementation

| Layer / File(s) | Summary |
| --- | --- |
| Provider type contract (src/types.ts) | ProviderType union now includes the "openai" literal alongside existing providers. |
| OpenAIProvider class (src/providers/openai.ts) | New exported class implementing MemoryProvider, with a constructor storing the API key, model, max tokens, and resolved base URL. Azure detection via domain/path pattern matching. The URL builder handles non-Azure /v1/chat/completions and Azure deployment URLs with an api-version query parameter (defaulting to 2024-10-21). The header builder switches between Bearer (OpenAI) and api-key (Azure) authentication. The body builder includes model only for non-Azure requests. A shared call() method executes the POST request, validates the response status with detailed error messages, parses the JSON, and extracts the first choice's message content. |
| Configuration detection and factory wiring (src/config.ts, src/providers/index.ts, src/functions/summarize.ts) | detectProvider() checks OPENAI_API_KEY before other keys and returns an openai config with OPENAI_MODEL (default: gpt-4o-mini) and optional OPENAI_BASE_URL. detectLlmProviderKind() includes OPENAI_API_KEY in llm detection. VALID_PROVIDERS is expanded to permit "openai" in fallback configuration. Error messages now list OPENAI_API_KEY alongside the other recognized provider keys. createBaseProvider() adds an "openai" case constructing OpenAIProvider from env and config. |
| Test coverage (test/openai-provider.test.ts) | Vitest suite validating OpenAI default behavior (Bearer auth, /v1/chat/completions, model in body), Azure behavior (api-key auth, deployment URL, api-version parameter, no model), base URL normalization (no duplicated /v1 or /chat/completions), and the Azure api-version default fallback. |
| Configuration documentation (.env.example, README.md) | .env.example adds commented OpenAI configuration: OPENAI_API_KEY, OPENAI_MODEL, OPENAI_BASE_URL, AZURE_OPENAI_API_VERSION. README adds an "OpenAI-compatible" row to the LLM providers table (OpenAI, Azure, DeepSeek, SiliconFlow, vLLM, LM Studio, Ollama-compatible) and foregrounds the OpenAI env variables in the .env template example, removing a duplicate section. |
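For reference, the commented block described for .env.example would look roughly like this; the key value is a placeholder, and the comment wording is illustrative rather than the file's exact text:

```shell
# OpenAI-compatible LLM provider (optional)
# OPENAI_API_KEY=sk-your-key-here
# OPENAI_MODEL=gpt-4o-mini
# OPENAI_BASE_URL=https://api.openai.com/v1
# Only used for Azure-shaped base URLs; defaults to 2024-10-21
# AZURE_OPENAI_API_VERSION=2024-10-21
```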

Sequence Diagram

sequenceDiagram
  participant Caller
  participant OpenAIProvider
  participant URLBuilder
  participant HeaderBuilder
  participant BodyBuilder
  participant OpenAI_API
  Caller->>OpenAIProvider: compress(systemPrompt, userPrompt)
  activate OpenAIProvider
  OpenAIProvider->>URLBuilder: buildRequestUrl()
  activate URLBuilder
  alt isAzure detected
    URLBuilder-->>OpenAIProvider: {deploymentURL}/chat/completions?api-version=2024-10-21
  else OpenAI compatible
    URLBuilder-->>OpenAIProvider: https://api.openai.com/v1/chat/completions
  end
  deactivate URLBuilder
  OpenAIProvider->>HeaderBuilder: buildHeaders()
  activate HeaderBuilder
  alt isAzure detected
    HeaderBuilder-->>OpenAIProvider: {api-key: apiKey}
  else OpenAI compatible
    HeaderBuilder-->>OpenAIProvider: {Authorization: Bearer apiKey}
  end
  deactivate HeaderBuilder
  OpenAIProvider->>BodyBuilder: buildRequestBody()
  activate BodyBuilder
  alt isAzure detected
    BodyBuilder-->>OpenAIProvider: {max_tokens, messages}
  else OpenAI compatible
    BodyBuilder-->>OpenAIProvider: {model, max_tokens, messages}
  end
  deactivate BodyBuilder
  OpenAIProvider->>OpenAI_API: POST with URL, headers, body
  activate OpenAI_API
  OpenAI_API-->>OpenAIProvider: {choices:[{message:{content:string}}]}
  deactivate OpenAI_API
  OpenAIProvider-->>Caller: extracted content string
  deactivate OpenAIProvider
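The header and body branches in the diagram can be sketched as follows. The standalone function signatures are illustrative stand-ins for the private methods on OpenAIProvider described above.

```typescript
// Sketch of the auth-header and request-body branching shown in the diagram.
type ChatMessage = { role: "system" | "user"; content: string };

function buildHeaders(apiKey: string, isAzure: boolean): Record<string, string> {
  // Azure key-based auth uses the api-key header; other OpenAI-compatible
  // endpoints use a standard Bearer token.
  return isAzure
    ? { "Content-Type": "application/json", "api-key": apiKey }
    : { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` };
}

function buildRequestBody(
  model: string,
  maxTokens: number,
  messages: ChatMessage[],
  isAzure: boolean,
): string {
  // An Azure deployment URL already pins the model, so the body omits it.
  const body: Record<string, unknown> = { max_tokens: maxTokens, messages };
  if (!isAzure) body.model = model;
  return JSON.stringify(body);
}
```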

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

  • rohitg00/agentmemory#383: Sync check for .env.example documentation of environment variables; this PR's env-var additions will be validated against the provided .env.example updates.

Poem

🐰 OpenAI arrives with Azure in tow,
Headers and URLs shift—Bearer or api-key to know,
Deployments detected, versions append,
One provider now bridges where API paths bend.
Tests bloom to ensure both paths dance just right!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (4 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately summarizes the main change: adding Azure OpenAI chat completions support to the OpenAI provider. |
| Linked Issues check | ✅ Passed | All requirements from issue #374 are met: Azure endpoint detection, api-version query parameter handling, api-key header authentication, and preserved non-Azure provider compatibility. |
| Out of Scope Changes check | ✅ Passed | All changes are scoped to OpenAI provider support and documentation; no unrelated modifications detected. |



Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 ESLint

If the error stems from missing dependencies, add them to the package.json file. For unrecoverable errors (e.g., due to private dependencies), disable the tool in the CodeRabbit configuration.

ESLint skipped: no ESLint configuration detected in root package.json. To enable, add eslint to devDependencies.




coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@src/providers/openai.ts`:
- Around line 97-101: The outbound fetch call that creates `response` (calling
this.buildRequestUrl(), this.buildHeaders(), this.buildBody()) needs an
AbortController-based timeout so stalled upstream/network requests don't block;
create an AbortController, pass controller.signal to fetch, start a setTimeout
that calls controller.abort() after a configured timeout, and ensure you clear
that timeout in a finally block so it doesn't leak; update the fetch invocation
to include the signal and handle the abort error path as appropriate in the
surrounding method.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: d6f97af1-2ab4-4489-b7e5-076a9a2f3ac5

📥 Commits

Reviewing files that changed from the base of the PR and between 372c6a6 and 08de94d.

📒 Files selected for processing (8)
  • .env.example
  • README.md
  • src/config.ts
  • src/functions/summarize.ts
  • src/providers/index.ts
  • src/providers/openai.ts
  • src/types.ts
  • test/openai-provider.test.ts

Comment thread: src/providers/openai.ts
Comment on lines +97 to +101
const response = await fetch(this.buildRequestUrl(), {
  method: "POST",
  headers: this.buildHeaders(),
  body: this.buildBody(systemPrompt, userPrompt),
});


⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Add a timeout to outbound OpenAI requests.

fetch at Line 97 has no timeout, so stalled upstream/network connections can block summarize/compress indefinitely. Add an abort timeout and clear it in finally.

Suggested fix
   private async call(
     systemPrompt: string,
     userPrompt: string,
   ): Promise<string> {
-    const response = await fetch(this.buildRequestUrl(), {
-      method: "POST",
-      headers: this.buildHeaders(),
-      body: this.buildBody(systemPrompt, userPrompt),
-    });
+    const controller = new AbortController();
+    const timeout = setTimeout(() => controller.abort(), 30_000);
+    let response: Response;
+    try {
+      response = await fetch(this.buildRequestUrl(), {
+        method: "POST",
+        headers: this.buildHeaders(),
+        body: this.buildBody(systemPrompt, userPrompt),
+        signal: controller.signal,
+      });
+    } finally {
+      clearTimeout(timeout);
+    }



Development

Successfully merging this pull request may close these issues.

OpenAI provider: detect Azure-shaped OPENAI_BASE_URL and append api-version

1 participant