Fix initialization error when only OPENAI_API_KEY is configured #2
base: main
Conversation
…re not available

- Refactor openai.ts to conditionally initialize Azure OpenAI only when both AZURE_SECRET_KEY and AZURE_ENDPOINT_URL are present
- Add createAzureOpenAI() factory function for lazy initialization
- Update transliterate route to use direct OpenAI API when Azure is not configured
- Maintain backward compatibility for existing Azure OpenAI users

This allows the application to work with just the OPENAI_API_KEY environment variable, making it easier for contributors and users who don't have Azure OpenAI access.
@rsmn is attempting to deploy a commit to the DigitalSeem's projects Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough

The changes introduce a new function for Azure OpenAI client creation and reorganize the transliteration API to support three scenarios: direct OpenAI use, Azure OpenAI with fallback for a specific style, and Azure OpenAI for all other cases. Imports and control flow are updated to accommodate these scenarios, with improved environment validation and error handling.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant API (route.ts)
    participant AzureOpenAI
    participant DirectOpenAI
    Client->>API (route.ts): POST transliteration request
    alt Azure client not configured and OpenAI API key present
        API (route.ts)->>DirectOpenAI: Make request (model based on style)
        DirectOpenAI-->>API (route.ts): Return result
    else SHARIASOURCE style (non-reverse)
        API (route.ts)->>AzureOpenAI: Try request
        alt AzureOpenAI succeeds
            AzureOpenAI-->>API (route.ts): Return result
        else AzureOpenAI fails
            API (route.ts)->>DirectOpenAI: Fallback request
            DirectOpenAI-->>API (route.ts): Return result
        end
    else Other styles
        API (route.ts)->>AzureOpenAI: Make request
        AzureOpenAI-->>API (route.ts): Return result
    end
    API (route.ts)-->>Client: Return transliteration response
```
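The branching above can be condensed into a small decision function. This is a hedged sketch of the routing described by the diagram; `chooseBranch` and its types are illustrative stand-ins, not the PR's actual code:

```typescript
// Illustrative decision helper mirroring the sequence diagram's three branches.
// chooseBranch, Style, and EnvFlags are assumptions for this sketch, not PR code.
type Style = "SHARIASOURCE" | "OTHER";

interface EnvFlags {
  azureConfigured: boolean;  // AZURE_SECRET_KEY + AZURE_ENDPOINT_URL both set
  openaiKeyPresent: boolean; // OPENAI_API_KEY set
}

function chooseBranch(env: EnvFlags, style: Style, reverse: boolean): string {
  if (!env.azureConfigured && env.openaiKeyPresent) {
    return "direct";                     // direct OpenAI for every request
  }
  if (style === "SHARIASOURCE" && !reverse) {
    return "azure-with-direct-fallback"; // try Azure, fall back on failure
  }
  return "azure";                        // Azure for all other styles
}

console.log(chooseBranch({ azureConfigured: false, openaiKeyPresent: true }, "OTHER", false)); // direct
```

Keeping the gate in one pure function like this also makes the env-combination behavior trivially unit-testable.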
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Actionable comments posted: 3
🔭 Outside diff range comments (1)
app/api/transliterate/route.ts (1)
91-106: Fix (Direct fallback): `max_completion_tokens` ➜ `max_tokens`. Same Chat Completions parameter correction as above.
```diff
 const completion = await directClient.chat.completions.create({
   model: "gpt-4.1",
   temperature: 0,
   messages: [
     { role: "system", content: systemPrompt },
     { role: "user", content: userPrompt }
   ],
-  response_format: {
-    "type": "text"
-  },
-  max_completion_tokens: 2048,
+  // Default output is text, so response_format is unnecessary.
+  max_tokens: 2048,
   top_p: 1,
   frequency_penalty: 0,
   presence_penalty: 0
 });
```
🧹 Nitpick comments (4)
lib/openai.ts (1)
3-17: Naming nit: consider exporting as `azureOpenAI` for clarity

The exported constant named `openai` actually refers to an Azure client (or null). Consider renaming to `azureOpenAI` to avoid confusion alongside `createDirectOpenAI`.

app/api/transliterate/route.ts (3)
2-2: Nit: Alias the Azure client import for clarity

To avoid confusion between "OpenAI" (direct) and Azure, consider aliasing:

```diff
-import { openai, createDirectOpenAI, createAzureOpenAI } from '@/lib/openai';
+import { openai as azureOpenAI, createDirectOpenAI, createAzureOpenAI } from '@/lib/openai';
```

Then update usages accordingly. Purely a readability improvement.
29-36: Coerce `useDirectOpenAI` to boolean to avoid accidental truthy/falsy bugs

`process.env.OPENAI_API_KEY` is a string | undefined; as written, `useDirectOpenAI` becomes string | false | undefined. Coerce to boolean.

```diff
-// Check if we should use direct OpenAI for all requests
-const useDirectOpenAI = !openai && process.env.OPENAI_API_KEY;
+// Check if we should use direct OpenAI for all requests
+const useDirectOpenAI = !openai && !!process.env.OPENAI_API_KEY;
```
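The type difference this comment describes can be seen directly. A small illustrative snippet (the variable names here are stand-ins, not the route's actual identifiers):

```typescript
// Without !!, the && expression's type leaks the raw env string through.
const openaiClient: object | null = null;     // pretend the Azure client is not configured
const apiKey: string | undefined = "sk-test"; // pretend process.env.OPENAI_API_KEY

const loose = !openaiClient && apiKey;    // type: string | false | undefined
const strict = !openaiClient && !!apiKey; // type: boolean

console.log(typeof loose);  // "string" - the key value itself leaked through
console.log(typeof strict); // "boolean"
```

A leaked string is still truthy, so the route works today, but anything comparing the flag with `=== true` or logging it would misbehave; the `!!` coercion removes that trap.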
87-87: Use `console.warn` or `console.error` for failures

Prefer `console.warn` for fallbacks or `console.error` for actual failures. This improves log signal.

```diff
-    console.log('Azure OpenAI failed, falling back to OpenAI directly:', azureError);
+    console.warn('Azure OpenAI failed, falling back to OpenAI directly:', azureError);
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- app/api/transliterate/route.ts (3 hunks)
- lib/openai.ts (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
app/api/transliterate/route.ts (1)
lib/openai.ts (3)
openai (19-21), createDirectOpenAI (23-30), createAzureOpenAI (3-17)
🔇 Additional comments (4)
lib/openai.ts (1)
19-21: LGTM: Conditional export prevents import-time crash without Azure credsThis directly addresses the startup error by avoiding eager client construction when only OPENAI_API_KEY is present.
app/api/transliterate/route.ts (3)
28-28: No actionable change (blank line).
53-57: LGTM: Output normalization is consistent for SHARIASOURCERemoving asterisks for italicization only for the SHARIASOURCE forward direction is clear and localized.
113-115: Sanity check: Azure-only branch when Azure is configured

This branch assumes Azure is available; given the earlier `useDirectOpenAI` gate, it won't execute when only `OPENAI_API_KEY` is set. This matches the PR objective.

If you want extra assurance, test these env combos:

- Only OPENAI_API_KEY: expect direct client path and no calls to createAzureOpenAI().
- Only Azure creds: expect Azure path.
- Both: prefer Azure (as implemented).
I can add a lightweight test harness or integration test to assert the chosen client per env configuration.
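A sketch of what such a harness could assert, with `selectClient` as a hypothetical stand-in for the route's gating logic (not code from the PR):

```typescript
// Hypothetical helper mirroring the route's client-selection gate.
function selectClient(env: Record<string, string | undefined>): "azure" | "direct" {
  const azureConfigured = !!env.AZURE_SECRET_KEY && !!env.AZURE_ENDPOINT_URL;
  if (azureConfigured) return "azure";     // both sets of creds: prefer Azure
  if (env.OPENAI_API_KEY) return "direct"; // only OPENAI_API_KEY: direct path
  throw new Error("No OpenAI credentials configured");
}

// The three env combos from the comment above:
console.log(selectClient({ OPENAI_API_KEY: "sk-test" }));                              // direct
console.log(selectClient({ AZURE_SECRET_KEY: "k", AZURE_ENDPOINT_URL: "https://x" })); // azure
console.log(selectClient({
  OPENAI_API_KEY: "sk-test",
  AZURE_SECRET_KEY: "k",
  AZURE_ENDPOINT_URL: "https://x"
}));                                                                                   // azure
```

Passing the env object as a parameter (rather than reading `process.env` inside) keeps each test case isolated without mutating global state.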
```typescript
const completion = await directClient.chat.completions.create({
  model,
  temperature: 0,
  messages: [
    { role: "system", content: systemPrompt },
    { role: "user", content: userPrompt }
  ],
  ...(style === TransliterationStyle.SHARIASOURCE && !reverse ? {
    response_format: { "type": "text" },
    max_completion_tokens: 2048,
    top_p: 1,
    frequency_penalty: 0,
    presence_penalty: 0
  } : {})
});
```
Fix: max_completion_tokens is not a valid Chat Completions parameter; use max_tokens
For the Chat Completions API, the correct parameter is max_tokens. Using max_completion_tokens may be ignored or cause errors depending on SDK/runtime.
```diff
 const completion = await directClient.chat.completions.create({
   model,
   temperature: 0,
   messages: [
     { role: "system", content: systemPrompt },
     { role: "user", content: userPrompt }
   ],
-  ...(style === TransliterationStyle.SHARIASOURCE && !reverse ? {
-    response_format: { "type": "text" },
-    max_completion_tokens: 2048,
+  ...(style === TransliterationStyle.SHARIASOURCE && !reverse ? {
+    // response_format is optional for Chat Completions; default is text.
+    max_tokens: 2048,
     top_p: 1,
     frequency_penalty: 0,
     presence_penalty: 0
   } : {})
 });
```

Optional: Since default output is text for Chat Completions, you can remove response_format entirely here.
🤖 Prompt for AI Agents
In app/api/transliterate/route.ts between lines 37 and 51, replace the invalid
parameter max_completion_tokens with max_tokens in the chat completions create
call. Additionally, you can remove the response_format property entirely since
the default output for Chat Completions is text. This will ensure the API call
uses the correct parameters and avoids potential errors.
```typescript
const azureClient = openai || createAzureOpenAI();
const completion = await azureClient.chat.completions.create({
```
Fix: max_completion_tokens should be max_tokens for Azure Chat Completions as well
Same parameter issue applies to the Azure call. Suggest removing response_format or keep only if required by your Azure deployment (it’s typically unnecessary for plain text).
```diff
-const azureClient = openai || createAzureOpenAI();
+const azureClient = openai || createAzureOpenAI();
 const completion = await azureClient.chat.completions.create({
   model: process.env.AZURE_4_1_DEPLOYMENT || "snapsolve-gpt4.1",
   temperature: 0,
   messages: [
     { role: "system", content: systemPrompt },
     { role: "user", content: userPrompt }
   ],
-  response_format: {
-    "type": "text"
-  },
-  max_completion_tokens: 2048,
+  // Default is text for Chat Completions; response_format can be omitted.
+  max_tokens: 2048,
   top_p: 1,
   frequency_penalty: 0,
   presence_penalty: 0
 });
```

If you intend to leverage the newer Responses API for GPT‑4.1, we can refactor this call path to client.responses.create instead. Happy to draft that if desired.
Also applies to: 68-75
🤖 Prompt for AI Agents
In app/api/transliterate/route.ts around lines 60-61 and 68-75, the Azure Chat
Completions call incorrectly uses the parameter max_completion_tokens instead of
max_tokens. Update the parameter name to max_tokens for Azure compatibility.
Also, remove the response_format parameter unless your Azure deployment
specifically requires it, as it is generally unnecessary for plain text
responses. If you want to use the newer Responses API for GPT-4.1, consider
refactoring the call to client.responses.create, but this is optional.
```typescript
export const createAzureOpenAI = () => {
  if (!process.env.AZURE_SECRET_KEY) {
    throw new Error('Missing AZURE_SECRET_KEY environment variable');
  }

  if (!process.env.AZURE_ENDPOINT_URL) {
    throw new Error('Missing AZURE_ENDPOINT_URL environment variable');
  }
```
💡 Verification agent
🧩 Analysis chain
Make Azure API version configurable (+ verify validity of '2024-10-21')
Hard-coding the apiVersion can cause breakage if Azure changes requirements. Prefer an env override with a sane default. Also, please double-check that '2024-10-21' is a valid Azure OpenAI API version for your deployments.
Apply this minimal change:
```diff
 return new AzureOpenAI({
   apiKey: process.env.AZURE_SECRET_KEY,
   endpoint: process.env.AZURE_ENDPOINT_URL,
-  apiVersion: '2024-10-21',
+  apiVersion: process.env.AZURE_OPENAI_API_VERSION ?? '2024-10-21',
 });
```

If helpful, I can follow up with a small doc PR to mention AZURE_OPENAI_API_VERSION support in README.
Optionally verify the latest stable API version:
Also applies to: 12-17
🌐 Web query:
What is the latest generally available Azure OpenAI API version that supports GPT‑4.1 chat/completions or the Responses API?
💡 Result:
The latest generally available Azure OpenAI API version that supports GPT‑4.1 chat/completions and the Responses API is 2024-10-21[1][2]. This API version supersedes the previous GA version 2024-06-01 and is currently the stable release for production use in Azure AI Foundry.
Key Details:
- API Version: 2024-10-21 (GA as of July 2025)[1][2]
- Supported Models: Includes GPT-4.1 (and its variants like mini and nano)[1][9].
- Supported APIs: Both Chat Completions and Responses API endpoints are supported in this version[1][2].
- How to Use for GPT-4.1:
  - For chat/completions, use `client.chat.completions.create` with `api_version="2024-10-21"` and your model deployment name (e.g., "gpt-4.1", "gpt-4o")[2].
  - For the Responses API, similar patterns apply using `client.responses.create`[1].
Code Example for Chat Completions:
```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-10-21",  # Latest GA
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
)

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the largest mammal?"}
    ]
)

print(response.model_dump_json(indent=2))
```
[2]
Code Example for Responses API:
```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-10-21",  # Latest GA
    azure_endpoint="https://YOUR-RESOURCE-NAME.openai.azure.com"
)

response = client.responses.create(
    model="gpt-4.1-nano",
    input="This is a test."
)

print(response.model_dump_json(indent=2))
```
[1]
Important Notes:
- The next generation "v1" API is still in preview as of August 2025. When this API becomes GA, traffic without a specified api-version will route to the latest GA version, but right now only "preview" is supported for v1[1].
- GPT-4.1 is available for deployment in Azure AI Foundry from April 2025, supporting large context windows and updated capabilities[9].
- For vision-enabled models, a different (preview) API version might be required (e.g., 2024-02-15-preview)[8].
Summary Table
| API Version | Status | Supports GPT-4.1 Chat/Completions/Responses API |
|---|---|---|
| 2024-10-21 | GA (production) | Yes |
| 2025-04-01-preview | Preview | Yes (newer features, not GA) |
| Next-gen v1 (preview) | Preview | Planned future GA, not yet released |
This means for stable, production Azure deployments of GPT‑4.1 with chat or Responses API, use API version 2024-10-21[1][2][9].
Citations:
- 1: https://learn.microsoft.com/en-us/azure/ai-foundry/openai/api-version-lifecycle
- 2: https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/chatgpt
- 3: https://docs.litellm.ai/docs/providers/azure/
- 4: https://learn.microsoft.com/en-us/azure/ai-foundry/openai/whats-new
- 5: https://community.openai.com/t/clarity-on-gpt-4-1-and-o4-mini-structured-output-support/1230973
- 6: https://openai.com/index/gpt-4-1/
- 7: https://learn.microsoft.com/en-us/answers/questions/5517125/azure-openai-realtime-api
- 8: https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/gpt-with-vision
- 9: https://futurework.blog/2025/04/15/gpt-4-1-aoai/
- 10: https://learn.microsoft.com/en-us/answers/questions/2244409/azure-openai-assistants-api-v1-will-be-retired-on
Parameterize Azure OpenAI API Version
Verified that 2024-10-21 is the current GA version supporting GPT-4.1 chat/completions and the Responses API. To guard against future API–version changes, let’s make it configurable via an environment variable:
• Update lib/openai.ts (around lines 12–13):
```diff
 return new AzureOpenAI({
   apiKey: process.env.AZURE_SECRET_KEY,
   endpoint: process.env.AZURE_ENDPOINT_URL,
-  apiVersion: '2024-10-21',
+  apiVersion: process.env.AZURE_OPENAI_API_VERSION ?? '2024-10-21',
 });
```

• Add validation for AZURE_OPENAI_API_VERSION alongside the existing env-var checks if desired.
• Update README.md to document the new AZURE_OPENAI_API_VERSION option and its default.
🤖 Prompt for AI Agents
In lib/openai.ts around lines 3 to 10, the Azure OpenAI API version is hardcoded
and should be parameterized. Add a check for the environment variable
AZURE_OPENAI_API_VERSION alongside the existing checks for AZURE_SECRET_KEY and
AZURE_ENDPOINT_URL, throwing an error if it is missing. Update the code to use
this environment variable for the API version, defaulting to "2024-10-21" if not
set. Also, update README.md to document the new AZURE_OPENAI_API_VERSION
environment variable and its default value.
Description
This PR fixes a startup error that occurs when only OPENAI_API_KEY is set in the environment variables, without Azure OpenAI credentials.

Problem
The application currently throws an error on module import when AZURE_SECRET_KEY and AZURE_ENDPOINT_URL are not set, even if the user only wants to use the standard OpenAI API with OPENAI_API_KEY. This prevents the application from starting at all.

Error: Missing AZURE_SECRET_KEY environment variable
Solution

- Refactored lib/openai.ts to only check for Azure credentials when actually creating the Azure OpenAI client
- Made the openai export conditional - it's null when Azure credentials are not available
- The route now checks whether openai is null and falls back to using the direct OpenAI client

Changes

- The application now works when only OPENAI_API_KEY is available

Testing
Tested with the following .env.local configurations:

- Only OPENAI_API_KEY set (no more startup errors, uses OpenAI API)

Impact

- The application can now run with just OPENAI_API_KEY

Summary by CodeRabbit