Conversation


@rsmn rsmn commented Aug 9, 2025

Description

This PR fixes a startup error that occurs when only OPENAI_API_KEY is set in the environment variables, without Azure OpenAI credentials.

Problem

The application currently throws an error on module import when AZURE_SECRET_KEY and AZURE_ENDPOINT_URL are not set, even if the user only wants
to use the standard OpenAI API with OPENAI_API_KEY. This prevents the application from starting at all.

Error: Missing AZURE_SECRET_KEY environment variable

Solution

  • Modified lib/openai.ts to only check for Azure credentials when actually creating the Azure OpenAI client
  • Made the openai export conditional - it's null when Azure credentials are not available
  • Updated the API route to handle the case where openai is null and fall back to using the direct OpenAI client
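
The conditional-export change can be sketched as follows. This is a minimal, hypothetical version of lib/openai.ts: the real module constructs an AzureOpenAI client from the openai SDK, which is represented here as a plain config object so the control flow stands alone.

```typescript
// Sketch of the lib/openai.ts change (hypothetical shapes; the real module
// returns an AzureOpenAI client from the `openai` SDK, not a config object).
interface AzureClientConfig {
  apiKey: string;
  endpoint: string;
  apiVersion: string;
}

// Factory function: credential validation now happens at call time,
// not at module import, so importing the module can no longer throw.
function createAzureOpenAI(): AzureClientConfig {
  if (!process.env.AZURE_SECRET_KEY) {
    throw new Error('Missing AZURE_SECRET_KEY environment variable');
  }
  if (!process.env.AZURE_ENDPOINT_URL) {
    throw new Error('Missing AZURE_ENDPOINT_URL environment variable');
  }
  return {
    apiKey: process.env.AZURE_SECRET_KEY,
    endpoint: process.env.AZURE_ENDPOINT_URL,
    apiVersion: '2024-10-21',
  };
}

// Conditional export (exported in the real module): null when Azure
// credentials are absent, so startup succeeds with only OPENAI_API_KEY set.
const openai =
  process.env.AZURE_SECRET_KEY && process.env.AZURE_ENDPOINT_URL
    ? createAzureOpenAI()
    : null;
```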

Changes

  • lib/openai.ts: Moved Azure credential validation inside a factory function and made the export conditional
  • app/api/transliterate/route.ts: Added handling for when only OPENAI_API_KEY is available

Testing

Tested with the following .env.local configurations:

  • Only OPENAI_API_KEY set (no more startup errors, uses OpenAI API)
  • All Azure credentials set (works as before)
  • Both Azure and OpenAI credentials set (works as before, uses Azure)
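
The three configurations above reduce to a small provider-selection rule, sketched here as a pure helper (the function name and shape are illustrative, not the PR's actual code):

```typescript
// Illustrative helper mirroring the env-var combinations tested above.
function selectProvider(
  env: Record<string, string | undefined>,
): 'azure' | 'direct' | null {
  const hasAzure = !!env.AZURE_SECRET_KEY && !!env.AZURE_ENDPOINT_URL;
  if (hasAzure) return 'azure';        // Azure wins when both providers are configured
  if (env.OPENAI_API_KEY) return 'direct'; // only OPENAI_API_KEY: direct OpenAI
  return null;                         // no provider configured
}
```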

Impact

  • Fixes startup error for users who only have OPENAI_API_KEY
  • No breaking changes for existing Azure OpenAI users
  • Makes the application more flexible in its choice of API provider

Summary by CodeRabbit

  • New Features
    • Enhanced transliteration requests to support both direct OpenAI and Azure OpenAI, with automatic fallback to direct OpenAI for specific styles if Azure is unavailable.
  • Refactor
    • Improved the initialization and configuration of OpenAI clients, ensuring more robust environment validation and error handling.

…re not available

  - Refactor openai.ts to conditionally initialize Azure OpenAI only when both AZURE_SECRET_KEY and AZURE_ENDPOINT_URL are present
  - Add createAzureOpenAI() factory function for lazy initialization
  - Update transliterate route to use direct OpenAI API when Azure is not configured
  - Maintain backward compatibility for existing Azure OpenAI users

  This allows the application to work with just OPENAI_API_KEY environment variable, making it easier for contributors and users who don't have Azure
  OpenAI access.
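
The fallback behavior described in these bullets can be sketched as a small async dispatcher. The callbacks and names here are hypothetical stand-ins for the route's actual chat.completions.create calls:

```typescript
// Sketch of the route's client selection with fallback (illustrative names;
// the real route calls the openai SDK's chat.completions.create).
type Provider = 'azure' | 'direct';

async function transliterate(
  callAzure: () => Promise<string>,
  callDirect: () => Promise<string>,
  azureConfigured: boolean,
): Promise<{ provider: Provider; result: string }> {
  if (!azureConfigured) {
    // Only OPENAI_API_KEY is set: go straight to the direct client.
    return { provider: 'direct', result: await callDirect() };
  }
  try {
    return { provider: 'azure', result: await callAzure() };
  } catch (err) {
    // Azure is configured but the request failed: fall back to direct OpenAI.
    console.warn('Azure OpenAI failed, falling back to OpenAI directly:', err);
    return { provider: 'direct', result: await callDirect() };
  }
}
```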

vercel bot commented Aug 9, 2025

@rsmn is attempting to deploy a commit to the DigitalSeem's projects Team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai bot commented Aug 9, 2025

Walkthrough

The changes introduce a new function for Azure OpenAI client creation and reorganize the transliteration API to support three scenarios: direct OpenAI use, Azure OpenAI with fallback for a specific style, and Azure OpenAI for all other cases. Imports and control flow are updated to accommodate these scenarios, with improved environment validation and error handling.

Changes

| Cohort / File(s) | Change Summary |
| --- | --- |
| **Transliteration API Control Flow & Client Selection**<br>`app/api/transliterate/route.ts` | Refactored transliteration logic to support three OpenAI usage modes: direct OpenAI, Azure OpenAI with fallback for a specific style, and Azure OpenAI for all other cases. Updated imports, control flow, and error handling. |
| **Azure OpenAI Client Initialization**<br>`lib/openai.ts` | Introduced `createAzureOpenAI()` function for encapsulated Azure client creation with environment validation. Changed the `openai` export to be conditionally initialized using this function, or set to `null` if environment variables are missing. Exported the new function. No changes to other exports. |

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant API (route.ts)
    participant AzureOpenAI
    participant DirectOpenAI

    Client->>API (route.ts): POST transliteration request
    alt Azure client not configured and OpenAI API key present
        API (route.ts)->>DirectOpenAI: Make request (model based on style)
        DirectOpenAI-->>API (route.ts): Return result
    else SHARIASOURCE style (non-reverse)
        API (route.ts)->>AzureOpenAI: Try request
        alt AzureOpenAI succeeds
            AzureOpenAI-->>API (route.ts): Return result
        else AzureOpenAI fails
            API (route.ts)->>DirectOpenAI: Fallback request
            DirectOpenAI-->>API (route.ts): Return result
        end
    else Other styles
        API (route.ts)->>AzureOpenAI: Make request
        AzureOpenAI-->>API (route.ts): Return result
    end
    API (route.ts)-->>Client: Return transliteration response

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

In the warren where code does hop,
Azure and OpenAI now swap and swap.
With checks for keys and fallback grace,
Transliterations run at a rabbit’s pace.
If one path fails, another will try—
Clever bunnies never say die!
🐇✨



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🔭 Outside diff range comments (1)
app/api/transliterate/route.ts (1)

91-106: Fix (Direct fallback): max_completion_tokens → max_tokens

Same Chat Completions parameter correction as above.

        const completion = await directClient.chat.completions.create({
          model: "gpt-4.1",
          temperature: 0,
          messages: [
            { role: "system", content: systemPrompt },
            { role: "user", content: userPrompt }
          ],
-          response_format: {
-            "type": "text"
-          },
-          max_completion_tokens: 2048,
+          // Default output is text, so response_format is unnecessary.
+          max_tokens: 2048,
           top_p: 1,
           frequency_penalty: 0,
           presence_penalty: 0
        });
🧹 Nitpick comments (4)
lib/openai.ts (1)

3-17: Naming nit: Consider exporting as azureOpenAI for clarity

The exported constant named openai actually refers to an Azure client (or null). Consider renaming to azureOpenAI to avoid confusion alongside createDirectOpenAI.

app/api/transliterate/route.ts (3)

2-2: Nit: Alias the Azure client import for clarity

To avoid confusion between “OpenAI” (direct) and Azure, consider aliasing:

-import { openai, createDirectOpenAI, createAzureOpenAI } from '@/lib/openai';
+import { openai as azureOpenAI, createDirectOpenAI, createAzureOpenAI } from '@/lib/openai';

Then update usages accordingly. Purely a readability improvement.


29-36: Coerce useDirectOpenAI to boolean to avoid accidental truthy/falsy bugs

process.env.OPENAI_API_KEY is a string | undefined; as written, useDirectOpenAI becomes string | false | undefined. Coerce to boolean.

-// Check if we should use direct OpenAI for all requests
-const useDirectOpenAI = !openai && process.env.OPENAI_API_KEY;
+// Check if we should use direct OpenAI for all requests
+const useDirectOpenAI = !openai && !!process.env.OPENAI_API_KEY;

87-87: Use console.warn or console.error for failures

Prefer console.warn for fallbacks or console.error for actual failures. This improves log signal.

-        console.log('Azure OpenAI failed, falling back to OpenAI directly:', azureError);
+        console.warn('Azure OpenAI failed, falling back to OpenAI directly:', azureError);
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 76351a8 and 8fb5830.

📒 Files selected for processing (2)
  • app/api/transliterate/route.ts (3 hunks)
  • lib/openai.ts (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
app/api/transliterate/route.ts (1)
lib/openai.ts (3)
  • openai (19-21)
  • createDirectOpenAI (23-30)
  • createAzureOpenAI (3-17)
🔇 Additional comments (4)
lib/openai.ts (1)

19-21: LGTM: Conditional export prevents import-time crash without Azure creds

This directly addresses the startup error by avoiding eager client construction when only OPENAI_API_KEY is present.

app/api/transliterate/route.ts (3)

28-28: No actionable change (blank line).


53-57: LGTM: Output normalization is consistent for SHARIASOURCE

Removing asterisks for italicization only for the SHARIASOURCE forward direction is clear and localized.


113-115: Sanity check: Azure-only branch when Azure is configured

This branch assumes Azure is available; given the earlier useDirectOpenAI gate, it won’t execute when only OPENAI_API_KEY is set. This matches the PR objective.

If you want extra assurance, test these env combos:

  • Only OPENAI_API_KEY: expect direct client path and no calls to createAzureOpenAI().
  • Only Azure creds: expect Azure path.
  • Both: prefer Azure (as implemented).

I can add a lightweight test harness or integration test to assert the chosen client per env configuration.

Comment on lines +37 to +51
const completion = await directClient.chat.completions.create({
model,
temperature: 0,
messages: [
{ role: "system", content: systemPrompt },
{ role: "user", content: userPrompt }
],
...(style === TransliterationStyle.SHARIASOURCE && !reverse ? {
response_format: { "type": "text" },
max_completion_tokens: 2048,
top_p: 1,
frequency_penalty: 0,
presence_penalty: 0
} : {})
});

⚠️ Potential issue

Fix: max_completion_tokens is not a valid Chat Completions parameter; use max_tokens

For the Chat Completions API, the correct parameter is max_tokens. Using max_completion_tokens may be ignored or cause errors depending on SDK/runtime.

       const completion = await directClient.chat.completions.create({
         model,
         temperature: 0,
         messages: [
           { role: "system", content: systemPrompt },
           { role: "user", content: userPrompt }
         ],
-        ...(style === TransliterationStyle.SHARIASOURCE && !reverse ? {
-          response_format: { "type": "text" },
-          max_completion_tokens: 2048,
+        ...(style === TransliterationStyle.SHARIASOURCE && !reverse ? {
+          // response_format is optional for Chat Completions; default is text.
+          max_tokens: 2048,
           top_p: 1,
           frequency_penalty: 0,
           presence_penalty: 0
         } : {})
       });

Optional: Since default output is text for Chat Completions, you can remove response_format entirely here.


Comment on lines +60 to +61
const azureClient = openai || createAzureOpenAI();
const completion = await azureClient.chat.completions.create({

⚠️ Potential issue

Fix: max_completion_tokens should be max_tokens for Azure Chat Completions as well

Same parameter issue applies to the Azure call. Suggest removing response_format or keep only if required by your Azure deployment (it’s typically unnecessary for plain text).

-        const azureClient = openai || createAzureOpenAI();
+        const azureClient = openai || createAzureOpenAI();
         const completion = await azureClient.chat.completions.create({
           model: process.env.AZURE_4_1_DEPLOYMENT || "snapsolve-gpt4.1",
           temperature: 0,
           messages: [
             { role: "system", content: systemPrompt },
             { role: "user", content: userPrompt }
           ],
-          response_format: {
-            "type": "text"
-          },
-          max_completion_tokens: 2048,
+          // Default is text for Chat Completions; response_format can be omitted.
+          max_tokens: 2048,
           top_p: 1,
           frequency_penalty: 0,
           presence_penalty: 0
         });

If you intend to leverage the newer Responses API for GPT‑4.1, we can refactor this call path to client.responses.create instead. Happy to draft that if desired.

Also applies to: 68-75


Comment on lines +3 to +10
export const createAzureOpenAI = () => {
if (!process.env.AZURE_SECRET_KEY) {
throw new Error('Missing AZURE_SECRET_KEY environment variable');
}

if (!process.env.AZURE_ENDPOINT_URL) {
throw new Error('Missing AZURE_ENDPOINT_URL environment variable');
}

💡 Verification agent

🧩 Analysis chain

Make Azure API version configurable (+ verify validity of '2024-10-21')

Hard-coding the apiVersion can cause breakage if Azure changes requirements. Prefer an env override with a sane default. Also, please double-check that '2024-10-21' is a valid Azure OpenAI API version for your deployments.

Apply this minimal change:

   return new AzureOpenAI({
     apiKey: process.env.AZURE_SECRET_KEY,
     endpoint: process.env.AZURE_ENDPOINT_URL,
-    apiVersion: '2024-10-21',
+    apiVersion: process.env.AZURE_OPENAI_API_VERSION ?? '2024-10-21',
   });

If helpful, I can follow up with a small doc PR to mention AZURE_OPENAI_API_VERSION support in README.

Optionally verify the latest stable API version:

Also applies to: 12-17


🌐 Web query:

What is the latest generally available Azure OpenAI API version that supports GPT‑4.1 chat/completions or the Responses API?

💡 Result:

The latest generally available Azure OpenAI API version that supports GPT‑4.1 chat/completions and the Responses API is 2024-10-21[1][2]. This API version supersedes the previous GA version 2024-06-01 and is currently the stable release for production use in Azure AI Foundry.

Key Details:

  • API Version: 2024-10-21 (GA as of July 2025)[1][2]
  • Supported Models: Includes GPT-4.1 (and its variants like mini and nano)[1][9].
  • Supported APIs: Both Chat Completions and Responses API endpoints are supported in this version[1][2].
  • How to Use for GPT-4.1:
    • For chat/completions, use client.chat.completions.create with api_version="2024-10-21" and your model deployment name (e.g., "gpt-4.1", "gpt-4o")[2].
    • For the Responses API, similar patterns apply using client.responses.create[1].

Code Example for Chat Completions:

import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-10-21",  # Latest GA
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
)

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the largest mammal?"}
    ]
)
print(response.model_dump_json(indent=2))

[2]

Code Example for Responses API:

import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-10-21",  # Latest GA
    azure_endpoint="https://YOUR-RESOURCE-NAME.openai.azure.com"
)

response = client.responses.create(
    model="gpt-4.1-nano",
    input="This is a test."
)
print(response.model_dump_json(indent=2))

[1]

Important Notes:

  • The next generation "v1" API is still in preview as of August 2025. When this API becomes GA, traffic without a specified api-version will route to the latest GA version, but right now only "preview" is supported for v1[1].
  • GPT-4.1 is available for deployment in Azure AI Foundry from April 2025, supporting large context windows and updated capabilities[9].
  • For vision-enabled models, a different (preview) API version might be required (e.g., 2024-02-15-preview)[8].

Summary Table

| API Version | Status | Supports GPT‑4.1 Chat/Completions/Responses API |
| --- | --- | --- |
| 2024-10-21 | GA (production) | Yes |
| 2025-04-01-preview | Preview | Yes (newer features, not GA) |
| Next-gen v1 (preview) | Preview | Planned future GA, not yet released |

This means for stable, production Azure deployments of GPT‑4.1 with chat or Responses API, use API version 2024-10-21[1][2][9].


Parameterize Azure OpenAI API Version

Verified that 2024-10-21 is the current GA version supporting GPT-4.1 chat/completions and the Responses API. To guard against future API–version changes, let’s make it configurable via an environment variable:

• Update lib/openai.ts (around lines 12–13):

   return new AzureOpenAI({
     apiKey: process.env.AZURE_SECRET_KEY,
     endpoint: process.env.AZURE_ENDPOINT_URL,
-      apiVersion: '2024-10-21',
+      apiVersion: process.env.AZURE_OPENAI_API_VERSION ?? '2024-10-21',
   });

• Add validation for AZURE_OPENAI_API_VERSION alongside the existing env‐var checks if desired.
• Update README.md to document the new AZURE_OPENAI_API_VERSION option and its default.

