1 change: 1 addition & 0 deletions build.ts
@@ -59,6 +59,7 @@ const result = await Bun.build({
splitting: true,
define: getMacroDefines(),
features,
external: ['openai'],
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Check if 'openai' is in dependencies or devDependencies in package.json

echo "=== Checking openai package location ==="
jq '.dependencies // {} | has("openai")' package.json
jq '.devDependencies // {} | has("openai")' package.json

echo "=== Full dependencies section ==="
jq '.dependencies' package.json

echo "=== Check for openai in devDependencies ==="
jq '.devDependencies.openai // "not found"' package.json

Repository: claude-code-best/claude-code

🏁 Script executed:

#!/bin/bash
# Check if this is a CLI tool or library, and how openai is actually used

echo "=== Check package.json bin and main fields ==="
jq '{name, type, main, bin, exports}' package.json

echo "=== Search for openai imports/requires in source code ==="
rg "openai" --type ts --type tsx -l src/

echo "=== Check specific openai usage patterns ==="
rg "import.*openai|require.*openai" --type ts --type tsx -A 2 src/


🏁 Script executed:

#!/bin/bash
# Search for openai usage without problematic file type flags
echo "=== Files containing 'openai' ==="
rg "openai" src/ -l

echo "=== Openai usage patterns ==="
rg "openai" src/ -A 2

echo "=== Check build.ts context around line 62 ==="
sed -n '55,70p' build.ts


Move openai from devDependencies to dependencies.

Marking openai as external correctly prevents Bun from bundling it, which resolves the private class field initialization issue. However, openai is currently in devDependencies and is required at runtime by the bundled CLI.

When users install this package via npm, openai will not be installed (devDependencies are excluded), causing runtime errors when they use the OpenAI or Grok provider:

  • src/services/api/openai/client.ts dynamically imports openai
  • src/services/api/grok/client.ts uses openai for the Grok provider
  • Both are invoked at runtime when the user selects these providers

Move openai to dependencies to ensure it's installed alongside the CLI.
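The move can be scripted rather than hand-edited; a minimal sketch with jq, shown here against a throwaway package.json (the `^4.52.0` version spec is a placeholder — the filter copies whatever range the real file pins, it never hardcodes one):

```shell
# Demonstration on a throwaway package.json; in the repo you would run the
# jq step against the real file. The version spec below is a placeholder.
printf '%s' '{"dependencies":{},"devDependencies":{"openai":"^4.52.0"}}' > package.json

# Copy openai's version spec from devDependencies into dependencies,
# then drop the devDependencies entry (jq cannot edit in place,
# hence the temp file).
jq '.dependencies.openai = .devDependencies.openai
    | del(.devDependencies.openai)' package.json > package.json.tmp \
  && mv package.json.tmp package.json
```

`npm pkg set`/`npm pkg delete` would also work, but requires restating the version; jq preserves whatever range is already pinned.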

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@build.ts` at line 62: move the openai package from devDependencies to dependencies in package.json so it is installed at runtime. Add the same openai version currently listed under "devDependencies" to "dependencies" and remove it from "devDependencies", so that the dynamic import in src/services/api/openai/client.ts and the usage in src/services/api/grok/client.ts resolve at runtime, and the bundled CLI can import openai without missing-module errors.

})

if (!result.success) {
6 changes: 6 additions & 0 deletions src/commands/login/login.tsx
@@ -22,6 +22,9 @@ import {
resetAutoModeGateCheck,
resetBypassPermissionsCheck,
} from '../../utils/permissions/bypassPermissionsKillswitch.js'
import { applyConfigEnvironmentVariables } from '../../utils/managedEnv.js'
import { resetModelStrings } from '../../utils/model/modelStrings.js'
import { resetSettingsCache } from '../../utils/settings/settingsCache.js'
import { resetUserCache } from '../../utils/user.js'

export async function call(
@@ -40,6 +43,9 @@ export async function call(
// Reset cost state when switching accounts
resetCostState()
// Refresh remotely managed settings after login (non-blocking)
resetSettingsCache()
applyConfigEnvironmentVariables()
resetModelStrings()
void refreshRemoteManagedSettings()
// Refresh policy limits after login (non-blocking)
void refreshPolicyLimits()
9 changes: 5 additions & 4 deletions src/services/api/grok/client.ts
@@ -1,4 +1,4 @@
import OpenAI from 'openai'
import type OpenAI from 'openai'
import { getProxyFetchOptions } from 'src/utils/proxy.js'

/**
@@ -12,17 +12,18 @@ const DEFAULT_BASE_URL = 'https://api.x.ai/v1'

let cachedClient: OpenAI | null = null

export function getGrokClient(options?: {
export async function getGrokClient(options?: {
maxRetries?: number
fetchOverride?: typeof fetch
source?: string
}): OpenAI {
}): Promise<OpenAI> {
Comment on lines +15 to +19

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Find all usages of getGrokClient to verify they're awaited

echo "=== All getGrokClient calls in test files ==="
rg -n "getGrokClient\(\)" --type ts -g '*test*' -g '*spec*'

echo "=== All getGrokClient calls in source files ==="
rg -n "getGrokClient\(" --type ts -g '!*test*' -g '!*spec*'


🏁 Script executed:

cat -n src/services/api/grok/__tests__/client.test.ts | head -60


🏁 Script executed:

cat -n src/services/api/grok/client.ts | head -40


Tests need to be updated to handle the async signature.

The async conversion of getGrokClient() is correct for resolving the TDZ issue, but the test file calls it synchronously without await. All 6 calls in src/services/api/grok/__tests__/client.test.ts (lines 19, 27, 32, 33, 38, 41) return Promise<OpenAI> instead of the client instance, causing assertions like client.baseURL to fail.

Update the tests to be async functions and await all getGrokClient() calls.
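The required change is mechanical; a self-contained sketch of the sync-to-async pattern (illustrative names, not the repo's actual test file or client):

```typescript
// A cached async factory shaped like the new getGrokClient: the first
// call does async work (standing in for `await import('openai')`),
// later calls return the cached instance.
type Client = { baseURL: string }

let cached: Client | null = null

async function getClient(): Promise<Client> {
  if (cached) return cached
  await Promise.resolve() // placeholder for the dynamic import
  cached = { baseURL: 'https://api.x.ai/v1' }
  return cached
}

// Before the fix, `const client = getClient()` binds a Promise, so
// `client.baseURL` is undefined. The caller must be async and await:
async function exerciseClient(): Promise<string> {
  const client = await getClient()
  return client.baseURL
}
```

Each failing test body gets the same treatment: mark the test callback `async`, prefix every `getGrokClient()` with `await`, and assert against the resolved instance.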

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/grok/client.ts` around lines 15 - 19: update the tests that call getGrokClient so they await its async result. Make the test functions async and add await before each getGrokClient() invocation in client.test.ts (every place where a client is assigned from getGrokClient), then run assertions such as client.baseURL against the resolved client instance. Ensure all six calls that currently receive Promise<OpenAI> are awaited so assertions operate on the actual OpenAI instance.

if (cachedClient) return cachedClient

const { default: OpenAIClass } = await import('openai')
const apiKey = process.env.GROK_API_KEY || process.env.XAI_API_KEY || ''
const baseURL = process.env.GROK_BASE_URL || DEFAULT_BASE_URL

const client = new OpenAI({
const client = new OpenAIClass({
apiKey,
baseURL,
maxRetries: options?.maxRetries ?? 0,
5 changes: 3 additions & 2 deletions src/services/api/grok/index.ts
@@ -65,7 +65,7 @@ export async function* queryModelGrok(
const openaiTools = anthropicToolsToOpenAI(standardTools)
const openaiToolChoice = anthropicToolChoiceToOpenAI(options.toolChoice)

const client = getGrokClient({
const client = await getGrokClient({
maxRetries: 0,
fetchOverride: options.fetchOverride as typeof fetch | undefined,
source: options.querySource,
@@ -187,7 +187,8 @@ export async function* queryModelGrok(
}
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error)
logForDebugging(`[Grok] Error: ${errorMessage}`, { level: 'error' })
const stack = error instanceof Error ? `\n${error.stack}` : ''
logForDebugging(`[Grok] Error: ${errorMessage}${stack}`, { level: 'error' })
yield createAssistantAPIErrorMessage({
content: `API Error: ${errorMessage}`,
apiError: 'api_error',
44 changes: 24 additions & 20 deletions src/services/api/grok/modelMapping.ts
@@ -5,27 +5,31 @@
* or override the entire mapping via GROK_MODEL_MAP env var (JSON string):
* GROK_MODEL_MAP='{"opus":"grok-4","sonnet":"grok-3","haiku":"grok-3-mini-fast"}'
*/
const DEFAULT_MODEL_MAP: Record<string, string> = {
'claude-sonnet-4-20250514': 'grok-3-mini-fast',
'claude-sonnet-4-5-20250929': 'grok-3-mini-fast',
'claude-sonnet-4-6': 'grok-3-mini-fast',
'claude-opus-4-20250514': 'grok-4.20-reasoning',
'claude-opus-4-1-20250805': 'grok-4.20-reasoning',
'claude-opus-4-5-20251101': 'grok-4.20-reasoning',
'claude-opus-4-6': 'grok-4.20-reasoning',
'claude-haiku-4-5-20251001': 'grok-3-mini-fast',
'claude-3-5-haiku-20241022': 'grok-3-mini-fast',
'claude-3-7-sonnet-20250219': 'grok-3-mini-fast',
'claude-3-5-sonnet-20241022': 'grok-3-mini-fast',
function getDefaultModelMap(): Record<string, string> {
return {
'claude-sonnet-4-20250514': 'grok-3-mini-fast',
'claude-sonnet-4-5-20250929': 'grok-3-mini-fast',
'claude-sonnet-4-6': 'grok-3-mini-fast',
'claude-opus-4-20250514': 'grok-4.20-reasoning',
'claude-opus-4-1-20250805': 'grok-4.20-reasoning',
'claude-opus-4-5-20251101': 'grok-4.20-reasoning',
'claude-opus-4-6': 'grok-4.20-reasoning',
'claude-haiku-4-5-20251001': 'grok-3-mini-fast',
'claude-3-5-haiku-20241022': 'grok-3-mini-fast',
'claude-3-7-sonnet-20250219': 'grok-3-mini-fast',
'claude-3-5-sonnet-20241022': 'grok-3-mini-fast',
}
}

/**
* Family-level mapping defaults (used by GROK_MODEL_MAP).
*/
const DEFAULT_FAMILY_MAP: Record<string, string> = {
opus: 'grok-4.20-reasoning',
sonnet: 'grok-3-mini-fast',
haiku: 'grok-3-mini-fast',
function getDefaultFamilyMap(): Record<string, string> {
return {
opus: 'grok-4.20-reasoning',
sonnet: 'grok-3-mini-fast',
haiku: 'grok-3-mini-fast',
}
}

function getModelFamily(model: string): 'haiku' | 'sonnet' | 'opus' | null {
@@ -93,13 +97,13 @@ export function resolveGrokModel(anthropicModel: string): string {
}

// 5. Exact model name lookup
if (DEFAULT_MODEL_MAP[cleanModel]) {
return DEFAULT_MODEL_MAP[cleanModel]
if (getDefaultModelMap()[cleanModel]) {
return getDefaultModelMap()[cleanModel]
}

// 6. Family-level default
if (family && DEFAULT_FAMILY_MAP[family]) {
return DEFAULT_FAMILY_MAP[family]
if (family && getDefaultFamilyMap()[family]) {
return getDefaultFamilyMap()[family]
}

// 7. Pass through
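The constants-to-functions rewrite in this hunk defers map construction until first lookup (presumably to dodge the module-evaluation-order problem that motivated making openai external), at the cost of re-allocating the object on every call. A memoized variant (a sketch, not code from this PR) keeps the lazy initialization while allocating once:

```typescript
// Lazily built, memoized mapping table: construction is deferred past
// module evaluation, but the object is only allocated on first lookup
// and reused afterwards.
let familyMap: Record<string, string> | null = null

function getDefaultFamilyMap(): Record<string, string> {
  if (familyMap === null) {
    familyMap = {
      opus: 'grok-4.20-reasoning',
      sonnet: 'grok-3-mini-fast',
      haiku: 'grok-3-mini-fast',
    }
  }
  return familyMap
}
```

With this shape, resolveGrokModel's lookups are allocation-free after the first call; the same pattern applies to getDefaultModelMap.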
9 changes: 5 additions & 4 deletions src/services/api/openai/client.ts
@@ -1,4 +1,4 @@
import OpenAI from 'openai'
import type OpenAI from 'openai'
import { getProxyFetchOptions } from 'src/utils/proxy.js'
import { isEnvTruthy } from 'src/utils/envUtils.js'

@@ -13,17 +13,18 @@ import { isEnvTruthy } from 'src/utils/envUtils.js'

let cachedClient: OpenAI | null = null

export function getOpenAIClient(options?: {
export async function getOpenAIClient(options?: {
maxRetries?: number
fetchOverride?: typeof fetch
source?: string
}): OpenAI {
}): Promise<OpenAI> {
if (cachedClient) return cachedClient

const { default: OpenAIClass } = await import('openai')
Comment on lines +16 to +23

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify current guard state and call-path usage.
rg -nP --type=ts -C3 '\bcachedClient\b|\bcachedClientPromise\b|getOpenAIClient\s*\(' \
  src/services/api/openai/client.ts src/services/api/openai/index.ts


🏁 Script executed:

#!/bin/bash
# Get the full getOpenAIClient function and surrounding context
cat -n src/services/api/openai/client.ts | head -50


🏁 Script executed:

#!/bin/bash
# Check if there are any tests for the client
find . -type f -name "*test*" -o -name "*spec*" | grep -i "openai.*client" | head -20


🏁 Script executed:

#!/bin/bash
# Check how getOpenAIClient is used across the codebase
rg -l "getOpenAIClient" --type=ts


🏁 Script executed:

#!/bin/bash
# Check the test file for concurrent/race condition tests
cat -n src/services/api/openai/__tests__/queryModelOpenAI.test.ts | head -100


🏁 Script executed:

#!/bin/bash
# Check context around line 293 in index.ts to understand fetchOverride usage
sed -n '280,310p' src/services/api/openai/index.ts


🏁 Script executed:

#!/bin/bash
# Check if there are other concurrent calls to getOpenAIClient
rg -B2 -A2 "getOpenAIClient" src/services/api/openai/index.ts | head -50


🏁 Script executed:

#!/bin/bash
# Verify the Promise type is correctly used and check for any TypeScript compatibility
rg -A5 "Promise\<OpenAI\>" src/services/api/openai/client.ts


🏁 Script executed:

#!/bin/bash
# Check if there are any concurrency tests or Promise-based tests in the test file
grep -n "Promise\|concurrent\|race\|parallel" src/services/api/openai/__tests__/queryModelOpenAI.test.ts


🏁 Script executed:

#!/bin/bash
# Check for any Promise types in the file
grep -n "Promise" src/services/api/openai/client.ts


🏁 Script executed:

#!/bin/bash
# Verify the fix would work by checking the buildClient arrow function pattern is valid TypeScript
# Look for similar async arrow function patterns in the codebase
rg "const .+ = async \(\)" --type=ts | head -5


Add in-flight promise guard to prevent duplicate client construction during concurrent async initialization.

The current implementation checks cachedClient at line 21, but the await import('openai') at line 23 creates a window where concurrent calls bypass the cache check and each constructs a separate OpenAI instance. This violates the caching guarantee.

Implement in-flight promise deduplication: track a cachedClientPromise and return it if a build is already in progress, ensuring concurrent calls share the same initialization rather than duplicating work.

🔧 Suggested fix
 let cachedClient: OpenAI | null = null
+let cachedClientPromise: Promise<OpenAI> | null = null

 export async function getOpenAIClient(options?: {
   maxRetries?: number
   fetchOverride?: typeof fetch
   source?: string
 }): Promise<OpenAI> {
-  if (cachedClient) return cachedClient
-
-  const { default: OpenAIClass } = await import('openai')
-  const apiKey = process.env.OPENAI_API_KEY || ''
-  const baseURL = process.env.OPENAI_BASE_URL
-
-  const client = new OpenAIClass({
-    apiKey,
-    ...(baseURL && { baseURL }),
-    maxRetries: options?.maxRetries ?? 0,
-    timeout: parseInt(process.env.API_TIMEOUT_MS || String(600 * 1000), 10),
-    dangerouslyAllowBrowser: true,
-    ...(process.env.OPENAI_ORG_ID && { organization: process.env.OPENAI_ORG_ID }),
-    ...(process.env.OPENAI_PROJECT_ID && { project: process.env.OPENAI_PROJECT_ID }),
-    fetchOptions: getProxyFetchOptions({ forAnthropicAPI: false }),
-    ...(options?.fetchOverride && { fetch: options.fetchOverride }),
-  })
-
-  if (!options?.fetchOverride) {
-    cachedClient = client
-  }
-
-  return client
+  if (!options?.fetchOverride) {
+    if (cachedClient) return cachedClient
+    if (cachedClientPromise) return cachedClientPromise
+  }
+
+  const buildClient = async (): Promise<OpenAI> => {
+    const { default: OpenAIClass } = await import('openai')
+    const apiKey = process.env.OPENAI_API_KEY || ''
+    const baseURL = process.env.OPENAI_BASE_URL
+
+    return new OpenAIClass({
+      apiKey,
+      ...(baseURL && { baseURL }),
+      maxRetries: options?.maxRetries ?? 0,
+      timeout: parseInt(process.env.API_TIMEOUT_MS || String(600 * 1000), 10),
+      dangerouslyAllowBrowser: true,
+      ...(process.env.OPENAI_ORG_ID && { organization: process.env.OPENAI_ORG_ID }),
+      ...(process.env.OPENAI_PROJECT_ID && { project: process.env.OPENAI_PROJECT_ID }),
+      fetchOptions: getProxyFetchOptions({ forAnthropicAPI: false }),
+      ...(options?.fetchOverride && { fetch: options.fetchOverride }),
+    })
+  }
+
+  if (options?.fetchOverride) return buildClient()
+
+  cachedClientPromise = buildClient()
+  try {
+    cachedClient = await cachedClientPromise
+    return cachedClient
+  } finally {
+    cachedClientPromise = null
+  }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/openai/client.ts` around lines 16 - 23: getOpenAIClient can race on concurrent calls because it checks cachedClient before an await and may construct multiple OpenAI instances. Add an in-flight promise guard with a module-scoped cachedClientPromise: when getOpenAIClient is called, return cachedClient if it exists; otherwise, if cachedClientPromise exists, return it; otherwise set cachedClientPromise to an async initializer that imports the OpenAI class, constructs the client, assigns it to cachedClient, and clears cachedClientPromise on success or failure, then return the created client. Concurrent calls then share the same initialization and receive the same OpenAI instance.

const apiKey = process.env.OPENAI_API_KEY || ''
const baseURL = process.env.OPENAI_BASE_URL

const client = new OpenAI({
const client = new OpenAIClass({
apiKey,
...(baseURL && { baseURL }),
maxRetries: options?.maxRetries ?? 0,
5 changes: 3 additions & 2 deletions src/services/api/openai/index.ts
@@ -290,7 +290,7 @@ export async function* queryModelOpenAI(
const maxTokens = options.maxOutputTokensOverride ?? upperLimit

// 11. Get client
const client = getOpenAIClient({
const client = await getOpenAIClient({
maxRetries: 0,
fetchOverride: options.fetchOverride as unknown as typeof fetch,
source: options.querySource,
@@ -432,7 +432,8 @@ export async function* queryModelOpenAI(
}
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error)
logForDebugging(`[OpenAI] Error: ${errorMessage}`, { level: 'error' })
const stack = error instanceof Error ? `\n${error.stack}` : ''
logForDebugging(`[OpenAI] Error: ${errorMessage}${stack}`, { level: 'error' })
yield createAssistantAPIErrorMessage({
content: `API Error: ${errorMessage}`,
apiError: 'api_error',
28 changes: 15 additions & 13 deletions src/services/api/openai/modelMapping.ts
@@ -2,18 +2,20 @@
* Default mapping from Anthropic model names to OpenAI model names.
* Used only when ANTHROPIC_DEFAULT_*_MODEL env vars are not set.
*/
const DEFAULT_MODEL_MAP: Record<string, string> = {
'claude-sonnet-4-20250514': 'gpt-4o',
'claude-sonnet-4-5-20250929': 'gpt-4o',
'claude-sonnet-4-6': 'gpt-4o',
'claude-opus-4-20250514': 'o3',
'claude-opus-4-1-20250805': 'o3',
'claude-opus-4-5-20251101': 'o3',
'claude-opus-4-6': 'o3',
'claude-haiku-4-5-20251001': 'gpt-4o-mini',
'claude-3-5-haiku-20241022': 'gpt-4o-mini',
'claude-3-7-sonnet-20250219': 'gpt-4o',
'claude-3-5-sonnet-20241022': 'gpt-4o',
function getDefaultModelMap(): Record<string, string> {
return {
'claude-sonnet-4-20250514': 'gpt-4o',
'claude-sonnet-4-5-20250929': 'gpt-4o',
'claude-sonnet-4-6': 'gpt-4o',
'claude-opus-4-20250514': 'o3',
'claude-opus-4-1-20250805': 'o3',
'claude-opus-4-5-20251101': 'o3',
'claude-opus-4-6': 'o3',
'claude-haiku-4-5-20251001': 'gpt-4o-mini',
'claude-3-5-haiku-20241022': 'gpt-4o-mini',
'claude-3-7-sonnet-20250219': 'gpt-4o',
'claude-3-5-sonnet-20241022': 'gpt-4o',
}
}

/**
@@ -59,5 +61,5 @@ export function resolveOpenAIModel(anthropicModel: string): string {
if (anthropicOverride) return anthropicOverride
}

return DEFAULT_MODEL_MAP[cleanModel] ?? cleanModel
return getDefaultModelMap()?.[cleanModel] ?? cleanModel
}
8 changes: 7 additions & 1 deletion src/utils/model/modelStrings.ts
@@ -143,7 +143,13 @@ export function getModelStrings(): ModelStrings {
}
return applyModelOverrides(ms)
}

/**
* Reset the modelStrings cache so it re-initializes with the current provider on next access.
* Call this after switching providers (e.g. after /login).
*/
export function resetModelStrings(): void {
setModelStringsState(null as unknown as ModelStrings)
}
/**
* Ensure model strings are fully initialized.
* For Bedrock users, this waits for the profile fetch to complete.