Description
The OpenRouter provider fails for certain models (GPT-4o, GPT-4o Mini, Grok, Gemini 2.0 Flash, etc.), returning the error code `OPENAI_ERROR` even though the provider in use is OpenRouter, not OpenAI directly. Other models like Claude and Gemini 2.5 Pro work fine, as does the `openrouter/auto` model.
Steps to Reproduce
- Configure the OpenRouter provider with a valid API key
- Select a model like `gpt-4o`, `gpt-4o-mini`, `grok-2`, or `gemini-2.0-flash-001` from the model picker
- Send any message
Expected Behavior
The request should be routed through OpenRouter to the selected model and return a valid response.
Actual Behavior
The request fails with errors like:
{"type":"error","message":"404 No endpoints found for x-ai/grok-2.","code":"OPENAI_ERROR"}
{"type":"error","message":"400 Provider returned error","code":"OPENAI_ERROR"}
- Working models: Claude 3.5 Sonnet, Gemini 2.5 Pro, `openrouter/auto`
- Failing models: GPT-4o, GPT-4o Mini, Grok-2, Gemini 2.0 Flash 001
Root Cause Analysis
After reading the source code, I found three issues:
Issue 1: Error code says `OPENAI_ERROR` because the provider internally uses the OpenAI SDK
In `packages/llm-sdk/src/providers/openrouter/provider.ts`, the OpenRouter provider creates an OpenAI client under the hood:
```ts
const { default: OpenAI } = await import("openai");
client = new OpenAI({
  apiKey,
  baseURL, // pointed at openrouter.ai
  defaultHeaders: headers,
});
```
All errors from OpenRouter get wrapped in OpenAI's error format, which is why the error code shows `OPENAI_ERROR` instead of something OpenRouter-specific.
Issue 2: Hardcoded `OPENROUTER_MODELS` map has outdated model IDs
The `OPENROUTER_MODELS` map in both `index.ts` and `provider.ts` contains a hardcoded list of only ~20 models, while OpenRouter supports 500+. Models like Grok are completely missing from this map, and several existing entries have outdated IDs:
| Model in SDK | Current OpenRouter ID |
| --- | --- |
| `google/gemini-pro-1.5` | `google/gemini-1.5-pro` (renamed) |
| `google/gemini-flash-1.5` | `google/gemini-1.5-flash` (renamed) |
| `google/gemini-2.0-flash-exp` | `google/gemini-2.0-flash-001` (graduated from exp) |
| Missing entirely | `x-ai/grok-2`, `x-ai/grok-3` |
| Missing entirely | `google/gemini-2.5-pro` |
Issue 3: `supportedModels` only lists hardcoded models
In `index.ts`, the provider exposes only the hardcoded keys as supported:
```ts
return createCallableProvider(providerFn, {
  name: "openrouter",
  supportedModels: Object.keys(OPENROUTER_MODELS), // only ~20 models
  getCapabilities,
});
```
If the UI validates against this list, any model not hardcoded will fail.
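A minimal sketch of that failure mode, assuming the UI does a membership check against `supportedModels` (the `validateModel` helper below is hypothetical, not the SDK's actual internals):

```ts
// Hypothetical validation step: any model absent from the hardcoded list
// is rejected before a request ever reaches OpenRouter.
function validateModel(modelId: string, supportedModels: string[]): void {
  if (!supportedModels.includes(modelId)) {
    // "x-ai/grok-2" is not among the ~20 hardcoded keys, so it fails here
    throw new Error(`Unsupported model: ${modelId}`);
  }
}
```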
Suggested Fix
Replace the hardcoded `OPENROUTER_MODELS` map with a dynamic fetch from OpenRouter's models API:
```ts
// Fetch the live model catalog from OpenRouter's models API.
// OpenRouterModelConfig is the SDK's existing per-model config shape.
async function fetchOpenRouterModels(
  apiKey: string
): Promise<Record<string, OpenRouterModelConfig>> {
  const response = await fetch("https://openrouter.ai/api/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!response.ok) {
    throw new Error(`OpenRouter models API returned ${response.status}`);
  }
  const data = await response.json();
  const models: Record<string, OpenRouterModelConfig> = {};
  for (const model of data.data) {
    models[model.id] = {
      // modality strings look like "text+image->text"
      vision: model.architecture?.modality?.includes("image") ?? false,
      tools: model.supported_parameters?.includes("tools") ?? true,
      jsonMode:
        model.supported_parameters?.includes("response_format") ?? true,
      maxTokens: model.context_length ?? 128000,
    };
  }
  return models;
}
```
This way the SDK always stays in sync with what OpenRouter actually supports, and no model ID goes stale.
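A sketch of how the provider factory could consume this, reusing `providerFn` and `getCapabilities` from the snippet above; the module-level cache is an assumption, not existing SDK behavior:

```ts
// Cache the fetched catalog so the models API is hit at most once per process.
let cachedModels: Record<string, OpenRouterModelConfig> | undefined;

async function getOpenRouterModels(apiKey: string) {
  cachedModels ??= await fetchOpenRouterModels(apiKey);
  return cachedModels;
}

async function createOpenRouterProvider(apiKey: string) {
  const models = await getOpenRouterModels(apiKey);
  return createCallableProvider(providerFn, {
    name: "openrouter",
    supportedModels: Object.keys(models), // full live catalog instead of ~20 keys
    getCapabilities,
  });
}
```

Note that this makes provider creation async, which may require changes at call sites.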
As a short-term fix, the hardcoded model IDs should at least be updated to match current OpenRouter IDs, and missing popular models (Grok, Gemini 2.5, etc.) should be added.
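For illustration, the corrections from the table above would look like this (capability flags are omitted since I have not verified them per model):

```ts
// Short-term patch: corrected and newly added model IDs, per the table above.
// Each entry would keep the existing map's config shape.
const UPDATED_OPENROUTER_MODEL_IDS = [
  "google/gemini-1.5-pro",       // was: google/gemini-pro-1.5
  "google/gemini-1.5-flash",     // was: google/gemini-flash-1.5
  "google/gemini-2.0-flash-001", // was: google/gemini-2.0-flash-exp
  "google/gemini-2.5-pro",       // previously missing
  "x-ai/grok-2",                 // previously missing
  "x-ai/grok-3",                 // previously missing
];
```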
The error codes from the internal OpenAI client should also be caught and re-mapped to something like `OPENROUTER_ERROR`, so users are not misled about which provider actually failed.
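A minimal sketch of that re-mapping, assuming the provider wraps each call to the OpenAI SDK (the `withOpenRouterErrors` helper and `OPENROUTER_ERROR` code are suggestions, not existing SDK names):

```ts
import OpenAI from "openai";

// Wrap provider calls so OpenAI SDK errors are re-tagged at the boundary.
async function withOpenRouterErrors<T>(fn: () => Promise<T>): Promise<T> {
  try {
    return await fn();
  } catch (err) {
    if (err instanceof OpenAI.APIError) {
      // Preserve status and message, but attribute the error to OpenRouter.
      throw Object.assign(new Error(`${err.status} ${err.message}`), {
        code: "OPENROUTER_ERROR",
        cause: err,
      });
    }
    throw err;
  }
}
```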
Code Example
```ts
import { openrouter, streamText } from "@yourgpt/llm-sdk";

// This works
const result1 = await streamText({
  model: openrouter("anthropic/claude-3.5-sonnet"),
  prompt: "Hello!",
});

// This fails with OPENAI_ERROR
const result2 = await streamText({
  model: openrouter("x-ai/grok-2"),
  prompt: "Hello!",
});
```
Environment
- SDK Version: latest
- Framework: Next.js
- Node Version: 18+
Screenshots / Logs
{"type":"error","message":"404 No endpoints found for x-ai/grok-2.","code":"OPENAI_ERROR"}
{"type":"error","message":"400 Provider returned error","code":"OPENAI_ERROR"}