Summary
Several gaps in Azure OpenAI support prevent users from fully utilizing the latest Azure OpenAI models (o3, o4-mini, codex-mini, gpt-4.1-nano, gpt-5-pro) and cause parameter mismatches for reasoning models.
Issues Found
1. Missing Models in azureChatOpenAI model list
The following GA models available on Azure OpenAI are missing from models.json:
- o3 (GA 2025-04-16) — full reasoning model
- o3-pro (GA 2025-06-10) — high-compute reasoning
- o4-mini (GA 2025-04-16) — compact reasoning model
- codex-mini (GA 2025-05-16) — code-focused reasoning
- gpt-4.1-nano (GA 2025-04-14) — ultra-low-cost model
- gpt-5-pro (GA 2025-10-06) — high-compute GPT-5
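For illustration, the missing entries might look like the following. The shape is assumed from typical models.json entries (label/name pairs under the node's model list); the actual file's fields may differ:

```json
{
  "name": "azureChatOpenAI",
  "models": [
    { "label": "o3", "name": "o3" },
    { "label": "o3-pro", "name": "o3-pro" },
    { "label": "o4-mini", "name": "o4-mini" },
    { "label": "codex-mini", "name": "codex-mini" },
    { "label": "gpt-4.1-nano", "name": "gpt-4.1-nano" },
    { "label": "gpt-5-pro", "name": "gpt-5-pro" }
  ]
}
```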
2. Reasoning Model Detection Incomplete
Current detection in both AzureChatOpenAI.ts and ChatOpenAI.ts:
if (modelName.includes('o1') || modelName.includes('o3') || modelName.includes('gpt-5'))
Problems:
- Misses o4-mini and codex-mini (both are reasoning models)
- False positive on gpt-5.2-chat and gpt-5-chat-latest (non-reasoning chat variants)
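A minimal sketch of broader detection that covers the missed models and excludes the chat variants (the helper name is hypothetical; substring matching mirrors the existing check):

```typescript
// Hypothetical replacement for the existing substring check.
// Assumes Azure OpenAI naming conventions: reasoning families are
// o1/o3/o4/codex-mini/gpt-5, while "chat" variants are non-reasoning.
const isReasoningModel = (modelName: string): boolean => {
  // gpt-5.2-chat and gpt-5-chat-latest are chat models, not reasoning models
  if (modelName.includes('chat')) return false
  return (
    modelName.includes('o1') ||
    modelName.includes('o3') ||
    modelName.includes('o4') ||
    modelName.includes('codex-mini') ||
    modelName.includes('gpt-5')
  )
}
```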
3. maxCompletionTokens Conversion Missing in ChatOpenAI
AzureChatOpenAI.ts correctly converts maxTokens → maxCompletionTokens for reasoning models, but ChatOpenAI.ts does not. This causes API errors when using reasoning models (o1, o3, o4-mini, gpt-5+) with a max tokens value set, since these models reject the max_tokens parameter.
Related: #5804
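The conversion itself can be sketched as follows. Field names are assumptions modeled on the LangChain fields mentioned above, not the actual ChatOpenAI.ts implementation:

```typescript
// Hypothetical sketch: move maxTokens into maxCompletionTokens for
// reasoning models, which reject the max_tokens request parameter.
interface ModelFields {
  modelName: string
  maxTokens?: number
  maxCompletionTokens?: number
}

const normalizeTokenParams = (fields: ModelFields): ModelFields => {
  const reasoning =
    !fields.modelName.includes('chat') &&
    ['o1', 'o3', 'o4', 'codex-mini', 'gpt-5'].some((p) => fields.modelName.includes(p))
  if (reasoning && fields.maxTokens !== undefined) {
    return { ...fields, maxTokens: undefined, maxCompletionTokens: fields.maxTokens }
  }
  return fields
}
```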
4. Stale API Version Placeholder
The credential field azureOpenAIApiVersion shows 2023-06-01-preview as placeholder text, a version deprecated over two years ago. It should show 2024-10-21 (the current GA version).
The documentation link also points to the deprecated cognitive-services path instead of the current ai-foundry path.
5. No Custom Model Name Input
Azure deployment names are user-defined and may not match model names. The model dropdown only allows selection from the predefined list, with no option to type a custom name.
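One possible fix, sketched against Flowise's node parameter convention: expose the model field as an autocomplete that also accepts free text. The freeSolo flag is an assumption about what the UI input supports; the exact mechanism may differ:

```typescript
// Hypothetical node parameter allowing a custom Azure deployment name.
// Shape follows Flowise's INodeParams convention; freeSolo is assumed.
const modelNameParam = {
  label: 'Model Name',
  name: 'modelName',
  type: 'asyncOptions',
  loadMethod: 'listModels',
  freeSolo: true, // let users type a deployment name not in the list
  description: 'Select a known model or type your Azure deployment name'
}
```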
Environment
- Flowise v3.0.13
- @langchain/openai v0.6.3
- openai v4.96.0
Expected Behavior
- All GA Azure OpenAI models available in the dropdown
- Reasoning model detection covers o4-mini, codex-mini, and excludes chat variants
- maxCompletionTokens used consistently for all reasoning models
- Current GA API version shown as placeholder
- Users can type custom deployment names