feat: add MiniMax provider support #3581
Conversation
- Add MiniMax chat model provider with OpenAI-compatible API
- Support MiniMax-M2.5 and MiniMax-M2.5-highspeed models (204K context)
- Add MiniMaxIcon to icons component
- Register provider in types, registry, utils, and models
- Clamp temperature to (0, 1] range per MiniMax API constraints
- Add unit tests for provider metadata and request execution
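The temperature clamp described above could look roughly like the sketch below. This is illustrative, not the PR's actual code: the function name `clampTemperature` and the `0.01` epsilon are assumptions; only the `(0, 1]` constraint (zero rejected) comes from the PR description.

```typescript
// Clamp a user-supplied temperature into MiniMax's accepted (0, 1] range.
// The MiniMax API rejects 0, so non-positive values are nudged to a small
// epsilon. Function name and epsilon value are illustrative assumptions.
function clampTemperature(temperature: number | undefined): number | undefined {
  if (temperature === undefined) return undefined
  if (temperature <= 0) return 0.01 // zero is rejected by the MiniMax API
  return Math.min(temperature, 1) // cap at the upper bound of (0, 1]
}
```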
PR Summary: Medium Risk. Written by Cursor Bugbot for commit d5d4e40. This will update automatically on new commits.
Greptile Summary: This PR adds MiniMax as a new OpenAI-compatible LLM provider, following the established pattern used by Groq, DeepSeek, Cerebras, and others. The registration, icon, model definitions, and streaming plumbing are all consistent with existing providers. Two logic bugs in the tool-calling path are flagged in the inline comments below.
Confidence Score: 2/5
Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant Executor
    participant MiniMaxProvider
    participant OpenAI_Client as OpenAI Client (MiniMax base URL)
    participant ToolExecutor
    Executor->>MiniMaxProvider: executeRequest(request)
    MiniMaxProvider->>OpenAI_Client: chat.completions.create(payload)
    OpenAI_Client-->>MiniMaxProvider: response (may include tool_calls)
    alt Streaming with no tools
        MiniMaxProvider-->>Executor: StreamingExecution (early return)
    else Non-streaming or tools present
        loop Tool iteration (≤ MAX_TOOL_ITERATIONS)
            MiniMaxProvider->>ToolExecutor: executeTool(name, params) [parallel]
            ToolExecutor-->>MiniMaxProvider: tool results
            MiniMaxProvider->>OpenAI_Client: chat.completions.create(next payload)
            OpenAI_Client-->>MiniMaxProvider: next response
        end
        alt Streaming requested (post-tool)
            MiniMaxProvider->>OpenAI_Client: chat.completions.create(stream=true)
            OpenAI_Client-->>MiniMaxProvider: stream
            MiniMaxProvider-->>Executor: StreamingExecution
        else Non-streaming
            MiniMaxProvider-->>Executor: ProviderResponse
        end
    end
```
Last reviewed commit: d5d4e40
```ts
} catch (error) {
  logger.error('Error in MiniMax request:', { error })
}
```
Silent error swallowing breaks error propagation
The inner catch block catches all errors that occur during the tool-calling loop — including network failures on subsequent model calls (e.g., line 387 minimax.chat.completions.create) — but only logs them and lets execution fall through. The caller receives a ProviderResponse with partial/empty toolCalls and no indication that an error occurred, making failures invisible to the executor.
This should re-throw (or wrap in ProviderError) so the outer catch can convert it to a proper ProviderError with timing data:
```diff
 } catch (error) {
   logger.error('Error in MiniMax request:', { error })
+  throw error
 }
```
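If wrapping is preferred over a bare re-throw, a sketch of what that could look like follows. The `ProviderError` class and `wrapProviderError` helper shown here are hypothetical stand-ins, not the repository's actual types; only the idea of attaching timing data and converting to a `ProviderError` comes from the review comment.

```typescript
// Hypothetical wrapper: preserves the original error while attaching timing
// data so the outer catch can surface it to the executor. The ProviderError
// class here is an illustrative stand-in, not the repository's actual type.
class ProviderError extends Error {
  constructor(
    message: string,
    readonly original: unknown,
    readonly durationMs: number
  ) {
    super(message)
    this.name = 'ProviderError'
  }
}

function wrapProviderError(error: unknown, startTime: number): ProviderError {
  if (error instanceof ProviderError) return error // avoid double-wrapping
  const message = error instanceof Error ? error.message : String(error)
  return new ProviderError(`MiniMax request failed: ${message}`, error, Date.now() - startTime)
}
```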
```ts
const toolArgs = JSON.parse(toolCall.function.arguments)
const tool = request.tools?.find((t) => t.id === toolName)

if (!tool) return null
```
Orphaned tool_calls message when tool is not found
When the model hallucinates a tool name (not present in request.tools), this function returns null. However, the assistant message is unconditionally pushed to currentMessages (lines 303–313) with all tool_calls entries. Any tool call returning null here will not produce a corresponding tool role message.
The MiniMax API (like other OpenAI-compatible APIs) requires every element in tool_calls to have a matching tool role message with the same tool_call_id. Sending mismatched entries will cause an API error on the very next minimax.chat.completions.create call.
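For illustration, the pairing the API expects looks like this (message shapes follow the OpenAI chat-completions format; the id `call_abc123` and tool name `get_weather` are made up):

```typescript
// Every tool_calls entry on the assistant message must be answered by a
// tool-role message carrying the same tool_call_id before the next
// chat.completions.create call. Ids and tool names here are illustrative.
const assistantMessage = {
  role: 'assistant' as const,
  content: null,
  tool_calls: [
    {
      id: 'call_abc123',
      type: 'function' as const,
      function: { name: 'get_weather', arguments: '{"city":"Oslo"}' },
    },
  ],
}

const toolMessage = {
  role: 'tool' as const,
  tool_call_id: 'call_abc123', // must match the id above
  content: JSON.stringify({ error: 'Tool "get_weather" not found' }),
}

// Dropping toolMessage while keeping assistantMessage.tool_calls[0]
// is exactly the mismatch that triggers an API error.
const messages = [assistantMessage, toolMessage]
```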
A simple fix is to add an error result for unrecognised tool calls rather than returning null:
```diff
 const toolArgs = JSON.parse(toolCall.function.arguments)
 const tool = request.tools?.find((t) => t.id === toolName)
-if (!tool) return null
+if (!tool) {
+  const toolCallEndTime = Date.now()
+  return {
+    toolCall,
+    toolName,
+    toolParams: {},
+    result: {
+      success: false,
+      output: undefined,
+      error: `Tool "${toolName}" not found`,
+    },
+    startTime: toolCallStartTime,
+    endTime: toolCallEndTime,
+    duration: toolCallEndTime - toolCallStartTime,
+  }
+}
```
```ts
{
  id: 'MiniMax-M2.5-highspeed',
  pricing: {
    input: 0.6,
    cachedInput: 0.03,
    output: 2.4,
    updatedAt: '2025-06-01',
  },
  capabilities: {
```
Verify MiniMax-M2.5-highspeed pricing
MiniMax-M2.5-highspeed is priced at 2× the cost of the base model (input: $0.60/M vs $0.30/M). This is the opposite of what "highspeed" variants typically imply (usually a cheaper, faster version). Please double-check the MiniMax pricing page to confirm the values are not swapped before merging.
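A quick sanity check of the difference, using the $0.60/M highspeed input price from the diff and the $0.30/M base-model figure quoted in this comment (both values should themselves be verified against the MiniMax pricing page; the helper below is illustrative):

```typescript
// Cost in USD for a request, given per-million-token prices.
// Prices are taken from the PR diff and this review comment; verify them
// against the official MiniMax pricing page before relying on them.
function requestCostUsd(
  inputTokens: number,
  outputTokens: number,
  inputPerM: number,
  outputPerM: number
): number {
  return (inputTokens / 1_000_000) * inputPerM + (outputTokens / 1_000_000) * outputPerM
}

// Input-side comparison for a 100K-token prompt: base vs highspeed.
const baseInput = requestCostUsd(100_000, 0, 0.3, 0) // base model, per this comment
const highspeedInput = requestCostUsd(100_000, 0, 0.6, 0) // highspeed, per the diff
```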
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.
```tsx
<title>MiniMax</title>
<rect width='120' height='120' rx='24' fill='#1A1A2E' />
<path
  d='M30 80V40l15 20 15-20v40M70 40v40M80 40l10 20 10-20v40'
```
SVG icon path draws incomplete second "M" character
Low Severity
The third sub-path M80 40l10 20 10-20v40 in the MiniMaxIcon SVG is missing its left vertical stroke. It traces (80,40)→(90,60)→(100,40)→(100,80), rendering as a V-shape with a trailing line — not an "M". Compare with the first sub-path which correctly starts from the bottom M30 80V40l15 20 15-20v40, drawing the upward left stroke before the V-shape. The third sub-path needs to start at the bottom (e.g. M80 80V40l10 20 10-20v40) to draw a matching "M".
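For concreteness, a sketch of the corrected `<path>` using the suggested third sub-path (only the `d` attribute changes; other attributes of the element are omitted here):

```tsx
{/* Third sub-path now starts at the bottom (80,80), draws the left vertical
    stroke up to (80,40), then the V-shape, mirroring the first "M". */}
<path d='M30 80V40l15 20 15-20v40M70 40v40M80 80V40l10 20 10-20v40' />
```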


Summary
Add MiniMax as a new LLM provider, enabling users to access MiniMax's chat models through the platform.
Changes
- `apps/sim/providers/minimax/` — OpenAI-compatible provider using `https://api.minimax.io/v1`
- `MiniMax-M2.5` (default) and `MiniMax-M2.5-highspeed`, both with a 204K context window
- `MiniMaxIcon` added to `components/icons.tsx`
- `minimax` added to the `ProviderId` type, provider registry, provider metadata, and model definitions
- Temperature clamped to `(0.0, 1.0]` per MiniMax API constraints (zero is rejected)
- Streaming via the `createOpenAICompatibleStream` utility

Model Pricing
API Documentation