
feat: add MiniMax provider support#3581

Open
octo-patch wants to merge 1 commit into simstudioai:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax as a new LLM provider, enabling users to access MiniMax's chat models through the platform.

Changes

  • New provider: apps/sim/providers/minimax/ — OpenAI-compatible provider using https://api.minimax.io/v1
  • Models: MiniMax-M2.5 (default) and MiniMax-M2.5-highspeed, both with 204K context window
  • Icon: Added MiniMaxIcon to components/icons.tsx
  • Registration: Added minimax to ProviderId type, provider registry, provider metadata, and model definitions
  • Temperature handling: Clamped to (0.0, 1.0] per MiniMax API constraints (zero is rejected)
  • Streaming: Full streaming support via shared createOpenAICompatibleStream utility
  • Tool calling: Full tool usage support with forced tool cycling
  • Unit tests: 10 tests covering provider metadata, request execution, temperature clamping, and token usage
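The temperature rule above can be sketched as a small helper. This is illustrative only, not the PR's actual code; the minimum of 0.01 is an assumption (any epsilon above zero satisfies the "(0.0, 1.0], zero rejected" constraint):

```typescript
// Clamp a requested temperature into the half-open interval (0, 1],
// since the MiniMax API rejects temperature === 0.
// MIN_TEMPERATURE is a hypothetical epsilon, not a documented API value.
const MIN_TEMPERATURE = 0.01
const MAX_TEMPERATURE = 1.0

function clampTemperature(requested: number | undefined): number | undefined {
  if (requested === undefined) return undefined // let the API apply its default
  return Math.min(MAX_TEMPERATURE, Math.max(MIN_TEMPERATURE, requested))
}
```

For example, a requested temperature of 0 becomes 0.01 and 1.5 becomes 1.0, while in-range values pass through unchanged.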

Model Pricing

| Model | Input | Output | Cached Input |
|---|---|---|---|
| MiniMax-M2.5 | $0.30/M tokens | $1.20/M tokens | $0.03/M tokens |
| MiniMax-M2.5-highspeed | $0.60/M tokens | $2.40/M tokens | $0.03/M tokens |
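As a rough illustration of how this table translates into per-request cost, the sketch below estimates a USD total from token counts. The names (`estimateCostUSD`, `MINIMAX_PRICING`) are hypothetical, not the repo's actual accounting code:

```typescript
// Prices are USD per million tokens, copied from the table above.
interface ModelPricing {
  input: number
  cachedInput: number
  output: number
}

const MINIMAX_PRICING: Record<string, ModelPricing> = {
  'MiniMax-M2.5': { input: 0.3, cachedInput: 0.03, output: 1.2 },
  'MiniMax-M2.5-highspeed': { input: 0.6, cachedInput: 0.03, output: 2.4 },
}

// Cached prompt tokens are billed at the cheaper cachedInput rate;
// the remainder of the prompt is billed at the full input rate.
function estimateCostUSD(
  model: string,
  promptTokens: number,
  completionTokens: number,
  cachedTokens = 0
): number {
  const p = MINIMAX_PRICING[model]
  const uncachedPrompt = promptTokens - cachedTokens
  return (
    (uncachedPrompt * p.input + cachedTokens * p.cachedInput + completionTokens * p.output) /
    1_000_000
  )
}
```

So one million uncached input tokens on MiniMax-M2.5 costs roughly $0.30, matching the table's first row.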

API Documentation

- Add MiniMax chat model provider with OpenAI-compatible API
- Support MiniMax-M2.5 and MiniMax-M2.5-highspeed models (204K context)
- Add MiniMaxIcon to icons component
- Register provider in types, registry, utils, and models
- Clamp temperature to (0, 1] range per MiniMax API constraints
- Add unit tests for provider metadata and request execution
@cursor

cursor bot commented Mar 14, 2026

PR Summary

Medium Risk
Adds a new OpenAI-compatible provider implementation with streaming and tool-calling loops, which can affect request execution flow and token/cost accounting. Risk is moderate since it’s largely additive but touches core provider registry/model definitions used across requests.

Overview
Adds MiniMax as a first-class LLM provider using the OpenAI-compatible API at https://api.minimax.io/v1, including streaming support via the shared createOpenAICompatibleStream helper.

Registers minimax across provider types/registry/metadata and introduces MiniMax model definitions (default MiniMax-M2.5, plus MiniMax-M2.5-highspeed) with pricing, context window, and temperature range enforcement (clamped to (0, 1]). Also adds a MiniMaxIcon and unit tests covering basic request behavior, temperature clamping, and token usage.

Written by Cursor Bugbot for commit d5d4e40. This will update automatically on new commits.

@vercel

vercel bot commented Mar 14, 2026

The latest updates on your projects.

1 Skipped Deployment
| Project | Deployment | Actions | Updated (UTC) |
|---|---|---|---|
| docs | Skipped | Skipped | Mar 14, 2026 3:51pm |


@greptile-apps
Contributor

greptile-apps bot commented Mar 14, 2026

Greptile Summary

This PR adds MiniMax as a new OpenAI-compatible LLM provider, following the established pattern used by Groq, DeepSeek, Cerebras, and others. The registration, icon, model definitions, and streaming plumbing are all consistent with existing providers. Two logic bugs in the tool-calling path of apps/sim/providers/minimax/index.ts need to be addressed before merging, and one pricing value should be verified:

  • Silent error swallowing in the tool loop: The inner try/catch (lines 245–438) logs errors but does not re-throw them. Any failure during a mid-loop model call (e.g., network timeout) silently returns partial results to the executor with no error signal.
  • Orphaned tool_calls entries for unrecognised tools: When the model calls a tool name not present in request.tools, the promise resolves to null and no tool-result message is added to currentMessages. However, the assistant message with the corresponding tool_calls entry has already been pushed. The subsequent API call will fail because every tool_calls entry must be matched by a tool role message with the same tool_call_id.
  • The MiniMax-M2.5-highspeed model pricing ($0.60/M input, $2.40/M output) is 2× more expensive than the base model, which is unusual for a "highspeed" variant — the values should be verified against MiniMax's official pricing documentation.

Confidence Score: 2/5

  • Not safe to merge until the two tool-loop logic bugs are resolved — they will cause silent failures and API errors for any workflow using tool calling with MiniMax.
  • The registration, streaming, and non-tool paths are correctly implemented. However, the inner try-catch silently swallows errors in the tool iteration loop, and unrecognised tool calls produce orphaned assistant messages that will break the API conversation. Both are in the critical hot path for agentic workflows.
  • apps/sim/providers/minimax/index.ts requires the most attention — specifically the inner catch block (line 436) and the null-return path for unknown tools (line 266).

Important Files Changed

| Filename | Overview |
|---|---|
| apps/sim/providers/minimax/index.ts | Core provider implementation with two logic bugs: silent error swallowing in the tool loop (errors are caught but not re-thrown, returning silent partial results) and orphaned tool_call messages when a tool is not found (will cause API errors on the subsequent model call). |
| apps/sim/providers/minimax/utils.ts | Thin wrapper around the shared createOpenAICompatibleStream utility — correct and minimal. |
| apps/sim/providers/minimax/index.test.ts | 10 unit tests covering metadata, base URL, content return, temperature clamping, system prompt, and token usage. No coverage for tool-calling paths (orphaned tool calls, error propagation, forced-tool cycling). |
| apps/sim/providers/models.ts | Adds minimax provider definition with two models, correct context windows, and temperature caps; MiniMax-M2.5-highspeed pricing is 2× the base model — worth confirming against official docs. |
| apps/sim/providers/types.ts | Adds 'minimax' to the ProviderId union type — straightforward and correct. |
| apps/sim/providers/registry.ts | Registers minimaxProvider in the provider registry — correct and consistent with other provider registrations. |
| apps/sim/providers/utils.ts | Adds minimax: buildProviderMetadata('minimax') to the client-safe provider metadata map — correct and consistent. |
| apps/sim/components/icons.tsx | Adds MiniMaxIcon as a custom SVG icon — valid SVG structure, consistent with other provider icons in the file. |

Sequence Diagram

```mermaid
sequenceDiagram
    participant Executor
    participant MiniMaxProvider
    participant OpenAI_Client as OpenAI Client (MiniMax base URL)
    participant ToolExecutor

    Executor->>MiniMaxProvider: executeRequest(request)
    MiniMaxProvider->>OpenAI_Client: chat.completions.create(payload)
    OpenAI_Client-->>MiniMaxProvider: response (may include tool_calls)

    alt Streaming with no tools
        MiniMaxProvider-->>Executor: StreamingExecution (early return)
    else Non-streaming or tools present
        loop Tool iteration (≤ MAX_TOOL_ITERATIONS)
            MiniMaxProvider->>ToolExecutor: executeTool(name, params) [parallel]
            ToolExecutor-->>MiniMaxProvider: tool results
            MiniMaxProvider->>OpenAI_Client: chat.completions.create(next payload)
            OpenAI_Client-->>MiniMaxProvider: next response
        end
        alt Streaming requested (post-tool)
            MiniMaxProvider->>OpenAI_Client: chat.completions.create(stream=true)
            OpenAI_Client-->>MiniMaxProvider: stream
            MiniMaxProvider-->>Executor: StreamingExecution
        else Non-streaming
            MiniMaxProvider-->>Executor: ProviderResponse
        end
    end
```

Last reviewed commit: d5d4e40

Comment on lines +436 to +438

```typescript
} catch (error) {
  logger.error('Error in MiniMax request:', { error })
}
```

Silent error swallowing breaks error propagation

The inner catch block catches all errors that occur during the tool-calling loop — including network failures on subsequent model calls (e.g., line 387 minimax.chat.completions.create) — but only logs them and lets execution fall through. The caller receives a ProviderResponse with partial/empty toolCalls and no indication that an error occurred, making failures invisible to the executor.

This should re-throw (or wrap in ProviderError) so the outer catch can convert it to a proper ProviderError with timing data:

Suggested change:

```diff
 } catch (error) {
   logger.error('Error in MiniMax request:', { error })
+  throw error
 }
```

Comment on lines +263 to +267

```typescript
const toolArgs = JSON.parse(toolCall.function.arguments)
const tool = request.tools?.find((t) => t.id === toolName)

if (!tool) return null
```

Orphaned tool_calls message when tool is not found

When the model hallucinates a tool name (not present in request.tools), this function returns null. However, the assistant message is unconditionally pushed to currentMessages (lines 303–313) with all tool_calls entries. Any tool call returning null here will not produce a corresponding tool role message.

The MiniMax API (like other OpenAI-compatible APIs) requires every element in tool_calls to have a matching tool role message with the same tool_call_id. Sending mismatched entries will cause an API error on the very next minimax.chat.completions.create call.

A simple fix is to add an error result for unrecognised tool calls rather than returning null:

Suggested change:

```diff
 const toolArgs = JSON.parse(toolCall.function.arguments)
 const tool = request.tools?.find((t) => t.id === toolName)

-if (!tool) return null
+if (!tool) {
+  const toolCallEndTime = Date.now()
+  return {
+    toolCall,
+    toolName,
+    toolParams: {},
+    result: {
+      success: false,
+      output: undefined,
+      error: `Tool "${toolName}" not found`,
+    },
+    startTime: toolCallStartTime,
+    endTime: toolCallEndTime,
+    duration: toolCallEndTime - toolCallStartTime,
+  }
+}
```
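The invariant this review comment relies on can be sketched as a small checker: every `tool_calls` entry on an assistant message must be answered by a `tool` role message with the same `tool_call_id`. The types and names below are illustrative, not taken from the PR:

```typescript
// Minimal message shapes for the OpenAI-compatible conversation format.
type Msg =
  | { role: 'assistant'; content?: string; tool_calls?: { id: string }[] }
  | { role: 'tool'; tool_call_id: string; content: string }
  | { role: 'user' | 'system'; content: string }

// Returns true when some assistant tool_calls entry has no matching
// `tool` message — the condition that makes the next API call fail.
function hasOrphanedToolCalls(messages: Msg[]): boolean {
  const answered = new Set(
    messages
      .filter((m): m is Extract<Msg, { role: 'tool' }> => m.role === 'tool')
      .map((m) => m.tool_call_id)
  )
  return messages.some(
    (m) => m.role === 'assistant' && (m.tool_calls ?? []).some((c) => !answered.has(c.id))
  )
}
```

With the suggested fix, even a hallucinated tool name produces an error-bearing tool result, so a `tool` message is always appended and this checker stays false.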

Comment on lines +1214 to +1222
{
id: 'MiniMax-M2.5-highspeed',
pricing: {
input: 0.6,
cachedInput: 0.03,
output: 2.4,
updatedAt: '2025-06-01',
},
capabilities: {

Verify MiniMax-M2.5-highspeed pricing

MiniMax-M2.5-highspeed is priced at 2× the cost of the base model (input: $0.60/M vs $0.30/M). This is the opposite of what "highspeed" variants typically imply (usually a cheaper, faster version). Please double-check the MiniMax pricing page to confirm the values are not swapped before merging.


@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.


```tsx
<title>MiniMax</title>
<rect width='120' height='120' rx='24' fill='#1A1A2E' />
<path
  d='M30 80V40l15 20 15-20v40M70 40v40M80 40l10 20 10-20v40'
```

SVG icon path draws incomplete second "M" character

Low Severity

The third sub-path M80 40l10 20 10-20v40 in the MiniMaxIcon SVG is missing its left vertical stroke. It traces (80,40)→(90,60)→(100,40)→(100,80), rendering as a V-shape with a trailing line — not an "M". Compare with the first sub-path which correctly starts from the bottom M30 80V40l15 20 15-20v40, drawing the upward left stroke before the V-shape. The third sub-path needs to start at the bottom (e.g. M80 80V40l10 20 10-20v40) to draw a matching "M".

