
feat: add MiniMax as first-class LLM provider#3866

Open
octo-patch wants to merge 1 commit into IBM:main from octo-patch:feat/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax AI as the 13th supported LLM provider in ContextForge Gateway. MiniMax exposes an OpenAI-compatible API at api.minimax.io, offering models such as MiniMax-M2.7 and MiniMax-M2.7-highspeed with context windows of up to 1M tokens, along with chat completion, streaming, and function-calling support.

Changes

  • mcpgateway/llm_schemas.py: Add MINIMAX = "minimax" to LLMProviderTypeEnum
  • mcpgateway/db.py: Add MINIMAX constant to LLMProviderType, include in get_all_types() and get_provider_defaults() with API base, default model, and model list endpoint
  • mcpgateway/llm_provider_configs.py: Add MiniMax provider config definition with API key requirement and default base URL
  • docs/docs/using/clients/llm-chat.md: Add MiniMax to the supported providers table
  • tests/unit/mcpgateway/test_minimax_provider.py: 21 unit tests covering enum registration, config registry, schema validation, proxy routing, and provider service integration
  • tests/integration/test_minimax_provider.py: 5 integration tests verifying end-to-end chat completion flow, config consistency, and multi-model support
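The enum and defaults additions can be sketched roughly as follows. Only `MINIMAX = "minimax"` and the function/enum names come from this PR's description; the `OPENAI` member, the key names in the defaults dict, and the `/v1` path are illustrative assumptions:

```python
from enum import Enum

class LLMProviderTypeEnum(str, Enum):
    # Existing providers elided; OPENAI shown for context (assumed member).
    OPENAI = "openai"
    MINIMAX = "minimax"  # the value this PR adds

# Hypothetical sketch of the entry added to db.py's get_provider_defaults();
# real key names and the /v1 path may differ.
MINIMAX_DEFAULTS = {
    "api_base": "https://api.minimax.io/v1",
    "default_model": "MiniMax-M2.7",
    "models_endpoint": "/models",
}
```

Because the enum subclasses `str`, `LLMProviderTypeEnum("minimax")` resolves directly from the stored string value, which is how provider lookups by type typically work.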

Why MiniMax?

MiniMax is a leading AI company offering high-performance LLMs behind an OpenAI-compatible API. Its M2.7 model features a 1M token context window, making it well suited to long-context applications. Because the API is OpenAI-compatible, MiniMax routes through the existing _build_openai_request path with zero changes to the proxy service.
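In practice this means an OpenAI-style chat-completion payload can simply be pointed at MiniMax's base URL. A minimal sketch, where `build_minimax_request` is a hypothetical helper (not the gateway's actual `_build_openai_request`), and the `/v1/chat/completions` path and field names follow the OpenAI convention:

```python
def build_minimax_request(model: str, messages: list, stream: bool = False) -> dict:
    """Hypothetical helper showing the shape of an OpenAI-compatible
    request aimed at MiniMax; illustration only, not gateway code."""
    return {
        "url": "https://api.minimax.io/v1/chat/completions",  # /v1 path assumed
        "headers": {"Authorization": "Bearer $MINIMAX_API_KEY"},  # placeholder key
        "json": {"model": model, "messages": messages, "stream": stream},
    }

req = build_minimax_request("MiniMax-M2.7",
                            [{"role": "user", "content": "Hello"}])
```

Only the base URL and API key differ from an OpenAI request, which is why the proxy service needs no changes.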

Testing

All 26 new tests pass, and all 181 existing LLM-related tests pass with no regressions.
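For illustration, one of the mocked-HTTP integration checks might look like the sketch below. `chat_completion` is a hypothetical stand-in for the gateway's proxy call, not the real code; the actual tests live in tests/integration/test_minimax_provider.py:

```python
import json
import urllib.request
from unittest.mock import MagicMock, patch

def chat_completion(base_url: str, model: str, messages: list) -> dict:
    """Hypothetical stand-in for the gateway's proxy call (illustration only)."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def test_minimax_chat_completion_mocked():
    # Fake the HTTP layer so no network is touched, mirroring the
    # mocked-HTTP approach described for the integration tests.
    fake = MagicMock()
    fake.__enter__.return_value = fake
    fake.read.return_value = b'{"choices": [{"message": {"content": "hi"}}]}'
    with patch("urllib.request.urlopen", return_value=fake):
        out = chat_completion("https://api.minimax.io/v1", "MiniMax-M2.7",
                              [{"role": "user", "content": "Hello"}])
    assert out["choices"][0]["message"]["content"] == "hi"

test_minimax_chat_completion_mocked()
```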

Add MiniMax AI as the 13th supported LLM provider in ContextForge.
MiniMax offers OpenAI-compatible API endpoints at api.minimax.io,
supporting models like MiniMax-M2.7 and MiniMax-M2.7-highspeed with
up to 1M token context windows.

Changes:
- Add MINIMAX enum to LLMProviderTypeEnum and LLMProviderType
- Add MiniMax provider config with API base and key settings
- Add MiniMax to provider defaults with model list support
- Add MiniMax to supported providers in LLM Chat docs
- Add 21 unit tests and 5 integration tests

Signed-off-by: octo-patch <octopatch.dev@gmail.com>
Signed-off-by: PR Bot <pr-bot@minimaxi.com>
@crivetimihai added the enhancement (New feature or request) and COULD (P3: Nice-to-have features with minimal impact if left out; included if time permits) labels on Mar 29, 2026
@crivetimihai added this to the Release 1.2.0 milestone on Mar 29, 2026
@crivetimihai
Member

Thanks @octo-patch. Clean implementation — follows the existing provider pattern well, and the integration tests with mocked HTTP are a nice touch.

@octo-patch
Author

Thank you @crivetimihai! Glad the implementation fits the existing patterns. Happy to address any additional feedback if needed.
