feat: add MiniMax as first-class LLM provider #3866
Open
octo-patch wants to merge 1 commit into IBM:main
Conversation
Add MiniMax AI as the 13th supported LLM provider in ContextForge. MiniMax offers OpenAI-compatible API endpoints at api.minimax.io, supporting models like MiniMax-M2.7 and MiniMax-M2.7-highspeed with up to 1M token context windows.

Changes:
- Add MINIMAX enum to LLMProviderTypeEnum and LLMProviderType
- Add MiniMax provider config with API base and key settings
- Add MiniMax to provider defaults with model list support
- Add MiniMax to supported providers in LLM Chat docs
- Add 21 unit tests and 5 integration tests

Signed-off-by: octo-patch <octopatch.dev@gmail.com>
Signed-off-by: PR Bot <pr-bot@minimaxi.com>
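For illustration, the enum and defaults changes listed above might look roughly like the sketch below. The member name MINIMAX, the base URL, and the MiniMax-M2.7 model come from the PR text; the surrounding enum members, the defaults field names, and the file layout are assumptions, not the gateway's actual code.

```python
from enum import Enum

class LLMProviderTypeEnum(str, Enum):
    """LLM provider identifiers (abridged; existing members are assumed)."""
    OPENAI = "openai"
    ANTHROPIC = "anthropic"
    # ... other existing providers ...
    MINIMAX = "minimax"  # new provider added by this PR

# Hypothetical defaults entry mirroring the PR's "provider defaults" change;
# only the api.minimax.io base and the MiniMax-M2.7 model name are from the PR.
MINIMAX_DEFAULTS = {
    "api_base": "https://api.minimax.io/v1",
    "default_model": "MiniMax-M2.7",
    "requires_api_key": True,
}
```

Because the enum mixes in str, the string value "minimax" round-trips cleanly through schema validation and database storage.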
Member
Thanks @octo-patch. Clean implementation — follows the existing provider pattern well, and the integration tests with mocked HTTP are a nice touch.
Author
Thank you @crivetimihai! Glad the implementation fits the existing patterns. Happy to address any additional feedback if needed.
Summary
Add MiniMax AI as the 13th supported LLM provider in ContextForge Gateway. MiniMax provides an OpenAI-compatible API at
api.minimax.io, offering models like MiniMax-M2.7 and MiniMax-M2.7-highspeed with up to 1M token context windows, chat completion, streaming, and function calling support.

Changes
- mcpgateway/llm_schemas.py: Add MINIMAX = "minimax" to LLMProviderTypeEnum
- mcpgateway/db.py: Add MINIMAX constant to LLMProviderType, include in get_all_types() and get_provider_defaults() with API base, default model, and model list endpoint
- mcpgateway/llm_provider_configs.py: Add MiniMax provider config definition with API key requirement and default base URL
- docs/docs/using/clients/llm-chat.md: Add MiniMax to the supported providers table
- tests/unit/mcpgateway/test_minimax_provider.py: 21 unit tests covering enum registration, config registry, schema validation, proxy routing, and provider service integration
- tests/integration/test_minimax_provider.py: 5 integration tests verifying end-to-end chat completion flow, config consistency, and multi-model support

Why MiniMax?
MiniMax is a leading AI company offering high-performance LLM models via an OpenAI-compatible API. Their M2.7 model features a 1M token context window, making it suitable for long-context applications. Since the API is OpenAI-compatible, MiniMax routes through the existing
_build_openai_request path with zero changes to the proxy service.

Testing
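The "zero proxy changes" claim rests on OpenAI API compatibility: the same request shape works for any such provider, with only the base URL and key differing. Below is a minimal illustrative sketch of that idea; it is not the gateway's actual _build_openai_request, and the /v1 path segment and example key are assumptions.

```python
def build_openai_request(api_base: str, api_key: str, model: str, messages: list) -> dict:
    """Assemble an OpenAI-style chat completion request for any compatible backend."""
    return {
        "url": f"{api_base}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "json": {"model": model, "messages": messages, "stream": False},
    }

# Swapping in MiniMax requires only a different base URL, key, and model name.
req = build_openai_request(
    "https://api.minimax.io/v1",
    "sk-example",  # placeholder key, not a real credential
    "MiniMax-M2.7",
    [{"role": "user", "content": "Hello"}],
)
```

Because nothing provider-specific appears in the request body, adding a compatible provider reduces to registering its enum value, base URL, and defaults.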
All 26 new tests pass, and all 181 existing LLM-related tests pass with no regressions: