
feat: add MiniMax as a first-class LLM and TTS provider#2296

Open
octo-patch wants to merge 1 commit into arc53:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax as a native provider for both LLM inference and text-to-speech in DocsGPT, giving users access to MiniMax-M2.5 and MiniMax-M2.5-highspeed models (204K context window) via the OpenAI-compatible API.

What's included

  • LLM provider (application/llm/minimax.py): Extends OpenAILLM with MiniMax's base URL (https://api.minimax.io/v1), temperature clamping to the valid range (0, 1], and structured output passthrough disabled
  • TTS provider (application/tts/minimax_tts.py): MiniMax speech-2.8-hd model with hex-to-base64 audio decoding
  • Model registry: MiniMax-M2.5 and MiniMax-M2.5-highspeed with tool calling and image attachment support
  • Settings: MINIMAX_API_KEY environment variable with normalizer validation
  • Setup scripts: MiniMax option added to both bash and PowerShell setup wizards
  • Documentation: Updated cloud-providers.mdx table and README feature list
  • Tests: 10 LLM unit tests + 4 TTS unit tests, all passing
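The temperature clamping and base-URL override described above can be sketched as follows. This is an illustrative sketch only: `OpenAILLM` here is a minimal stand-in for DocsGPT's actual base class, and the clamp floor value is an assumption, not the PR's code.

```python
# Hedged sketch of the MiniMax LLM provider subclass described above.
# Class and attribute names are assumptions; the real code may differ.

class OpenAILLM:
    """Minimal stand-in for DocsGPT's OpenAI-compatible base class (assumed)."""

    def __init__(self, api_key: str, base_url: str = "https://api.openai.com/v1"):
        self.api_key = api_key
        self.base_url = base_url


def clamp_temperature(temperature: float) -> float:
    """Clamp to MiniMax's valid range (0, 1]: exactly 0 is outside the range,
    so a small positive floor is used (the floor value 0.01 is an assumption)."""
    return min(max(temperature, 0.01), 1.0)


class MiniMaxLLM(OpenAILLM):
    SUPPORTS_STRUCTURED_OUTPUT = False  # response_format passthrough disabled

    def __init__(self, api_key: str):
        super().__init__(api_key, base_url="https://api.minimax.io/v1")
```

Because the provider speaks the OpenAI-compatible protocol, subclassing the existing OpenAI client and overriding only the base URL and parameter constraints keeps the diff small.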

Files changed

| File | Change |
| --- | --- |
| application/llm/minimax.py | New — MiniMax LLM provider |
| application/tts/minimax_tts.py | New — MiniMax TTS provider |
| application/core/settings.py | Add MINIMAX_API_KEY |
| application/core/model_settings.py | Add MINIMAX to ModelProvider enum |
| application/core/model_configs.py | Add MINIMAX_MODELS definitions |
| application/core/model_utils.py | Add minimax to provider key map |
| application/llm/llm_creator.py | Register MiniMaxLLM in factory |
| application/tts/tts_creator.py | Register MiniMaxTTS in factory |
| .env-template | Add MINIMAX_API_KEY |
| setup.sh / setup.ps1 | Add MiniMax option |
| docs/content/Models/cloud-providers.mdx | Add MiniMax row |
| README.md | Add MiniMax to provider list |
| tests/llm/test_minimax_llm.py | 10 unit tests |
| tests/tts/test_minimax_tts.py | 4 unit tests |
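The two factory registrations in the table follow a name-to-class mapping. The sketch below illustrates that pattern under assumed names; DocsGPT's actual `LLMCreator` may be structured differently.

```python
# Illustrative factory-registration pattern; names are assumptions
# based on the file list above, not the PR's verbatim code.

class MiniMaxLLM:
    """Placeholder for the provider class registered in the factory."""


class LLMCreator:
    # Registry mapping provider keys to provider classes.
    llms = {
        "minimax": MiniMaxLLM,
        # ... other providers registered the same way
    }

    @classmethod
    def create_llm(cls, provider: str, *args, **kwargs):
        """Look up the provider key and instantiate the matching class."""
        if provider not in cls.llms:
            raise ValueError(f"Unknown LLM provider: {provider}")
        return cls.llms[provider](*args, **kwargs)
```

Registering the class in a dictionary keyed by provider name is what lets the integration check in the test plan (`LLMCreator.llms['minimax']` resolving to `MiniMaxLLM`) pass without touching any call sites.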

Test plan

  • All 10 MiniMax LLM unit tests pass (pytest tests/llm/test_minimax_llm.py)
  • All 4 MiniMax TTS unit tests pass (pytest tests/tts/test_minimax_tts.py)
  • Existing OpenAI LLM tests still pass (no regressions)
  • Existing TTS tests still pass (no regressions)
  • Integration check: LLMCreator.llms['minimax'] correctly resolves to MiniMaxLLM
  • Model configs validate: 2 models registered with correct provider enum

Add MiniMax (https://www.minimaxi.com) as a native provider for both LLM
inference and text-to-speech, giving users access to MiniMax-M2.5 and
MiniMax-M2.5-highspeed models (204K context window) via the OpenAI-compatible
API at api.minimax.io.

Changes:
- LLM provider (`application/llm/minimax.py`): extends OpenAILLM with
  MiniMax base URL, temperature clamping to (0, 1], and response_format
  passthrough disabled
- TTS provider (`application/tts/minimax_tts.py`): speech-2.8-hd model
  with hex-to-base64 audio decoding
- Model registry: MiniMax-M2.5 and MiniMax-M2.5-highspeed with tool
  calling and image attachment support
- Settings: MINIMAX_API_KEY environment variable with normalizer
- Setup scripts: MiniMax option in both bash and PowerShell wizards
- Documentation: cloud-providers.mdx table and README feature list
- Tests: 10 LLM unit tests + 4 TTS unit tests, all passing
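The hex-to-base64 audio decoding mentioned above can be sketched as below; the function name and placement are hypothetical, but the conversion itself is standard-library only.

```python
import base64


def hex_audio_to_base64(hex_audio: str) -> str:
    """Convert a hex-encoded audio payload (as the MiniMax TTS endpoint
    returns it, per the description above) into base64 for downstream
    consumers. Function name is an assumption for illustration."""
    raw_bytes = bytes.fromhex(hex_audio)
    return base64.b64encode(raw_bytes).decode("ascii")


# Example: the hex encoding of b"Hello" re-encodes to base64 as "SGVsbG8=".
# hex_audio_to_base64("48656c6c6f") -> "SGVsbG8="
```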
@vercel

vercel bot commented Mar 16, 2026

Someone is attempting to deploy a commit to the Arc53 Team on Vercel.

A member of the Team first needs to authorize it.
