
feat: add MiniMax as first-class LLM provider#674

Open
octo-patch wants to merge 1 commit into stakpak:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax as a built-in LLM provider alongside OpenAI, Anthropic, and Gemini. MiniMax offers high-performance models (M2.7 with 1M context, M2.5 with 204K context) through an OpenAI-compatible API at https://api.minimax.io/v1.

What's included

  • New provider module (libs/ai/src/providers/minimax/):

    • types.rs – MiniMaxConfig with API key + base URL, MINIMAX_API_KEY env auto-detect
    • convert.rs – SDK→OpenAI request conversion with temperature clamping (MiniMax rejects temperature=0.0, clamped to 0.01)
    • stream.rs – SSE streaming via reqwest_eventsource, tool-call state tracking
    • provider.rs – Full Provider trait implementation: generate, stream, list_models, build_headers
    • Static model catalogue: MiniMax-M2.7, MiniMax-M2.7-highspeed, MiniMax-M2.5, MiniMax-M2.5-highspeed
  • SDK wiring:

    • InferenceConfig.minimax() and .minimax_config() builder methods
    • ClientBuilder – MiniMax registration in with_inference_config()
    • ProviderRegistry – Auto-registration from MINIMAX_API_KEY env var
    • ProviderKind::MiniMax variant with "minimax" string conversion
  • Shared library:

    • ProviderConfig::MiniMax variant with api_key, api_endpoint, auth fields
    • stakai_adapter.rs – build_inference_config() and build_provider_registry_direct() arms
  • CLI integration:

    • stakpak auth login --provider minimax --api-key $MINIMAX_API_KEY
    • Config profile template (generate_minimax_profile())
    • BuiltinProvider::MiniMax enum variant
    • Endpoint cleaning, env-var auth resolution, TUI display info
  • Documentation: README updated with MiniMax in provider list, auth login example, BYOK config
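The configuration pieces above (API key + base URL, MINIMAX_API_KEY auto-detect, and build_headers) can be sketched roughly as follows. This is an illustrative sketch, not the actual types.rs/provider.rs code: the struct fields, method names, and return type are assumptions, while the endpoint URL and env var name come from the PR description.

```rust
// Hypothetical sketch of the MiniMax config; field and method names are
// assumptions, not the exact items from libs/ai/src/providers/minimax/.
#[derive(Debug, Clone)]
struct MiniMaxConfig {
    api_key: String,
    base_url: String,
}

impl MiniMaxConfig {
    /// Use an explicit key if given, otherwise fall back to the
    /// MINIMAX_API_KEY environment variable (the auto-detect path).
    fn from_env_or(key: Option<String>) -> Option<Self> {
        let api_key = key.or_else(|| std::env::var("MINIMAX_API_KEY").ok())?;
        Some(Self {
            api_key,
            base_url: "https://api.minimax.io/v1".to_string(),
        })
    }

    /// Headers for an OpenAI-compatible endpoint: Bearer auth + JSON body.
    fn build_headers(&self) -> Vec<(String, String)> {
        vec![
            ("Authorization".into(), format!("Bearer {}", self.api_key)),
            ("Content-Type".into(), "application/json".into()),
        ]
    }
}
```

Returning `Option` from the env fallback mirrors how the registry can silently skip registration when no key is present.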

Tests

  • 23 unit tests: temperature clamping, request conversion, message parsing, tool calls, provider config, error parsing, static models, headers
  • 3 integration tests (require MINIMAX_API_KEY): generate, streaming, system message – all passing

Design decisions

  • Reuses OpenAI wire types (ChatCompletionRequest, ChatMessage, etc.) to minimize code duplication
  • Temperature clamping: 0.0 → 0.01 (MiniMax requires (0.0, 1.0] range)
  • Static model list (MiniMax doesn't expose a /v1/models endpoint)
  • Follows the same patterns as existing Gemini and Stakpak providers
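Two of the decisions above are small enough to sketch directly: the temperature clamp for MiniMax's (0.0, 1.0] range, and the static model catalogue. The function name, the upper-bound cap, and the exact const shape are illustrative assumptions; the 0.0 → 0.01 rule and the four model names come from the PR.

```rust
// Sketch of the clamping rule: MiniMax rejects temperature=0.0, so 0.0 is
// nudged to 0.01; capping values above 1.0 is an assumed complement to keep
// the result inside (0.0, 1.0]. Not the crate's actual API.
fn clamp_temperature(t: f32) -> f32 {
    if t <= 0.0 {
        0.01
    } else if t > 1.0 {
        1.0
    } else {
        t
    }
}

// Static catalogue stand-in, since MiniMax exposes no /v1/models endpoint.
const MINIMAX_MODELS: &[&str] = &[
    "MiniMax-M2.7",
    "MiniMax-M2.7-highspeed",
    "MiniMax-M2.5",
    "MiniMax-M2.5-highspeed",
];
```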

Test plan

  • All 23 MiniMax unit tests pass
  • All 3 integration tests pass with MINIMAX_API_KEY
  • Full workspace cargo check passes
  • Existing test suite unaffected (all 48 unit + 33 doc tests pass)
  • Verify CI pipeline passes

Add MiniMax (https://api.minimax.io) as a built-in provider alongside
OpenAI, Anthropic, and Gemini. MiniMax uses an OpenAI-compatible API,
so the implementation reuses existing OpenAI wire types while adding
MiniMax-specific temperature clamping (0.0 → 0.01, since MiniMax
rejects temperature=0.0) and a static model catalogue.

Models: MiniMax-M2.7, MiniMax-M2.7-highspeed (1M context),
        MiniMax-M2.5, MiniMax-M2.5-highspeed (204K context).

Changes:
- New provider module: libs/ai/src/providers/minimax/ (types, convert,
  stream, provider) with 23 unit tests and 3 integration tests
- Provider trait impl: generate, stream, list_models, build_headers
- SDK wiring: InferenceConfig, ClientBuilder, ProviderRegistry (auto-
  registers from MINIMAX_API_KEY env var), ProviderKind enum
- Shared lib: ProviderConfig::MiniMax variant, stakai_adapter arms
- CLI: auth login support, config profile template, endpoint cleaning
- README: MiniMax in provider list, auth login example, BYOK config
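For context on the streaming path, the SSE framing that stream.rs consumes via reqwest_eventsource follows the usual OpenAI-compatible convention: each event is a `data: <json>` line and the stream ends with a `data: [DONE]` sentinel. A minimal line-level sketch (the function is illustrative, not stream.rs verbatim):

```rust
// Extract the JSON payload from one SSE line. Returns None for the [DONE]
// sentinel and for non-data lines such as `: keep-alive` comments.
fn parse_sse_line(line: &str) -> Option<&str> {
    let payload = line.strip_prefix("data:")?.trim();
    if payload == "[DONE]" { None } else { Some(payload) }
}
```

The real implementation additionally tracks partial tool-call deltas across events before yielding complete tool calls.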
