feat: add MiniMax as alternative LLM provider for RAG dataflows#1523
octo-patch wants to merge 2 commits into apache:main
Conversation
Add MiniMax M2.7 as an alternative LLM provider alongside OpenAI in both faiss_rag and conversational_rag dataflows, using Hamilton's @config.when pattern.

Changes:
- Use @config.when_not(provider="minimax") for OpenAI (backward-compatible default)
- Use @config.when(provider="minimax") for MiniMax via its OpenAI-compatible API
- Update valid_configs.jsonl with a minimax configuration
- Update tags.json with a minimax tag
- Update README.md with MiniMax usage documentation
- Add 35 unit tests + 6 integration tests

MiniMax M2.7 features:
- 1M token context window
- OpenAI-compatible API at https://api.minimax.io/v1
- Configurable via the MINIMAX_API_KEY environment variable
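To make the provider-switching pattern concrete, here is a minimal stdlib-only sketch of how @config.when-style resolution behaves. This is an illustration of the mechanism, not Hamilton's actual implementation: the `resolve` helper, the placeholder client functions, and their return values are all assumed names for this example (the real dataflow returns openai.OpenAI clients).

```python
# Illustrative sketch (NOT Hamilton's real implementation) of how
# @config.when-style variants resolve: functions named with a
# "__variant" suffix are candidates, and the config decides which
# one becomes the node "llm_client".

def llm_client__openai(api_key: str) -> str:
    # Placeholder: the real dataflow returns an openai.OpenAI client.
    return f"openai-client({api_key})"

def llm_client__minimax(api_key: str) -> str:
    # Placeholder: the real dataflow points the OpenAI SDK at
    # https://api.minimax.io/v1 (per the PR description).
    return f"minimax-client({api_key})"

def resolve(config: dict, variants: dict):
    """Pick the variant whose condition matches the config."""
    if config.get("provider") == "minimax":
        return variants["minimax"]   # like @config.when(provider="minimax")
    return variants["openai"]        # like @config.when_not(provider="minimax")

variants = {"openai": llm_client__openai, "minimax": llm_client__minimax}
llm_client = resolve({"provider": "minimax"}, variants)
print(llm_client("KEY"))  # minimax-client(KEY)
```

With an empty config (no "provider" key), `resolve` falls back to the OpenAI variant, which is what keeps the change backward compatible.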
```python
@config.when_not(provider="minimax")
def conversational_rag_response__openai(answer_prompt: str, llm_client: openai.OpenAI) -> str:
```
We can simplify the code here to make it DRYer: if we raise `model` to be an input, then this function doesn't need to be touched.
Good point! I've refactored both files to extract `model` as a config-driven input. Now `llm_client` and `model` each have `__openai` / `__minimax` config variants, while `standalone_question`, `conversational_rag_response`, and `rag_response` are single shared functions that accept `model` as a parameter. This eliminates the duplication entirely.
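The refactored shape described above can be sketched in plain Python (decorators omitted, names assumed). Only the model name and client vary per provider; the response function is shared. `FakeClient` and its `complete` method are stand-ins invented for this sketch, and the OpenAI model string is a placeholder not taken from the PR; "MiniMax-M2.7" is the name the diff uses.

```python
# Sketch of the post-refactor layout: per-provider config variants
# for the model name, one shared response function.

def model__openai() -> str:
    return "gpt-placeholder"   # placeholder name, not from the PR

def model__minimax() -> str:
    return "MiniMax-M2.7"      # model name used in the diff

class FakeClient:
    """Stand-in for an OpenAI-compatible chat client."""
    def complete(self, model: str, prompt: str) -> str:
        return f"[{model}] answer to: {prompt}"

def rag_response(answer_prompt: str, llm_client: FakeClient, model: str) -> str:
    # Single shared function: no per-provider copy needed, because
    # the provider-specific details arrive as inputs.
    return llm_client.complete(model=model, prompt=answer_prompt)

print(rag_response("What is RAG?", FakeClient(), model__minimax()))
# [MiniMax-M2.7] answer to: What is RAG?
```

The design point is that branching happens once, at the input nodes, so every downstream function stays provider-agnostic.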
```python
    :return: the response from the LLM.
    """
    response = llm_client.chat.completions.create(
        model="MiniMax-M2.7",
```
Can we move this up as an input parameter and simplify this code a little?
Done! Extracted `model` as an input parameter with config variants. `rag_response` is now a single function that takes `model: str`.
skrawcz left a comment:
looks good. thanks! can we just abstract one function a little more to reduce duplication please?
Address reviewer feedback: raise `model` to be a config-driven input parameter so response functions (standalone_question, rag_response, conversational_rag_response) don't need to be duplicated per provider. Only llm_client and model need config variants; the rest are shared. Co-Authored-By: Octopus <liyuan851277048@icloud.com>
Summary
Add MiniMax as an alternative LLM provider alongside OpenAI in both faiss_rag and conversational_rag contrib dataflows, using Hamilton's native @config.when pattern for provider switching.
Usage
Switch to MiniMax by setting MINIMAX_API_KEY and passing {"provider": "minimax"} in the dataflow config.
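A hedged sketch of what that switch amounts to, using only the stdlib: the base URL and environment variable come from the PR description, while the `client_settings` helper and the demo key value are invented for this illustration (the real dataflows build an openai.OpenAI client via Hamilton config).

```python
import os

def client_settings(config: dict) -> dict:
    """Assumed helper: derive client settings from the dataflow config."""
    if config.get("provider") == "minimax":
        return {
            # OpenAI-compatible endpoint, per the PR description.
            "base_url": "https://api.minimax.io/v1",
            "api_key": os.environ.get("MINIMAX_API_KEY", ""),
        }
    # Default (no provider key): plain OpenAI settings.
    return {"api_key": os.environ.get("OPENAI_API_KEY", "")}

os.environ["MINIMAX_API_KEY"] = "sk-demo"  # demo value for the sketch
settings = client_settings({"provider": "minimax"})
print(settings["base_url"])  # https://api.minimax.io/v1
```

Omitting the "provider" key leaves the OpenAI path untouched, which is the backward-compatible default the PR advertises.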
Why MiniMax?
MiniMax offers high-performance models with large context windows (up to 1M tokens) via an OpenAI-compatible API, making it a drop-in alternative for OpenAI in RAG pipelines. The M2.7 model provides strong reasoning capabilities at competitive pricing.
Files Changed (10 files)
faiss_rag (5 files):
conversational_rag (5 files):
Test plan