feat: add MiniMax chat model and TTS support #5891

ximiximi423 wants to merge 3 commits into FlowiseAI:main
Conversation

ximiximi423 commented on Mar 4, 2026
- Add ChatMiniMax node with FlowiseChatMiniMax integration
- Add MiniMax API credential definition
- Update models.json with MiniMax model list
- Add MiniMax TTS support in textToSpeech.ts
Made-with: Cursor
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request expands the platform's capabilities by integrating MiniMax's AI services. It introduces support for MiniMax chat models, allowing users to leverage MiniMax LLMs, and adds text-to-speech functionality powered by MiniMax. These additions provide more options for conversational AI and audio generation within the system.
Code Review
This pull request introduces support for MiniMax chat models and Text-to-Speech services. The changes include adding a new credential type for MiniMax, a new chat model node, and integrating MiniMax TTS into the existing text-to-speech functionality. The implementation is solid, but I've identified a few areas for improvement. Specifically, there's a potential runtime issue with how the temperature parameter is handled in the new chat model, a misleading type assertion, and a potential bug in the Server-Sent Events (SSE) stream processing for TTS that could result in incomplete audio. I've included suggestions to address these points.
```typescript
const obj: ChatMiniMaxInput = {
    temperature: parseFloat(temperature),
    modelName,
    miniMaxApiKey,
    streaming: streaming ?? true
}

if (maxTokens) obj.maxTokens = parseInt(maxTokens, 10)
if (topP) obj.topP = parseFloat(topP)
if (cache) obj.cache = cache
```
The temperature parameter is being unconditionally parsed and assigned. If a user clears this optional field in the UI, temperature could be undefined or an empty string. In both cases, parseFloat(temperature) would result in NaN, which is then passed to the model constructor. This is likely to cause a runtime error when making an API call. To prevent this, you should handle temperature conditionally, similar to how maxTokens and topP are handled.
Suggested change:

```diff
 const obj: ChatMiniMaxInput = {
-    temperature: parseFloat(temperature),
     modelName,
     miniMaxApiKey,
     streaming: streaming ?? true
 }
+if (temperature) obj.temperature = parseFloat(temperature)
 if (maxTokens) obj.maxTokens = parseInt(maxTokens, 10)
 if (topP) obj.topP = parseFloat(topP)
 if (cache) obj.cache = cache
```
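As a standalone sketch of why the guard matters (the variable names below are illustrative, not the node's actual fields): `parseFloat` returns `NaN` for an empty string, while a truthiness check simply leaves the property unset so the model constructor can fall back to its own default.

```typescript
// Sketch: what happens when the optional temperature field is cleared.
// `emptyField` stands in for the raw form value.
const emptyField = ''

// Unconditional parsing produces NaN for an empty string:
const unguarded = parseFloat(emptyField)
console.log(Number.isNaN(unguarded)) // true

// Guarding with a truthiness check leaves the property unset instead:
const obj: { temperature?: number } = {}
if (emptyField) obj.temperature = parseFloat(emptyField)
console.log(obj.temperature) // undefined
```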
```typescript
revertToOriginalModel(): void {
    this.modelName = this.configuredModel
    this.maxTokens = this.configuredMaxToken as number
```
The as number type assertion here is unnecessary and potentially misleading. The configuredMaxToken property is of type number | undefined, and the this.maxTokens property on the parent class also accepts undefined. At runtime, if configuredMaxToken is undefined, this cast will not prevent undefined from being assigned, but it suppresses TypeScript's type checking and could confuse future maintainers. Removing the cast improves clarity and type safety.
Suggested change:

```diff
-this.maxTokens = this.configuredMaxToken as number
+this.maxTokens = this.configuredMaxToken
```
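Since TypeScript type assertions are erased at compile time, the cast never changes what is assigned at runtime; a minimal sketch (variable names are illustrative):

```typescript
// A type assertion exists only at compile time; the value passes
// through unchanged at runtime.
const configuredMaxToken: number | undefined = undefined

// The cast silences the compiler but still assigns undefined:
const withCast: number = configuredMaxToken as number
console.log(withCast) // undefined

// Dropping the cast keeps the honest type and the same runtime value:
const withoutCast: number | undefined = configuredMaxToken
console.log(withoutCast) // undefined
```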
```typescript
if (eventData.data?.status === 2) {
    break
}
```
The break statement here exits the inner loop that processes Server-Sent Events (SSE) lines from the current data chunk. If there are more messages in the same chunk after the one with data.status === 2, they will be skipped, and the code will proceed to read the next chunk from the stream. This could lead to missed audio data if the provider sends more data in the same chunk. It's safer to remove this break and let the loop process all lines in the current buffer. The outer loop will terminate correctly when the stream is closed and reader.read() returns { done: true }.
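To illustrate the failure mode, here is a self-contained sketch; the SSE payload shape below is a hypothetical stand-in for MiniMax's actual event format, not taken from the PR.

```typescript
// One network chunk can carry several SSE messages. Breaking out of the
// line loop on the first status === 2 drops anything after it in the chunk.
const chunk = [
    'data: {"data":{"audio":"aaaa","status":1}}',
    'data: {"data":{"audio":"bbbb","status":2}}',
    'data: {"data":{"audio":"cccc","status":1}}'
].join('\n')

function collectAudio(breakOnStatus2: boolean): string[] {
    const audio: string[] = []
    for (const line of chunk.split('\n')) {
        if (!line.startsWith('data: ')) continue
        const eventData = JSON.parse(line.slice('data: '.length))
        if (eventData.data?.audio) audio.push(eventData.data.audio)
        if (breakOnStatus2 && eventData.data?.status === 2) break
    }
    return audio
}

console.log(collectAudio(true))  // ['aaaa', 'bbbb'] (the trailing message is lost)
console.log(collectAudio(false)) // ['aaaa', 'bbbb', 'cccc']
```

Without the break, the loop drains every line already in the buffer, and the outer read loop still terminates once the stream closes.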
- Make temperature conditional to avoid NaN when field is empty
- Remove unnecessary 'as number' cast in revertToOriginalModel
- Remove premature break in SSE loop to avoid missing audio chunks

Made-with: Cursor