[Feature]【Hackathon 10th Spring No.47】Add MiniMax-M1 model support [cf] #7703
base: develop
@@ -0,0 +1,48 @@
[简体中文](../zh/best_practices/MiniMax-M1.md)
# MiniMax-M1 Model

## I. Environment Preparation

### 1.1 Support Requirements
MiniMax-M1 support in FastDeploy uses a hybrid decoder stack (a minimal dispatch sketch follows the list):

- Standard full-attention layers run through the existing FastDeploy attention backend.
- Linear-attention layers use the Lightning Attention Triton kernels in `fastdeploy/model_executor/ops/triton_ops/lightning_attn.py`.
- Current first-pass support targets BF16 inference.
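For intuition, here is a self-contained sketch of how such a per-layer dispatch could look. It rests on one stated assumption — the MiniMax-01 interleave pattern of one full-attention layer after every seven linear-attention layers over 80 layers — and every name in it is hypothetical, not FastDeploy's actual API:

```python
# Hypothetical sketch of the hybrid decoder dispatch; not FastDeploy code.
NUM_LAYERS = 80  # assumed total, consistent with 70 linear + 10 full layers

def is_full_attention(layer_idx: int) -> bool:
    # Assumption: one full-attention layer after every seven
    # linear-attention layers (the MiniMax-01 interleave pattern).
    return layer_idx % 8 == 7

def make_layer(layer_idx: int):
    # Placeholder callables standing in for the two attention backends.
    if is_full_attention(layer_idx):
        return lambda hidden: hidden  # full attention: existing FD backend
    return lambda hidden: hidden      # linear attention: Lightning Triton kernels

layers = [make_layer(i) for i in range(NUM_LAYERS)]
assert sum(is_full_attention(i) for i in range(NUM_LAYERS)) == 10
```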
### 1.2 Installing FastDeploy

For the installation procedure, see [FastDeploy GPU Installation](../get_started/installation/nvidia_gpu.md).
## II. How to Use

### 2.1 Basics: Starting the Service
```shell
MODEL_PATH=/models/MiniMax-Text-01

python -m fastdeploy.entrypoints.openai.api_server \
    --model "$MODEL_PATH" \
    --port 8180 \
    --metrics-port 8181 \
    --engine-worker-queue-port 8182 \
    --max-model-len 32768 \
    --max-num-seqs 32
```
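Once the service is up, it can be queried through its OpenAI-compatible endpoint. The sketch below uses the `openai` Python package pointed at the port configured above; the `model` and `api_key` values are placeholders, so adjust them to your deployment:

```python
# Minimal client sketch against the service started above; assumes the
# OpenAI-compatible endpoint is listening on port 8180.
import openai

client = openai.OpenAI(base_url="http://localhost:8180/v1", api_key="null")

response = client.chat.completions.create(
    model="null",  # placeholder; some deployments expect the model path here
    messages=[{"role": "user", "content": "Hello, who are you?"}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```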
### 2.2 Model Notes
- HuggingFace architecture: `MiniMaxText01ForCausalLM`
- Hybrid layer layout: 70 linear-attention layers and 10 full-attention layers
- MoE routing: 32 experts, top-2 experts per token (see the routing sketch below)
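To make "top-2 of 32 experts" concrete, here is a small self-contained sketch. The softmax-then-renormalize scheme is a common MoE convention and an assumption here, not a statement of MiniMax-M1's exact gating math:

```python
# Illustrative top-2-of-32 MoE routing; MiniMax-M1's exact gating may differ.
import numpy as np

NUM_EXPERTS, TOP_K = 32, 2

def route(router_logits: np.ndarray):
    # Softmax over expert logits, then keep the two largest weights
    # per token and renormalize them to sum to 1.
    probs = np.exp(router_logits - router_logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)
    top_idx = np.argsort(probs, axis=-1)[..., -TOP_K:]
    top_w = np.take_along_axis(probs, top_idx, axis=-1)
    return top_idx, top_w / top_w.sum(-1, keepdims=True)

tokens = np.random.randn(4, NUM_EXPERTS)  # router logits for 4 tokens
experts, weights = route(tokens)
print(experts.shape, weights.sum(-1))     # (4, 2), each row sums to 1.0
```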
## III. Known Limitations
- This initial integration focuses on model structure and backend wiring.
- Linear-attention KV history is currently stored in instance variables; it needs to migrate to a slot-based cache for proper multi-request isolation (a TODO is already noted in the code; see the sketch below).
- Low-bit quantization support still requires follow-up validation against MiniMax-M1 weights.
- Production validation should include GPU runtime checks for the Lightning Attention decode/prefill paths.
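The slot-based cache mentioned above would key each request's recurrent linear-attention state by its scheduler slot instead of instance attributes, so concurrent requests cannot overwrite each other's history. A minimal sketch of the idea, with all names hypothetical:

```python
# Hypothetical sketch of slot-keyed linear-attention state; not FastDeploy code.
import numpy as np

class LinearAttnStateCache:
    """One recurrent KV state per scheduler slot."""

    def __init__(self, num_slots: int, num_heads: int, head_dim: int):
        # Linear attention carries a per-head (head_dim x head_dim) state matrix.
        self.states = np.zeros((num_slots, num_heads, head_dim, head_dim))

    def get(self, slot: int) -> np.ndarray:
        return self.states[slot]

    def update(self, slot: int, new_state: np.ndarray) -> None:
        self.states[slot] = new_state

    def reset(self, slot: int) -> None:
        # Called when a request finishes and its slot is recycled.
        self.states[slot] = 0.0
```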
|
|
<!-- Hackathon 10th Spring No.47 -->
@@ -0,0 +1,48 @@
[English](../../best_practices/MiniMax-M1.md)
# MiniMax-M1 Model

## I. Environment Preparation

### 1.1 Support Notes
MiniMax-M1 in FastDeploy uses a hybrid decoder structure:

- Full-attention layers reuse the existing FastDeploy attention backend.
- Linear-attention layers use the Lightning Attention Triton kernels in `fastdeploy/model_executor/ops/triton_ops/lightning_attn.py`.
- The current first release primarily targets BF16 inference.
### 1.2 Installing FastDeploy

For the installation procedure, see the [FastDeploy GPU installation guide](../get_started/installation/nvidia_gpu.md).
## II. How to Use

### 2.1 Basic Launch Command
```shell
MODEL_PATH=/models/MiniMax-Text-01

python -m fastdeploy.entrypoints.openai.api_server \
    --model "$MODEL_PATH" \
    --port 8180 \
    --metrics-port 8181 \
    --engine-worker-queue-port 8182 \
    --max-model-len 32768 \
    --max-num-seqs 32
```
### 2.2 Model Characteristics

- HuggingFace architecture name: `MiniMaxText01ForCausalLM`
- Layer-type distribution: 70 linear-attention layers + 10 full-attention layers
- MoE routing: 32 experts, top-2 experts selected per token
## III. Current Limitations

- The current version prioritizes model graph construction and backend wiring.
- Linear-attention KV history is currently stored in instance variables; for concurrent multi-request scenarios it needs to migrate to a slot-based cache (a TODO is already noted in the code).
- Low-bit quantized inference still needs further validation against real weights.
- The Lightning Attention prefill/decode paths still need end-to-end validation on GPU.

<!-- Hackathon 10th Spring No.47 -->
```diff
@@ -1,4 +1,4 @@
-"""
+"""Module for Hackathon 10th Spring No.47.
 # Copyright (c) 2025 PaddlePaddle Authors. All Rights Reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
```
|
```diff
@@ -341,7 +341,7 @@ def get_rope_impl(
     """
 
     architecture = model_config.architectures[0]
-    if architecture.startswith("Qwen"):
+    if architecture.startswith("Qwen") or architecture.startswith("MiniMaxM1"):
         rotary_emb_layer = QwenRotaryEmbedding(rotary_dim, base, partial_rotary_factor)
         rotary_emb = rotary_emb_layer(position_ids)
     elif architecture.startswith("Glm"):
```

Review comment — 🔴 Bug: two architecture names are registered, but this check only matches `MiniMaxM1`, so the documented HuggingFace architecture `MiniMaxText01ForCausalLM` would not take this branch. Suggested fix:

```python
if architecture.startswith("Qwen") or architecture.startswith("MiniMaxM1") or architecture.startswith("MiniMaxText01"):
```
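A quick string check, using the architecture name stated in the docs above, confirms the mismatch the reviewer flags:

```python
arch = "MiniMaxText01ForCausalLM"        # HuggingFace architecture from the docs
print(arch.startswith("MiniMaxM1"))      # False -> this branch is never taken
print(arch.startswith("MiniMaxText01"))  # True with the suggested fix
```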