MAF-19543: feat(preset): set --max-model-len default to -1#102
Merged
Conversation
Let vLLM auto-determine the maximum context length from the model config instead of hardcoding conservative values. This avoids unnecessarily limiting the usable context window when there are no memory constraints. Excludes deepseek-r1 PD disaggregation presets which retain their current values due to memory constraints. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Contributor
There was a problem hiding this comment.
Pull request overview
This PR updates vLLM-based Helm preset templates to set --max-model-len to -1, allowing vLLM to auto-determine context length from the model config rather than using preset-specific hardcoded values.
Changes:
- Switched multiple vLLM v0.15.1 presets (including GPT-OSS 120B, Kimi K2.5, DeepSeek-R1 variants) to
--max-model-len -1. - Switched GLM5 presets to
--max-model-len -1. - Switched many “quickstart” vLLM presets (AMD MI250/MI300X) to
--max-model-len -1, including DeepSeek-R1 PD prefill/decode presets.
Reviewed changes
Copilot reviewed 77 out of 77 changed files in this pull request and generated 4 comments.
Show a summary per file
| File | Description |
|---|---|
| deploy/helm/moai-inference-preset/templates/presets/vllm/v0.15.1/openai-gpt-oss-120b-nvidia-h200-sxm-tp2-moe-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/vllm/v0.15.1/openai-gpt-oss-120b-nvidia-h200-sxm-1.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/vllm/v0.15.1/openai-gpt-oss-120b-nvidia-h100-sxm-tp8-moe-tp8.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/vllm/v0.15.1/openai-gpt-oss-120b-nvidia-h100-sxm-tp4-moe-tp4.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/vllm/v0.15.1/openai-gpt-oss-120b-nvidia-h100-sxm-tp2-moe-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/vllm/v0.15.1/moonshotai-kimi-k2.5-nvidia-h200-sxm-tp8-moe-tp8.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/vllm/v0.15.1/moonshotai-kimi-k2.5-nvidia-h200-sxm-tp8-moe-ep8.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/vllm/v0.15.1/moonshotai-kimi-k2.5-nvidia-h100-sxm-tp8-moe-ep8.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/vllm/v0.15.1/deepseek-ai-deepseek-r1-nvidia-h200-sxm-tp8-moe-tp8.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/vllm/v0.15.1/deepseek-ai-deepseek-r1-nvidia-h200-sxm-tp8-moe-ep8.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/vllm/v0.15.1/deepseek-ai-deepseek-r1-nvidia-h200-sxm-dp8-moe-ep8.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/vllm/v0.15.1/deepseek-ai-deepseek-r1-nvidia-h200-sxm-dp16-moe-ep16.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/vllm/v0.15.1/deepseek-ai-deepseek-r1-nvidia-h100-sxm-dp16-moe-ep16.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/vllm/glm5/zai-org-glm-4.7-flash-nvidia-h200-sxm-tp2-moe-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/vllm/glm5/zai-org-glm-4.7-flash-nvidia-h200-sxm-1.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/vllm/glm5/zai-org-glm-4.7-flash-nvidia-h100-sxm-tp4-moe-tp4.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/vllm/glm5/zai-org-glm-4.7-flash-nvidia-h100-sxm-tp2-moe-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/vllm/glm5/zai-org-glm-4.7-flash-nvidia-h100-sxm-1.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen3-vl-8b-instruct-prefill-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen3-vl-8b-instruct-prefill-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen3-vl-8b-instruct-decode-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen3-vl-8b-instruct-decode-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen3-vl-8b-instruct-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen3-vl-8b-instruct-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen3-1.7b-prefill-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen3-1.7b-prefill-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen3-1.7b-decode-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen3-1.7b-decode-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen3-1.7b-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen3-1.7b-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen2.5-1.5b-instruct-prefill-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen2.5-1.5b-instruct-prefill-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen2.5-1.5b-instruct-decode-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen2.5-1.5b-instruct-decode-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen2.5-1.5b-instruct-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen2.5-1.5b-instruct-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen2-0.5b-instruct-prefill-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen2-0.5b-instruct-prefill-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen2-0.5b-instruct-decode-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen2-0.5b-instruct-decode-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen2-0.5b-instruct-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-qwen-qwen2-0.5b-instruct-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-openai-gpt-oss-20b-prefill-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-openai-gpt-oss-20b-prefill-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-openai-gpt-oss-20b-decode-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-openai-gpt-oss-20b-decode-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-openai-gpt-oss-20b-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-openai-gpt-oss-20b-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-mistralai-mistral-7b-instruct-v0.3-prefill-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-mistralai-mistral-7b-instruct-v0.3-prefill-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-mistralai-mistral-7b-instruct-v0.3-decode-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-mistralai-mistral-7b-instruct-v0.3-decode-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-mistralai-mistral-7b-instruct-v0.3-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-mistralai-mistral-7b-instruct-v0.3-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-microsoft-phi-mini-moe-instruct-prefill-amd-mi250-dp2-moe-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-microsoft-phi-mini-moe-instruct-decode-amd-mi250-dp2-moe-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-microsoft-phi-mini-moe-instruct-amd-mi250-dp2-moe-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-meta-llama-llama-3.2-1b-instruct-prefill-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-meta-llama-llama-3.2-1b-instruct-prefill-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-meta-llama-llama-3.2-1b-instruct-decode-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-meta-llama-llama-3.2-1b-instruct-decode-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-meta-llama-llama-3.2-1b-instruct-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-meta-llama-llama-3.2-1b-instruct-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-ibm-granite-granite-3.3-8b-instruct-prefill-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-ibm-granite-granite-3.3-8b-instruct-prefill-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-ibm-granite-granite-3.3-8b-instruct-decode-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-ibm-granite-granite-3.3-8b-instruct-decode-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-ibm-granite-granite-3.3-8b-instruct-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-ibm-granite-granite-3.3-8b-instruct-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-deepseek-ai-deepseek-r1-prefill-amd-mi300x-dp8-moe-ep8.helm.yaml | Set --max-model-len to -1 (PD prefill) |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-deepseek-ai-deepseek-r1-decode-amd-mi300x-dp8-moe-ep8.helm.yaml | Set --max-model-len to -1 (PD decode) |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-deepseek-ai-deepseek-r1-distill-llama-8b-prefill-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-deepseek-ai-deepseek-r1-distill-llama-8b-prefill-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-deepseek-ai-deepseek-r1-distill-llama-8b-decode-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-deepseek-ai-deepseek-r1-distill-llama-8b-decode-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-deepseek-ai-deepseek-r1-distill-llama-8b-amd-mi300x-tp2.helm.yaml | Set --max-model-len to -1 |
| deploy/helm/moai-inference-preset/templates/presets/quickstart/quickstart-vllm-deepseek-ai-deepseek-r1-distill-llama-8b-amd-mi250-tp2.helm.yaml | Set --max-model-len to -1 |
.../quickstart/quickstart-vllm-deepseek-ai-deepseek-r1-prefill-amd-mi300x-dp8-moe-ep8.helm.yaml
Show resolved
Hide resolved
.../quickstart/quickstart-vllm-deepseek-ai-deepseek-r1-prefill-amd-mi300x-dp8-moe-ep8.helm.yaml
Show resolved
Hide resolved
...s/quickstart/quickstart-vllm-deepseek-ai-deepseek-r1-decode-amd-mi300x-dp8-moe-ep8.helm.yaml
Show resolved
Hide resolved
...s/quickstart/quickstart-vllm-deepseek-ai-deepseek-r1-decode-amd-mi300x-dp8-moe-ep8.helm.yaml
Show resolved
Hide resolved
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.This suggestion is invalid because no changes were made to the code.Suggestions cannot be applied while the pull request is closed.Suggestions cannot be applied while viewing a subset of changes.Only one suggestion per line can be applied in a batch.Add this suggestion to a batch that can be applied as a single commit.Applying suggestions on deleted lines is not supported.You must change the existing code in this line in order to create a valid suggestion.Outdated suggestions cannot be applied.This suggestion has been applied or marked resolved.Suggestions cannot be applied from pending reviews.Suggestions cannot be applied on multi-line comments.Suggestions cannot be applied while the pull request is queued to merge.Suggestion cannot be applied right now. Please check back later.
Summary
--max-model-lento-1across all quickstart, vLLM v0.15.1, and GLM5 presets (77 files)deepseek-r1/PD disaggregation presets (retain16384due to memory constraints)Test plan
helm templaterenders without errors🤖 Generated with Claude Code