Fix/moe_sm110 (to be tested) #2183
Conversation
Summary of Changes (Gemini Code Assist): This pull request addresses a functional regression in the Mixture-of-Experts (MoE) CUTLASS backend on SM110, which previously failed to find any viable kernel configurations. The fix adjusts the kernel dispatch logic and relaxes a cluster-shape constraint specifically for SM110, ensuring the MoE kernels are selected correctly. The regression stems from recent updates targeting newer architectures that inadvertently affected SM110.
Code Review
This pull request addresses a functional regression for SM110 in the MoE CUTLASS backend by adjusting the dispatch logic. The changes correctly move SM110 to a different dispatch path, aligning it with SM90 and SM120+ instead of SM100/103. A corresponding change is made to the tile shape validation to remove a failing constraint for SM110.
While the changes appear to correctly fix the issue, the patch to the validation logic reveals a structural inconsistency between how architectures are grouped for dispatch versus validation. I've left a comment with a suggestion to address this for better long-term maintainability.
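For context before the inline comment below, here is a minimal, compilable sketch of the compile-time routing pattern being described. The tag structs and helper names (`Sm100`, `Sm110`, `dispatchViaSm100Path`, `dispatchViaDefaultPath`, `dispatchMoeGemm`) are illustrative stand-ins, not the actual FlashInfer dispatch code:

```cpp
// Illustrative stand-ins only; these are not the real FlashInfer types or
// dispatch helpers. Compile with -std=c++17.
#include <cstdio>

struct Sm100 { static constexpr int kMinComputeCapability = 100; };
struct Sm110 { static constexpr int kMinComputeCapability = 110; };

template <typename Arch>
void dispatchViaSm100Path() {
  std::printf("SM100/103 path, cc=%d\n", Arch::kMinComputeCapability);
}

template <typename Arch>
void dispatchViaDefaultPath() {
  std::printf("SM90/110/120+ path, cc=%d\n", Arch::kMinComputeCapability);
}

// Compile-time routing: SM100/103 keep their specialized path, while SM110
// falls through to the path shared with SM90 and SM120+.
template <typename Arch>
void dispatchMoeGemm() {
  constexpr int cc = Arch::kMinComputeCapability;
  if constexpr (cc == 100 || cc == 103) {
    dispatchViaSm100Path<Arch>();
  } else {
    dispatchViaDefaultPath<Arch>();
  }
}

int main() {
  dispatchMoeGemm<Sm100>();  // takes the SM100/103 path
  dispatchMoeGemm<Sm110>();  // takes the default path, per this PR
  return 0;
}
```

The point of the `if constexpr` split is that only the listed compute capabilities instantiate the SM100-specific path; every other architecture, including SM110 after this PR, shares the default instantiation.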
```cpp
if constexpr (Arch::kMinComputeCapability != 110) {
  // We use a runtime cluster shape for SM100, so we only support 1x1x1 and 2x1x1 cluster shapes.
  if (cute::size<0>(ClusterShape{}) > 2 || cute::size<1>(ClusterShape{}) != 1 ||
      cute::size<2>(ClusterShape{}) != 1) {
    return false;
  }
}
```
While this change fixes the regression for SM110, special-casing it within are_tile_shapes_supported_sm100 highlights a structural inconsistency that could affect maintainability.
The changes in dispatchMoeGemmFinalDispatchTmaWarpSpecialized correctly group SM110 with SM90 and SM120+, separating it from the SM100/103 path. However, the validation logic in the calling function are_tile_shapes_supported still groups SM110 with SM100/103, which necessitates this patch.
This discrepancy makes the code harder to reason about, as the logic for SM110 is fragmented. For better long-term code health, the validation paths in are_tile_shapes_supported should be refactored to align with the dispatch paths. I recommend creating a follow-up technical debt issue to address this.
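As a sketch of the refactor direction suggested here: a single shared predicate that both dispatch and validation consult would keep the SM110 grouping defined in exactly one place. The helper `uses_runtime_cluster_shape` below is hypothetical (it does not exist in the codebase), and the snippet assumes CUTLASS's CuTe headers are available:

```cpp
// Assumes CUTLASS's CuTe headers; uses_runtime_cluster_shape is a
// hypothetical helper, not existing FlashInfer code.
#include <cute/layout.hpp>

// Hypothetical architecture tag mirroring the kMinComputeCapability convention.
struct Sm100 { static constexpr int kMinComputeCapability = 100; };

// Single source of truth for the architecture grouping, shared by both
// the dispatch and the validation paths.
template <typename Arch>
constexpr bool uses_runtime_cluster_shape() {
  constexpr int cc = Arch::kMinComputeCapability;
  return cc == 100 || cc == 103;  // only SM100/103 take the runtime-cluster path
}

template <typename Arch, typename ClusterShape>
constexpr bool are_tile_shapes_supported() {
  if constexpr (uses_runtime_cluster_shape<Arch>()) {
    // Runtime cluster shape: only 1x1x1 and 2x1x1 are supported.
    return cute::size<0>(ClusterShape{}) <= 2 && cute::size<1>(ClusterShape{}) == 1 &&
           cute::size<2>(ClusterShape{}) == 1;
  }
  return true;  // SM90 / SM110 / SM120+ impose no runtime-cluster constraint here
}

// Example: a 2x1x1 cluster passes on SM100, a 4x1x1 cluster does not.
static_assert(are_tile_shapes_supported<Sm100, cute::Shape<cute::_2, cute::_1, cute::_1>>());
static_assert(!are_tile_shapes_supported<Sm100, cute::Shape<cute::_4, cute::_1, cute::_1>>());
```

With this shape, regrouping SM110 is a one-line change to the predicate, and the validation no longer needs the `!= 110` special case shown in the diff above.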
📌 Description
Fix SM110 MoE (CUTLASS backend) functional regression: no viable configs.
The dispatch logic change and the extra cluster constraint came from #2020 / #1925, which added support for new SM103/SM120 features. That logic has not been tested on SM110 due to a current lack of CI resources.
(to be tested)
🔍 Related Issues
🚀 Pull Request Checklist
Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.
✅ Pre-commit Checks
- I have installed pre-commit by running pip install pre-commit (or used your preferred method).
- I have installed the hooks with pre-commit install.
- I have run the hooks with pre-commit run --all-files and fixed any reported issues.

🧪 Tests

- Tests have been added or updated as needed.
- All tests are passing (unittest, etc.).

Reviewer Notes