Latent MOE support and patch TransformerLayer forward for MOE #768
base: main
Conversation
Signed-off-by: jenchen13 <jennifchen@nvidia.com>
Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.
CodeRabbit: Review skipped (draft pull request detected).
Codecov Report: ❌ Patch coverage is
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #768      +/-   ##
==========================================
- Coverage   74.62%   74.62%   -0.01%
==========================================
  Files         192      192
  Lines       18989    18992       +3
==========================================
+ Hits        14171    14172       +1
- Misses       4818     4820       +2
output = super().forward(hidden_states)
self.router.topk = original_top_k
return output
@jenchen13 we should not do this. The output computed with all-expert forcing is useless for the next layer.
Suggested change:
-  output = super().forward(hidden_states)
-  self.router.topk = original_top_k
-  return output
+  super().forward(hidden_states)
+  self.router.topk = original_top_k
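For illustration only (a sketch, not the change in this PR, assuming the surrounding class exposes `self.router.topk` and a `config.num_moe_experts` value as in Megatron-style MoE layers): the forced pass can be used purely to populate quantizer statistics, with a second pass at the original top-k providing the output that downstream layers actually consume.

```python
def forward(self, hidden_states):
    # Pass 1: force every token to every expert so all expert quantizers
    # collect amax statistics during calibration; discard this output.
    original_top_k = self.router.topk
    self.router.topk = self.config.num_moe_experts  # assumed attribute name
    super().forward(hidden_states)

    # Pass 2: restore the real top-k and recompute, so the next layer
    # receives activations produced by normal routing.
    self.router.topk = original_top_k
    return super().forward(hidden_states)
```

The suggested change above takes the simpler route of not returning the forced output at all; the double-forward variant trades extra calibration compute for keeping the layer's output contract intact.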
# Step 1: Sync amax across local experts in a SequentialMLP
for name, module in model.named_modules():
    if hasattr(module, "sync_moe_local_experts_amax"):
        module.sync_moe_local_experts_amax()

# TODO just for testing
if "experts" in name and "weight_quantizer" in name:
    assert child.amax is not None
Can we move this before the distributed sync check? This is not doing anything specific to distributed sync.
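For context, a rough sketch of what such an amax sync could look like, assuming a Megatron-style `SequentialMLP` with a `local_experts` module list and modelopt-style `weight_quantizer` submodules carrying an `amax` tensor (illustrative only, not the actual `sync_moe_local_experts_amax` implementation):

```python
from collections import defaultdict

import torch

def sync_moe_local_experts_amax(sequential_mlp):
    # Group corresponding weight quantizers across experts by their
    # relative module name (e.g. "linear_fc1.weight_quantizer").
    grouped = defaultdict(list)
    for expert in sequential_mlp.local_experts:
        for name, module in expert.named_modules():
            if name.endswith("weight_quantizer") and getattr(module, "amax", None) is not None:
                grouped[name].append(module)

    # For each quantizer position, take the elementwise max across experts
    # and write it back, so all local experts share the same scale.
    for quantizers in grouped.values():
        merged = quantizers[0].amax.detach().clone()
        for q in quantizers[1:]:
            merged = torch.maximum(merged, q.amax)
        for q in quantizers:
            q.amax = merged.clone()  # assumes amax is settable on the quantizer
```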
output = super()._forward_mlp_moe_preprocess(hidden_states)
self.mlp.router.topk = original_top_k
return output
What does this PR do?
Type of change: New feature
Overview:
New nemotron models use `TransformerLayer.forward()` instead of `MoELayer.forward()` for MOE. This is a breaking change to our quantization implementation for Nano3, which relied on patching `MoELayer.forward()` to force tokens to be routed to all experts during calibration. This PR patches `TransformerLayer.forward()` so that tokens are forced to be routed to all experts during PTQ calibration. TODO: potentially remove the MoELayer quant config if all future MOEs use TransformerLayer instead?
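As a rough sketch of the forcing idea at the `TransformerLayer` level (attribute names such as `mlp.router.topk` and `config.num_moe_experts` are assumed from Megatron-style modules, not taken from this PR):

```python
from contextlib import contextmanager

@contextmanager
def force_all_experts(layer):
    # Temporarily make the layer's MoE router dispatch every token to every
    # expert so that all expert quantizers see calibration data, then
    # restore the original top-k routing.
    router = layer.mlp.router
    original_top_k = router.topk
    router.topk = layer.config.num_moe_experts
    try:
        yield layer
    finally:
        router.topk = original_top_k
```

Running each MoE `TransformerLayer` inside such a context during the calibration forward loop is conceptually what patching `TransformerLayer.forward()` accomplishes here.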
Usage
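A possible usage sketch, assuming the standard modelopt PTQ entry point (`mtq.quantize` with a calibration forward loop); the model, config choice, and `calib_dataloader` are placeholders:

```python
import modelopt.torch.quantization as mtq

def forward_loop(model):
    # Run a few calibration batches; with this PR, MoE layers force tokens
    # to all experts during this loop so every expert's quantizers collect
    # amax statistics.
    for batch in calib_dataloader:  # placeholder dataloader
        model(**batch)

model = mtq.quantize(model, mtq.FP8_DEFAULT_CFG, forward_loop=forward_loop)
```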
Testing
Before your PR is "Ready for review"
Additional Information