
Conversation

@jenchen13 (Contributor) commented Jan 13, 2026

What does this PR do?

Type of change: New feature

Overview:
New Nemotron models use TransformerLayer.forward() instead of MoELayer.forward() for MoE. This breaks our quantization implementation for Nano3, which relied on patching MoELayer.forward() to force tokens to be routed to all experts during calibration.

  • Add a patch for TransformerLayer.forward() that forces tokens to be routed to all experts during PTQ calibration (a sketch follows below)
  • Enable latent MoE modules during Megatron import/export
  • Improve expert-parallel (EP) amax sync

TODO: potentially remove the MoELayer quant config if all future MoEs use TransformerLayer instead?
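As an illustration of the calibration patch described above, here is a minimal sketch of forcing all-expert routing around a layer forward. The `mlp.router.topk` attribute comes from the PR's own snippets; `num_experts` and the helper name are assumptions, not this PR's implementation:

```python
# Minimal sketch (not the PR's implementation): temporarily force the MoE router
# to select every expert so each expert's quantizers observe calibration data.
from contextlib import contextmanager

@contextmanager
def force_all_experts(transformer_layer):
    """Assumes a TransformerLayer whose MLP is an MoE with a top-k router at mlp.router."""
    router = transformer_layer.mlp.router
    original_top_k = router.topk
    router.topk = router.num_experts  # assumed attribute: total number of routed experts
    try:
        yield transformer_layer
    finally:
        router.topk = original_top_k  # always restore real top-k routing

# During calibration, each forward pass would be wrapped in this context so every
# expert's amax buffers get populated.
```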

Usage

# Add a code snippet demonstrating how to use this
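No snippet was filled in; as a hedged illustration, a typical ModelOpt PTQ flow that would exercise this calibration path might look like the following (the config choice and the calibration dataloader are assumptions, not taken from this PR):

```python
import modelopt.torch.quantization as mtq

def forward_loop(model):
    # Run a small calibration set through the model; during these forwards the
    # patched TransformerLayer forces tokens to all experts so every expert is calibrated.
    for batch in calib_dataloader:  # user-provided dataloader (assumed)
        model(batch)

# FP8_DEFAULT_CFG is one of ModelOpt's predefined quantization configs.
model = mtq.quantize(model, mtq.FP8_DEFAULT_CFG, forward_loop)
```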

Testing

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed.
  • Is this change backward compatible?: Yes/No
  • Did you write any new necessary tests?: Yes/No
  • Did you add or update any necessary documentation?: Yes/No
  • Did you update Changelog?: Yes/No

Additional Information

Signed-off-by: jenchen13 <jennifchen@nvidia.com>
copy-pr-bot (bot) commented Jan 13, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

coderabbitai (bot) commented Jan 13, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.



codecov (bot) commented Jan 13, 2026

Codecov Report

❌ Patch coverage is 40.00000% with 3 lines in your changes missing coverage. Please review.
✅ Project coverage is 74.62%. Comparing base (5104513) to head (4471c03).
⚠️ Report is 3 commits behind head on main.

Files with missing lines | Patch % | Lines
modelopt/torch/quantization/model_calib.py | 40.00% | 3 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #768      +/-   ##
==========================================
- Coverage   74.62%   74.62%   -0.01%     
==========================================
  Files         192      192              
  Lines       18989    18992       +3     
==========================================
+ Hits        14171    14172       +1     
- Misses       4818     4820       +2     

☔ View full report in Codecov by Sentry.

jenchen13 changed the title from "Latent MOE support and fix MOE amax sync" to "Latent MOE support and patch TransformerLayer forward for MOE" on Jan 13, 2026
Comment on lines +766 to +768
output = super().forward(hidden_states)
self.router.topk = original_top_k
return output
@realAsma (Contributor) commented Jan 14, 2026

@jenchen13 we should not do this - the outputs computed with all-expert forcing are useless for the next layer

Suggested change
- output = super().forward(hidden_states)
- self.router.topk = original_top_k
- return output
+ super().forward(hidden_states)
+ self.router.topk = original_top_k
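One illustrative reading of this suggestion (an assumption, not the PR's code): the all-expert forward is run only for its quantizer side effects, and the tensor handed to the next layer should come from a normal top-k forward:

```python
# Sketch: calibrate with forced all-expert routing, but feed the next layer the
# output of a real top-k forward so downstream activation statistics stay realistic.
def calibrate_then_forward(layer, hidden_states):
    with force_all_experts(layer):   # helper sketched earlier (assumed)
        layer(hidden_states)         # output intentionally discarded; amax buffers updated
    return layer(hidden_states)      # normal routing produces the output that propagates
```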

Comment on lines +98 to +105
# Step 1: Sync amax across local experts in a SequentialMLP
for name, module in model.named_modules():
if hasattr(module, "sync_moe_local_experts_amax"):
module.sync_moe_local_experts_amax()

# TODO just for testing
if "experts" in name and "weight_quantizer" in name:
assert child.amax is not None
A Contributor commented:

Can we move this before the distributed sync check? This is not doing anything specific to distributed sync.
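For reference, a minimal sketch of what amax synchronization across experts typically involves: take the max over each expert's quantizer amax and, for expert parallelism, max-reduce across EP ranks. The function below is an assumption for illustration, not the PR's sync_moe_local_experts_amax:

```python
import torch
import torch.distributed as dist

def sync_expert_amax(quantizers, ep_group=None):
    """Unify amax across a group of expert quantizers (assumes matching amax shapes)."""
    amaxes = [q.amax for q in quantizers if getattr(q, "amax", None) is not None]
    if not amaxes:
        return
    unified = torch.stack([a.detach().float() for a in amaxes]).amax(dim=0)
    if ep_group is not None and dist.is_available() and dist.is_initialized():
        # Expert-parallel ranks hold different experts; max-reduce so scales agree.
        dist.all_reduce(unified, op=dist.ReduceOp.MAX, group=ep_group)
    for q in quantizers:
        if getattr(q, "amax", None) is not None:
            q.amax.copy_(unified.to(q.amax.dtype))
```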

Comment on lines +783 to +785
output = super()._forward_mlp_moe_preprocess(hidden_states)
self.mlp.router.topk = original_top_k
return output
