
[Dev] Skip routed expert padding for graph-safe MoE #4071

Open

zhongbozhu wants to merge 1 commit into NVIDIA:dev from zhongbozhu:fix_te_fused_group_mlp_padding

Conversation

@zhongbozhu
Contributor

What does this PR do?

This PR is a follow-up fix to #3890, which introduced the TE fused group MLP layer for graph-safe MoE. However, padding as a stand-alone operation can also be graph-unsafe if it is implemented by reading problem shapes on the host, which is the case for router padding and the explicit Fp8Padding module.
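
For context, a minimal illustration (with hypothetical tensor names, not the actual Megatron-LM code) of why padding that reads problem shapes on the host cannot run under CUDA graph capture:

```python
import torch

# Hypothetical per-expert token counts produced by the router, resident on the GPU.
tokens_per_expert = torch.tensor([5, 12, 0, 7], device="cuda")

graph = torch.cuda.CUDAGraph()
try:
    with torch.cuda.graph(graph):
        # Host-side padding needs these counts as Python ints to compute output
        # shapes, but .tolist() forces a device-to-host sync, which is not
        # permitted while the stream is being captured.
        counts = tokens_per_expert.tolist()
except RuntimeError as err:
    print(f"graph capture failed: {err}")
```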

HybridEp can fuse the padding with the local permute, which runs entirely on-device and is therefore CUDA-graph safe. This PR checks whether padding has already been applied so that the graph-unsafe padding can be safely skipped; it will be used together with the paged stashing PR #2690.
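
A minimal sketch of the skip, assuming a hypothetical `already_padded` flag forwarded from the dispatcher when HybridEp has fused the padding with the local permute (actual flag and module names in Megatron-LM may differ):

```python
import torch
import torch.nn.functional as F


def maybe_pad_routed_tokens(
    hidden_states: torch.Tensor,      # [num_tokens, hidden_size]
    tokens_per_expert: torch.Tensor,  # [num_experts], resident on the GPU
    already_padded: bool,             # True when padding was fused with the on-device permute
    pad_multiple: int = 16,
):
    """Pad each expert's token group to a multiple of `pad_multiple`,
    unless padding has already been applied on-device upstream."""
    if already_padded:
        # HybridEp already padded during the local permute; skip the
        # graph-unsafe host-side path entirely.
        return hidden_states, tokens_per_expert

    # Graph-unsafe path: .tolist() reads the problem shapes on the host,
    # which synchronizes with the device.
    counts = tokens_per_expert.tolist()
    padded_counts = [(c + pad_multiple - 1) // pad_multiple * pad_multiple for c in counts]

    chunks = torch.split(hidden_states, counts, dim=0)
    padded_chunks = [
        F.pad(chunk, (0, 0, 0, target - chunk.shape[0]))
        for chunk, target in zip(chunks, padded_counts)
    ]
    return (
        torch.cat(padded_chunks, dim=0),
        torch.tensor(padded_counts, device=tokens_per_expert.device),
    )
```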

PR to main: TBD (waiting on #3971 to be merged to main)

⚠️ For major changes (either in lines of code or in impact), please make sure to first share a design doc with the team. If you're unsure of the best way to do so, contact the @mcore-oncall.

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or mention the @mcore-oncall in a comment to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@copy-pr-bot

copy-pr-bot bot commented Mar 31, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@zhongbozhu zhongbozhu force-pushed the fix_te_fused_group_mlp_padding branch from 3d0fd34 to a10b0cc on March 31, 2026 04:59
Signed-off-by: Zhongbo Zhu <zhongboz@nvidia.com>
@zhongbozhu zhongbozhu marked this pull request as ready for review March 31, 2026 05:00
@zhongbozhu zhongbozhu requested review from a team as code owners March 31, 2026 05:00
@zhongbozhu zhongbozhu mentioned this pull request Mar 31, 2026
@buptzyb
Contributor

buptzyb commented Mar 31, 2026

LGTM!

@yaox12 yaox12 enabled auto-merge March 31, 2026 10:00
@yaox12
Member

yaox12 commented Mar 31, 2026

/ok to test a10b0cc

@svcnvidia-nemo-ci svcnvidia-nemo-ci added this to the Core 0.16 milestone Mar 31, 2026
@yaox12 yaox12 added this pull request to the merge queue Mar 31, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/23793449235

@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Mar 31, 2026
4 participants