Set tensor-parallel attributes irrespective of perform_initialization#4084
Merged
FDecaYed merged 3 commits into NVIDIA:main on Apr 3, 2026
Conversation
Contributor
This PR has been automatically converted to draft because all PRs must start as drafts. When you are ready for review, click Ready for Review to begin the review process. See the contribution guide for more details.
Force-pushed 6af361e to 1eab659
jaredcasper approved these changes on Mar 31, 2026
Force-pushed 1eab659 to dc84bc1
Contributor (Author)
/ok to test dc84bc1
Contributor
/ok to test 70def0f
Force-pushed 70def0f to 9fa50eb
ericharper approved these changes on Apr 3, 2026
Cherry-pick of NVIDIA/Megatron-LM@a8bad4b441 (ADLR/megatron-lm!4312). Ensures `set_tensor_model_parallel_attributes` is called on weights even when `perform_initialization=False`, so that downstream code relying on the `tensor_model_parallel`, `partition_dim`, and `partition_stride` attributes works correctly regardless of the initialization path. The experts.py changes from the upstream commit were excluded because the local codebase does not contain the GroupedMLP class they target.
Co-authored-by: yaoyu-33 <yaoyu.094@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Deyu Fu <deyuf@nvidia.com>
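For context, the failure mode this fixes can be illustrated with a small sketch. The checker function below is hypothetical (not the exact Megatron-LM code); it only shows why a weight that never receives the attributes is misclassified by downstream checks that read them with a `getattr` default:

```python
import torch

def param_is_tensor_parallel(param: torch.nn.Parameter) -> bool:
    # Hypothetical downstream check: a missing attribute reads as "not tensor-parallel".
    return getattr(param, "tensor_model_parallel", False)

weight = torch.nn.Parameter(torch.empty(16, 8))
print(param_is_tensor_parallel(weight))   # False: attributes were never set

# With this change, the attributes are set even when initialization is skipped.
weight.tensor_model_parallel = True
weight.partition_dim = 0
weight.partition_stride = 1
print(param_is_tensor_parallel(weight))   # True
```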
Force-pushed 9fa50eb to c8f2838
Contributor (Author)
/ok to test c8f2838
Contributor (Author)
/ok to test 4a57f4e
🔄 Merge queue validation started! You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/23959990923
What does this PR do ?
Cherry-pick of a8bad4b441 (ADLR/megatron-lm!4312). Ensures `set_tensor_model_parallel_attributes` is called on weights even when `perform_initialization=False`, so that downstream code relying on the `tensor_model_parallel`, `partition_dim`, and `partition_stride` attributes works correctly regardless of the initialization path. The experts.py changes from the upstream commit were excluded because the local codebase does not contain the GroupedMLP class they target.
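A minimal sketch of the behavior described above, assuming a helper with the same name as Megatron-LM's `set_tensor_model_parallel_attributes`; the builder function and its arguments are hypothetical, not the library's actual layer code:

```python
import torch

def set_tensor_model_parallel_attributes(tensor, is_parallel, dim, stride):
    # Record how this weight participates in tensor model parallelism.
    setattr(tensor, "tensor_model_parallel", is_parallel)
    setattr(tensor, "partition_dim", dim)
    setattr(tensor, "partition_stride", stride)

def build_column_parallel_weight(out_features, in_features, perform_initialization=True):
    weight = torch.nn.Parameter(torch.empty(out_features, in_features))
    # Behavior described by this PR: set the attributes unconditionally,
    # and gate only the weight initialization on perform_initialization.
    set_tensor_model_parallel_attributes(weight, is_parallel=True, dim=0, stride=1)
    if perform_initialization:
        torch.nn.init.xavier_uniform_(weight)
    return weight
```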
Contribution process
Pre-checks
Code review
Feel free to message or comment @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!
All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.
Step 1: Mark PR as "Ready for Review"
Expert reviewers are assigned based on `.github/CODEOWNERS`. Final Review might get declined if these requirements are not fulfilled.
Step 2: Final Review
For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned. For PRs outside megatron/core, this step is skipped.
Step 3: Approved
Once all required reviewers have approved, the Approved label is applied automatically.
Merge
Any member of mcore-engineers will be able to merge your PR.
For MRs into `dev` branch
The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.