
Set tensor-parallel attributes irrespective of perform_initialization#4084

Merged
FDecaYed merged 3 commits into NVIDIA:main from ilml:cherry-pick-tp-attrs-without-init
Apr 3, 2026

Conversation

@ilml
Contributor

@ilml ilml commented Mar 31, 2026

Cherry-pick of a8bad4b441 (ADLR/megatron-lm!4312).

Ensures set_tensor_model_parallel_attributes is called on weights even when perform_initialization=False, so that downstream code relying on tensor_model_parallel, partition_dim, and partition_stride attributes works correctly regardless of the initialization path.

The experts.py changes from the upstream commit were excluded because the local codebase does not contain the GroupedMLP class they target.
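The fix can be sketched as follows. This is an illustrative stand-in, not Megatron-LM's actual code: the class and function bodies here are simplified stubs, and only the attribute names (`tensor_model_parallel`, `partition_dim`, `partition_stride`) and the helper name `set_tensor_model_parallel_attributes` mirror the real codebase. The point is that the attribute-setting call moves outside the `perform_initialization` branch, so downstream consumers see the attributes on either path.

```python
class FakeParam:
    """Minimal stand-in for a torch.nn.Parameter (illustrative only)."""
    pass


def set_tensor_model_parallel_attributes(tensor, is_parallel, dim, stride):
    # Mirrors the attribute names that downstream Megatron code inspects.
    tensor.tensor_model_parallel = is_parallel
    tensor.partition_dim = dim
    tensor.partition_stride = stride


def build_column_parallel_weight(perform_initialization=True):
    weight = FakeParam()
    # Before the fix, this call lived inside the initialization branch,
    # so weights built with perform_initialization=False lacked the
    # attributes. After the fix it runs unconditionally.
    set_tensor_model_parallel_attributes(weight, is_parallel=True, dim=0, stride=1)
    if perform_initialization:
        pass  # weight values would be filled in here on the init path
    return weight
```

With this shape, `build_column_parallel_weight(perform_initialization=False)` still returns a weight carrying all three attributes, which is the behavior the cherry-pick restores.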

What does this PR do?

⚠️ For major changes (either in lines of code or in its impact), please make sure to first share a design doc with the team. If you're unsure what's the best way to do so, contact the @mcore-oncall.

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code Typing guidelines
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or mention @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@copy-pr-bot

copy-pr-bot bot commented Mar 31, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@ilml ilml requested review from a team as code owners March 31, 2026 21:25
@svcnvidia-nemo-ci svcnvidia-nemo-ci marked this pull request as draft March 31, 2026 21:25
@github-actions
Contributor

This PR has been automatically converted to draft because all PRs must start as drafts.

When you are ready for review, click Ready for Review to begin the review process. This will:

  1. Add the oncall reviewer (optional reviewer)
  2. Add required review teams based on your changes

See the contribution guide for more details.

@ilml ilml marked this pull request as ready for review March 31, 2026 21:27
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team March 31, 2026 21:27
@svcnvidia-nemo-ci svcnvidia-nemo-ci added the Final Review PR is in the "final review" stage label Mar 31, 2026
@ilml ilml force-pushed the cherry-pick-tp-attrs-without-init branch from 6af361e to 1eab659 Compare March 31, 2026 21:28
@ilml ilml force-pushed the cherry-pick-tp-attrs-without-init branch from 1eab659 to dc84bc1 Compare March 31, 2026 22:48
@ilml
Contributor Author

ilml commented Mar 31, 2026

/ok to test dc84bc1

Contributor

@FDecaYed FDecaYed left a comment


LGTM. fixing CR header

@FDecaYed FDecaYed enabled auto-merge April 3, 2026 06:53
@FDecaYed
Contributor

FDecaYed commented Apr 3, 2026

/ok to test 70def0f

@FDecaYed FDecaYed removed the request for review from a team April 3, 2026 07:00
@ilml ilml force-pushed the cherry-pick-tp-attrs-without-init branch from 70def0f to 9fa50eb Compare April 3, 2026 17:40
@svcnvidia-nemo-ci svcnvidia-nemo-ci added Approved All necessary approvals have been made and removed Final Review PR is in the "final review" stage labels Apr 3, 2026
Tom Long and others added 2 commits April 3, 2026 11:57
Cherry-pick of NVIDIA/Megatron-LM@a8bad4b441 (ADLR/megatron-lm!4312).

Ensures `set_tensor_model_parallel_attributes` is called on weights even
when `perform_initialization=False`, so that downstream code relying on
`tensor_model_parallel`, `partition_dim`, and `partition_stride` attributes
works correctly regardless of the initialization path.

The experts.py changes from the upstream commit were excluded because
the local codebase does not contain the GroupedMLP class they target.

Co-authored-by: yaoyu-33 <yaoyu.094@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Deyu Fu <deyuf@nvidia.com>
@ilml ilml force-pushed the cherry-pick-tp-attrs-without-init branch from 9fa50eb to c8f2838 Compare April 3, 2026 18:57
@ilml
Contributor Author

ilml commented Apr 3, 2026

/ok to test c8f2838

@ilml
Contributor Author

ilml commented Apr 3, 2026

/ok to test 4a57f4e

@FDecaYed FDecaYed added this pull request to the merge queue Apr 3, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/23959990923

Merged via the queue into NVIDIA:main with commit a72c027 Apr 3, 2026
61 of 63 checks passed

Labels

Approved — All necessary approvals have been made
complexity: low

Projects

None yet

Development

Successfully merging this pull request may close these issues.

5 participants