
Fix THD RoPE offsets for local packed shards #4075

Open

kaimo455 wants to merge 3 commits into NVIDIA:main from kaimo455:codex/rope-thd-offsets

Conversation


@kaimo455 kaimo455 commented Mar 31, 2026

What does this PR do ?

Preserves THD RoPE positions for local packed shards by honoring caller-provided offsets when `freqs` only covers positions up to the maximum sequence length.

It also adds unit regression tests covering the local packed-shard offset path, the fused THD fallback path when offsets are provided, and the conversion of local offset scalars to Python int values before slicing.
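
For illustration, here is a minimal sketch of the offset-aware slicing described above. This is not the actual `rope_utils.py` implementation; the function name, argument shapes, and the `cu_seqlens`/`offsets` conventions are assumptions made for the example.

```python
import torch


def apply_thd_rope_offsets(freqs, cu_seqlens, offsets=None):
    """Slice per-sequence RoPE frequencies for a THD packed batch.

    freqs:      [max_seq_len, ...] rotary frequencies covering only positions
                up to the maximum sequence length.
    cu_seqlens: [num_seqs + 1] cumulative sequence lengths of the packed batch.
    offsets:    optional [num_seqs] global starting position of each local shard.
    """
    seq_lens = (cu_seqlens[1:] - cu_seqlens[:-1]).tolist()
    if offsets is None:
        # Without offsets, every packed sequence is assumed to start at position 0.
        return torch.cat([freqs[:length] for length in seq_lens], dim=0)
    # Honor caller-provided offsets; convert 0-dim tensors to Python ints
    # before slicing so indexing behaves identically for CPU and GPU tensors.
    starts = [int(o.item()) if torch.is_tensor(o) else int(o) for o in offsets]
    return torch.cat(
        [freqs[start : start + length] for start, length in zip(starts, seq_lens)],
        dim=0,
    )
```

For example, a local shard of 512 tokens that begins at global position 1024 should be rotated with `freqs[1024:1536]`, not `freqs[0:512]`; ignoring the offset silently shifts every position in that shard back to the start of the sequence.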

Validation run for this PR:

  • python -m py_compile megatron/core/models/common/embeddings/rope_utils.py tests/unit_tests/transformer/test_rope.py
  • PYTHONPATH=. uv run --no-project --isolated --with torch --with numpy --with packaging --with click --with requests --with pyyaml --with pytest==8.3.5 python -m pytest tests/unit_tests/transformer/test_rope.py -k 'python_ints or offsets or fusion_when_offsets' -q
  • PYTHONPATH=. uv run --no-project --isolated --with black==24.4.2 --with isort==5.13.2 --with pylint==3.2.6 --with 'ruff~=0.9.0' --with mypy --with torch --with numpy --with packaging --with click --with requests --with pyyaml --with pytest==8.3.5 bash tools/autoformat.sh

⚠️ For major changes (either in lines of code or in impact), please make sure to first share a design doc with the team. If you're unsure of the best way to do so, contact @mcore-oncall.

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see the Typing guidelines)
  • I have added relevant documentation
  • I have run tools/autoformat.sh on my PR

Code review

Feel free to message @mcore-oncall or tag them in a comment to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.


copy-pr-bot bot commented Mar 31, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@kaimo455 kaimo455 changed the title from "[codex] Fix THD RoPE offsets for local packed shards" to "Fix THD RoPE offsets for local packed shards" Mar 31, 2026
@kaimo455 kaimo455 marked this pull request as ready for review March 31, 2026 10:05
@kaimo455 kaimo455 requested review from a team as code owners March 31, 2026 10:05
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team March 31, 2026 10:05
@svcnvidia-nemo-ci svcnvidia-nemo-ci added the Final Review PR is in the "final review" stage label Mar 31, 2026
@janEbert (Contributor)

/claude review

@claude claude bot left a comment

Clean PR overall — one potential bug flagged inline: iterating over the offsets tensor passes 0-dim tensors to _get_thd_freqs_on_this_cp_rank which expects int. Should call .item() like CASE 1 does, especially to avoid issues with GPU tensors.
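
For context, a minimal reproduction of the flagged pattern (the tensor values are hypothetical, not the PR's actual code):

```python
import torch

offsets = torch.tensor([0, 128, 256])

# Iterating over a 1-D tensor yields 0-dim tensors, not Python ints.
for offset in offsets:
    start = offset.item()  # convert to int, as CASE 1 does, before slicing
    # ... pass `start` (a Python int) to _get_thd_freqs_on_this_cp_rank, not `offset`
```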

@janEbert (Contributor)

/ok to test ccbd017
