
[Common, PyTorch] Improve mHC to match DeepSeek's implementation #2953

Draft
kainzhong wants to merge 10 commits into NVIDIA:main from kainzhong:feat/mhc_enhancement

Conversation

kainzhong (Collaborator) commented May 1, 2026

Description

Some enhancements to mHC to better align with DeepSeek's TileLang implementation: https://github.com/deepseek-ai/TileKernels/tree/main/tile_kernels/mhc

Fixes # (issue)

Type of change

  • Documentation change (change only to the documentation, either a fix or new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

  • Allow mhc_fused_projection to accept arguments with mixed dtypes (e.g. x.dtype=bf16 with phi.dtype=fp32), matching DeepSeek's implementation
  • mhc_fused_projection now outputs fp32 regardless of the input dtype, also matching DeepSeek's implementation (see the dtype sketch after this list)
  • Add a fuse_grad_x_acc optimization (defaults to False) that reuses a single grad_x buffer to accumulate the gradient of the initial mHC input x across mhc_fused_expand_combine, mhc_fused_aggregate, and mhc_fused_projection (see the accumulation sketch after this list)
  • Support norm_weight for mhc_fused_projection, equivalent to applying RMSNorm with elementwise_affine=True in the unfused path, i.e. with learnable per-element affine parameters (see the RMSNorm sketch after this list)
  • Support deterministic mode. In addition, when non-split-K outperforms split-K, the kernels choose non-split-K even if determinism is not enforced, based on empirical heuristics (see the split-K sketch after this list)
  • Refactor some kernel code to avoid duplication. I just realized a @triton.jit function can be passed as a tl.constexpr argument and used like a macro inside if branches, since Triton will not compile a branch it knows at compile time cannot be taken (see the constexpr sketch after this list)
  • Fix a bug where the launch grid could exceed CUDA's limit when M is very large and the autotune candidate BLOCK_SIZE_M is small. Such invalid configs are now pruned (see the pruning sketch after this list)
  • (Removed) Add a main_grad optimization for Megatron-LM integration that accumulates the gradients of phi, alpha, and beta (all expected to be torch.nn.Parameter) into main_grad when that attribute exists. Removed: too much complexity, and these parameters are small enough that the optimization yields only a negligible win (see the main_grad sketch after this list)
  • [TODO]: add a fused checkpoint-recompute kernel to match DeepSeek's implementation. This probably needs to be worked out on the Megatron side first, since the integration does not seem straightforward
  • [TODO]: add a fused projection + aggregate kernel (no expand & combine path) for the last mHC layer, which also appears to be used for MTP. See the function learned_output_contract in [dev] [DeepSeek-v4] Part 3: MTP support with mHC and new mHC contract Megatron-LM#4518. That kernel takes only 0.06 ms for fwd+bwd, so a Triton kernel is unlikely to yield a meaningful win
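
A minimal sketch of the dtype contract described above. This shows only the mixed-dtype / fp32-output behavior with illustrative shapes; it is not the actual mHC projection math, which lives in transformer_engine/pytorch/triton/mhc.py:

```python
import torch

def projection_dtype_reference(x: torch.Tensor, phi: torch.Tensor) -> torch.Tensor:
    # x may be bf16 while phi is fp32; upcast so the matmul runs and
    # accumulates in fp32, matching the fused kernel's output contract.
    return x.float() @ phi

x = torch.randn(128, 4096, dtype=torch.bfloat16)
phi = torch.randn(4096, 4, dtype=torch.float32)
out = projection_dtype_reference(x, phi)
assert out.dtype == torch.float32  # fp32 output regardless of x.dtype
```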
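
The accumulation sketch for fuse_grad_x_acc. All function names here are hypothetical stand-ins; in the PR the accumulation happens inside the Triton backward kernels rather than in Python:

```python
import torch

def _stub_bwd(saved: torch.Tensor, out: torch.Tensor) -> None:
    # Stand-in for one fused backward kernel contributing to grad_x;
    # the kernel accumulates in place instead of returning a new tensor.
    out.add_(saved)

def backward_grad_x(x, saved, fuse_grad_x_acc: bool = False):
    # expand_combine, aggregate, and projection backwards, respectively.
    backwards = (_stub_bwd, _stub_bwd, _stub_bwd)
    if fuse_grad_x_acc:
        grad_x = torch.zeros_like(x)
        for bwd in backwards:
            bwd(saved, grad_x)  # all three reuse the same buffer
        return grad_x
    # Default path: one temporary per backward, summed afterwards.
    grads = [torch.zeros_like(x) for _ in backwards]
    for g, bwd in zip(grads, backwards):
        bwd(saved, g)
    return grads[0] + grads[1] + grads[2]
```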
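
The RMSNorm sketch: what norm_weight support corresponds to in the unfused path. The eps value and shapes are illustrative assumptions, and the cross-check uses torch.nn.functional.rms_norm, which needs PyTorch >= 2.4:

```python
import torch

def rmsnorm_reference(x: torch.Tensor, norm_weight: torch.Tensor,
                      eps: float = 1e-6) -> torch.Tensor:
    # RMSNorm with elementwise_affine=True: normalize in fp32, then
    # apply the learnable per-element scale norm_weight.
    x32 = x.float()
    inv_rms = torch.rsqrt(x32.pow(2).mean(dim=-1, keepdim=True) + eps)
    return x32 * inv_rms * norm_weight

x = torch.randn(2, 4096, dtype=torch.bfloat16)
w = torch.rand(4096, dtype=torch.float32)
ref = torch.nn.functional.rms_norm(x.float(), (4096,), weight=w, eps=1e-6)
assert torch.allclose(rmsnorm_reference(x, w), ref, atol=1e-5)
```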
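
The split-K sketch: split-K kernels combine per-slice partial sums with atomic adds, so the floating-point addition order varies run to run; deterministic mode therefore forces the non-split-K path. The threshold below is a made-up placeholder, not the PR's actual heuristic:

```python
def choose_split_k(M: int, N: int, K: int, deterministic: bool) -> int:
    """Pick a SPLIT_K factor; 1 means the deterministic non-split-K path."""
    if deterministic:
        return 1  # no atomics, reproducible reduction order
    # Hypothetical heuristic: split-K only pays off when there are few
    # output tiles to fill the SMs and K is large; otherwise the
    # non-split-K kernel wins anyway.
    return 4 if (M * N <= 128 * 256 and K >= 4096) else 1
```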
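
The constexpr sketch, in a minimal form with illustrative names: a @triton.jit helper passed as a tl.constexpr argument expands like a macro, and a branch gated by a compile-time constant is never compiled when it cannot be taken:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def _square(x):  # shared helper reused across kernel variants
    return x * x

@triton.jit
def _elementwise_kernel(x_ptr, y_ptr, n, FN: tl.constexpr,
                        APPLY_FN: tl.constexpr, BLOCK: tl.constexpr):
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n
    x = tl.load(x_ptr + offs, mask=mask)
    if APPLY_FN:  # compile-time constant: the untaken branch is elided
        x = FN(x)  # FN expands like a macro at compile time
    tl.store(y_ptr + offs, x, mask=mask)

x = torch.randn(1024, device="cuda")
y = torch.empty_like(x)
_elementwise_kernel[(1,)](x, y, x.numel(), FN=_square, APPLY_FN=True, BLOCK=1024)
assert torch.allclose(y, x * x)
```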
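
The pruning sketch for the grid-limit fix. The hook name and signature follow triton.autotune's prune_configs_by convention; the exact integration in mhc.py may differ:

```python
import triton

CUDA_MAX_GRID_X = 2**31 - 1  # hardware limit on gridDim.x

def prune_oversized_grids(configs, named_args, **kwargs):
    # Drop autotune candidates whose 1D launch grid over M would exceed
    # the CUDA grid limit (huge M combined with a small BLOCK_SIZE_M).
    M = named_args["M"]
    return [cfg for cfg in configs
            if triton.cdiv(M, cfg.kwargs["BLOCK_SIZE_M"]) <= CUDA_MAX_GRID_X]

# Hooked up roughly as:
# @triton.autotune(configs=..., key=["M"],
#                  prune_configs_by={"early_config_prune": prune_oversized_grids})
```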
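
And the main_grad sketch, for reference since the feature was removed: Megatron-LM attaches an fp32 main_grad buffer to its parameters, so a backward can accumulate there directly instead of going through .grad. This is a hedged sketch of the pattern, not the removed code:

```python
import torch

def accumulate_param_grad(param: torch.nn.Parameter, grad: torch.Tensor) -> None:
    if hasattr(param, "main_grad"):
        # Megatron-LM path: accumulate straight into the fp32 master grad.
        param.main_grad.add_(grad.float())
    elif param.grad is None:
        param.grad = grad.to(param.dtype)
    else:
        param.grad.add_(grad)
```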

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

kainzhong changed the title from "[Common, PyTorch] Enhancement for mHC to match DeepSeek's Tilelang im…" to "[Common, PyTorch] Improve mHC to match DeepSeek's implementation" on May 1, 2026
kainzhong changed the title from "[Common, PyTorch] Improve mHC to match DeepSeek's implementation" to "[WIP][Common, PyTorch] Improve mHC to match DeepSeek's implementation" on May 1, 2026
kainzhong force-pushed the feat/mhc_enhancement branch from 5b044f6 to d7e1199 on May 5, 2026
pre-commit-ci bot and others added 6 commits on May 5, 2026
kainzhong changed the title from "[WIP][Common, PyTorch] Improve mHC to match DeepSeek's implementation" to "[Common, PyTorch] Improve mHC to match DeepSeek's implementation" on May 6, 2026
kainzhong added 3 commits on May 6, 2026