[Common, PyTorch] Improve mHC to match DeepSeek's implementation#2953
Draft
kainzhong wants to merge 10 commits into NVIDIA:main from
Conversation
kainzhong
commented
May 1, 2026
…plementation Signed-off-by: Kaining Zhong <kainingz@nvidia.com>
5b044f6 to d7e1199
Description
Some enhancements to mHC to better align it with DeepSeek's tilelang implementation: https://github.com/deepseek-ai/TileKernels/tree/main/tile_kernels/mhc
Fixes # (issue)
Type of change
Changes
- Allow `mhc_fused_projection` to accept arguments with mixed dtypes (`x.dtype=bf16`, `phi.dtype=fp32`), which matches DeepSeek's implementation.
- `mhc_fused_projection` now outputs fp32 regardless of the input dtype, matching DeepSeek's implementation.
- Add a `fuse_grad_x_acc` optimization (defaults to `False`), which reuses the same `grad_x` buffer to accumulate the gradient of the initial mHC input `x` across `mhc_fused_expand_combine`, `mhc_fused_aggregate`, and `mhc_fused_projection`.
- Add `norm_weight` for `mhc_fused_projection`, which is equivalent to applying RMSNorm in the unfused manner with `elementwise_affine=True`, i.e. with learnable per-element affine parameters for RMSNorm. Removed because it adds too much complexity, and since this parameter is not large the optimization yields only a negligible win.
- Add a `main_grad` optimization for Megatron-LM integration, which accumulates the gradients of `phi`, `alpha`, and `beta` (all expected to be `torch.nn.Parameter`) into `main_grad` if that attribute exists.
- [TODO] Add a checkpoint-recomputation fused kernel to match DeepSeek's implementation. Probably need to figure out how to implement this in Megatron first, since it doesn't seem too straightforward.
- [TODO] Add a fused projection + aggregate-only kernel (no expand & combine path) for the last mHC layer, which also seems to be used for MTP. See function `learned_output_contract` in [dev] [DeepSeek-v4] Part 3: MTP support with mHC and new mHC contract (Megatron-LM#4518). That kernel only takes 0.06 ms for fwd+bwd, so a Triton kernel is unlikely to give a meaningful win.

Checklist:
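To illustrate the dtype contract described above (mixed bf16/fp32 inputs, fp32 output, optional per-element RMSNorm weight), here is a minimal unfused PyTorch sketch. `mhc_projection_reference` and its signature are hypothetical illustrations, not the PR's actual kernel API:

```python
import torch

def mhc_projection_reference(x, phi, norm_weight=None, eps=1e-6):
    """Hypothetical unfused reference for the fused projection's dtype contract.

    x:           activations, typically bf16
    phi:         projection weight kept in fp32 (mixed-dtype inputs allowed)
    norm_weight: optional learnable per-element RMSNorm weight, equivalent
                 to RMSNorm with elementwise_affine=True
    Returns an fp32 tensor regardless of the input dtype.
    """
    x32 = x.float()  # upcast the bf16 input so all math runs in fp32
    # RMSNorm over the last dimension
    rms = x32.pow(2).mean(dim=-1, keepdim=True).add(eps).rsqrt()
    x32 = x32 * rms
    if norm_weight is not None:
        # per-element affine, like nn.RMSNorm(elementwise_affine=True)
        x32 = x32 * norm_weight.float()
    return x32 @ phi.float()  # output stays fp32

x = torch.randn(4, 8, dtype=torch.bfloat16)
phi = torch.randn(8, 3, dtype=torch.float32)
norm_weight = torch.ones(8)
out = mhc_projection_reference(x, phi, norm_weight)
```

A fused kernel would fold the normalization and matmul into one pass; the sketch only pins down the numerics and dtypes it should reproduce.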
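The `main_grad` convention mentioned above can be sketched as follows. `accumulate_to_main_grad` is a hypothetical helper, not the PR's actual function; it only shows the dispatch Megatron-LM-style code performs, assuming Megatron attaches an fp32 `main_grad` buffer to each parameter:

```python
import torch

def accumulate_to_main_grad(params, grads):
    """Hypothetical sketch: if a parameter carries a `main_grad` buffer
    (as in Megatron-LM), accumulate its gradient there; otherwise fall
    back to the usual .grad attribute."""
    for p, g in zip(params, grads):
        if hasattr(p, "main_grad"):
            # accumulate into the (typically fp32) main_grad buffer
            p.main_grad.add_(g.to(p.main_grad.dtype))
        else:
            p.grad = g if p.grad is None else p.grad + g

phi = torch.nn.Parameter(torch.zeros(3))
phi.main_grad = torch.zeros(3, dtype=torch.float32)  # as Megatron-LM would attach
accumulate_to_main_grad([phi], [torch.ones(3)])
```

In the PR itself this accumulation would apply to `phi`, `alpha`, and `beta` inside the fused backward, skipping a separate gradient-copy step.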