
[https://nvbugs/6055474][test] Fix RTX-6000 with wrong moe backend#12886

Merged
yufeiwu-nv merged 5 commits into NVIDIA:main from yufeiwu-nv:fix_RTX
Apr 10, 2026

Conversation

@yufeiwu-nv
Collaborator

@yufeiwu-nv yufeiwu-nv commented Apr 9, 2026

Summary by CodeRabbit

  • Tests
    • Removed a specific test configuration for model performance testing, reducing the scope of coverage for certain hardware and model configurations.

Description

No documentation indicates that Qwen3 should use the TRTLLM MoE backend, so the explicit setting is removed. Forcing the TRTLLM MoE backend causes a runtime issue on RTX-6000.

The auto backend selection detects TRTLLM correctly where it is supported, so there is no need to specify it explicitly.
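The rationale above can be sketched as follows. This is a hypothetical illustration, not the actual TensorRT-LLM selection code; the function name, the `"AUTO"`/`"CUTLASS"` fallback, and the capability flag are all assumptions made for the example.

```python
# Hypothetical sketch of why an explicit backend setting is unnecessary:
# with the backend left on "AUTO", the framework can pick TRTLLM itself
# on GPUs that support it, and fall back elsewhere. Names are assumed.
def select_moe_backend(requested: str, gpu_supports_trtllm: bool) -> str:
    """Resolve the MoE backend, honoring explicit requests."""
    if requested != "AUTO":
        # An explicit request is honored even on unsupported GPUs,
        # which is how forcing TRTLLM could break RTX-6000 runs.
        return requested
    return "TRTLLM" if gpu_supports_trtllm else "CUTLASS"

print(select_moe_backend("AUTO", True))     # TRTLLM
print(select_moe_backend("AUTO", False))    # CUTLASS
print(select_moe_backend("TRTLLM", False))  # TRTLLM (forced)
```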

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

Signed-off-by: yufeiwu-nv <230315618+yufeiwu-nv@users.noreply.github.com>
@yufeiwu-nv yufeiwu-nv requested a review from a team as a code owner April 9, 2026 08:12
@yufeiwu-nv yufeiwu-nv changed the title Fix rtx with wrong moe backend [][test] Fix rtx with wrong moe backend Apr 9, 2026
@yufeiwu-nv yufeiwu-nv changed the title [][test] Fix rtx with wrong moe backend [https://nvbugs/6055474][test] Fix rtx with wrong moe backend Apr 9, 2026
@yufeiwu-nv yufeiwu-nv changed the title [https://nvbugs/6055474][test] Fix rtx with wrong moe backend [https://nvbugs/6055474][test] Fix RTX-6000 with wrong moe backend Apr 9, 2026
@coderabbitai
Contributor

coderabbitai bot commented Apr 9, 2026

📝 Walkthrough

Walkthrough

A pattern-specific configuration block for Qwen3 235B A22B FP4 on B200 with MoE backend was removed from the test configuration file. This deletion removes settings for enable_attention_dp and moe_config.backend that previously applied to that model pattern.

Changes

Cohort / File(s): Test Configuration — tests/integration/defs/perf/pytorch_model_config.py
Summary: Removed the 12-line configuration block for the Qwen3 235B A22B FP4 model pattern under the 8-GPU/8-expert setup, eliminating the enable_attention_dp: False and TRTLLM backend overrides for that configuration.
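For illustration, the kind of pattern-keyed override block described above could look like the sketch below. This is not the actual repo code; the pattern key and dict shape are assumptions based on the summary, which names only `enable_attention_dp` and `moe_config.backend`.

```python
# Hypothetical sketch (names assumed) of the kind of per-pattern
# override block this PR removed from pytorch_model_config.py.
removed_override = {
    "qwen3_235b_a22b_fp4": {            # model pattern key (assumed name)
        "enable_attention_dp": False,   # override removed by this PR
        "moe_config": {
            "backend": "TRTLLM",        # forced backend; broke RTX-6000 runs
        },
    },
}

def effective_moe_backend(overrides: dict, pattern: str) -> str:
    # With the override gone, the backend falls back to "AUTO",
    # letting the framework detect TRTLLM where it is supported.
    cfg = overrides.get(pattern, {})
    return cfg.get("moe_config", {}).get("backend", "AUTO")

print(effective_moe_backend(removed_override, "qwen3_235b_a22b_fp4"))  # TRTLLM (before)
print(effective_moe_backend({}, "qwen3_235b_a22b_fp4"))                # AUTO (after)
```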

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~5 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Description check — ⚠️ Warning: PR description is incomplete and lacks clarity; the PR title format is missing, the technical explanation is vague, and the Test Coverage section is empty. Resolution: add a proper PR title following the [type] format, clarify the technical rationale for removing the Qwen3 config, and specify which tests validate this change.

✅ Passed checks (2 passed)

  • Title check — ✅ Passed: the title accurately identifies the main change: removing an incorrect MoE backend configuration for the RTX-6000/Qwen3 model.
  • Docstring Coverage — ✅ Passed: no functions found in the changed files to evaluate docstring coverage; skipping the docstring coverage check.



Comment @coderabbitai help to get the list of available commands and usage tips.

Signed-off-by: yufeiwu-nv <230315618+yufeiwu-nv@users.noreply.github.com>
@yufeiwu-nv
Collaborator Author

/bot skip --comment "only test list modify"

@tensorrt-cicd
Collaborator

PR_Github #42630 [ skip ] triggered by Bot. Commit: 5d28b06 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #42630 [ skip ] completed with state SUCCESS. Commit: 5d28b06
Skipping testing for commit 5d28b06

Link to invocation

@yufeiwu-nv yufeiwu-nv merged commit d71e880 into NVIDIA:main Apr 10, 2026
5 checks passed



3 participants